MultiRotorResearch - Live Streaming (Android) & Wildlife Detection (Computer Vision & AI)
Project description
Martin Tomov:
My research focuses on computer vision (object detection) in the wildlife domain, with an emphasis on synthetic data and model training. MRR's drones required an algorithm to detect wildlife, but due to the lack of data provided by stakeholders and the absence of quality public datasets, the project shifted toward generating image data synthetically.
Jarno Weemen:
My main research question was how to implement live streaming on the MRR website, so that the drone's live view can be watched while it is flying.
Context
Martin Tomov:
I'm working with MRR to enhance wildlife detection using drones and computer vision. Due to a lack of quality datasets, the project focuses on generating synthetic data tailored to wildlife scenarios. I have trained multiple LoRA models on species like fawns and lapwing birds, optimizing detection algorithms for drone-specific use cases. This approach addresses data limitations and supports innovative applications in wildlife conservation.
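As a hedged illustration of what such a pipeline can look like (the report contains no code, so the base model, LoRA weight path, and prompt below are assumptions rather than the project's actual setup), a LoRA adapter fine-tuned on one species can be loaded into a Stable Diffusion pipeline to generate synthetic training images:

```python
# Minimal sketch: generating synthetic wildlife images with a LoRA adapter.
# The base model, LoRA weight path, and prompt are illustrative assumptions,
# not the exact setup used in this project.
import os
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed base model
    torch_dtype=torch.float16,
).to("cuda")

# Load a LoRA adapter fine-tuned on one species (hypothetical path).
pipe.load_lora_weights("lora/fawn-aerial")

# Generate aerial-style images of the target species.
os.makedirs("synthetic", exist_ok=True)
prompt = "aerial drone photo of a fawn hiding in tall grass, top-down view"
for i in range(8):
    image = pipe(prompt, num_inference_steps=30).images[0]
    image.save(f"synthetic/fawn_{i:03d}.png")
```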
Jarno Weemen:
The goal of the project is to get live streaming working during drone flights. Initially this is so an operator can manually stop a flight when an object gets too close and the drone's software fails to spot it. Later, the same live stream can be used for real-time AI detection, so the targets you want to see are spotted immediately.
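As a hedged sketch of that later stage (nothing here comes from the project's codebase; the stream URL and model weights are assumptions), a detector can be run frame by frame over the live stream so targets are flagged as soon as they appear:

```python
# Minimal sketch: real-time detection on a drone live stream.
# The stream URL and model weights are assumptions for illustration.
import cv2
from ultralytics import YOLO

model = YOLO("wildlife.pt")  # hypothetical trained detector
cap = cv2.VideoCapture("rtsp://gateway.local/drone1")  # assumed stream URL

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Run detection on the current frame and draw the boxes.
    results = model(frame, verbose=False)
    annotated = results[0].plot()
    cv2.imshow("drone1", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop viewing
        break

cap.release()
cv2.destroyAllWindows()
```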
Results
Martin Tomov:
The experiment demonstrated that a computer vision model trained on synthetic data can achieve high accuracy in wildlife detection, validating the approach. The dataset generation pipeline is fully operational, allowing for scalable and efficient creation of tailored synthetic datasets. The trained object detection model performs well and meets project expectations. A V2 model is planned to further improve accuracy and better replicate real-world UAV conditions, which should make future applications more reliable.
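The report does not name the detection framework, so purely as a hedged sketch: with the synthetic dataset exported in YOLO format, training the detector and checking how it transfers to real drone imagery could look like this (the dataset configs, base weights, and hyperparameters are assumptions):

```python
# Minimal sketch: training an object detector on the synthetic dataset.
# The dataset configs, base weights, and hyperparameters are assumptions.
from ultralytics import YOLO

# Start from pretrained weights and fine-tune on the synthetic images.
model = YOLO("yolov8n.pt")
model.train(
    data="synthetic_wildlife.yaml",  # hypothetical dataset config
    epochs=100,
    imgsz=640,
)

# Validate on a held-out split of real drone imagery (assumed to exist)
# to check how well the synthetically trained model transfers.
metrics = model.val(data="real_wildlife.yaml")
print(metrics.box.map50)  # mAP@0.5 summary
```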
Jarno Weemen:
I optimized the Android gateway that sits between the drone and the cloud, and improved the upload and download speeds of images so that live streaming could be implemented. Lastly, I implemented the live streaming feature itself, so that the live streams of multiple drones can be viewed at the same time during a flight.
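As a hedged sketch of the multi-stream viewing idea (the gateway URLs are assumptions, and the project's actual Android and web implementation is not shown in this report), several drone streams can be read concurrently and displayed side by side:

```python
# Minimal sketch: viewing live streams from multiple drones at once.
# The stream URLs are illustrative assumptions, not the project's endpoints.
import cv2

urls = [
    "rtsp://gateway.local/drone1",
    "rtsp://gateway.local/drone2",
]
caps = [cv2.VideoCapture(u) for u in urls]

while True:
    frames = []
    for cap in caps:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.resize(frame, (640, 360)))
    if len(frames) != len(caps):
        break  # a stream dropped out
    # Show all drone views side by side in a single window.
    cv2.imshow("drones", cv2.hconcat(frames))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

for cap in caps:
    cap.release()
cv2.destroyAllWindows()
```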
About the project group
Our group is a combined team working on different assignments within MRR. Our collaboration involves providing feedback to each other on general progress, but the developments are separate and domain-specific.
Martin Tomov:
I'm an AI/ML engineer specializing in generative AI, synthetic data, and computer vision. I focus on creating innovative, scalable solutions that bridge advanced AI technologies with real-world applications.
Jarno Weemen:
I did software development in the previous semester. I worked four days a week on the project, in sprints of three weeks, and used the drone simulator to get the code working; afterwards, my stakeholder tested the code in a real flight.