Hackathon: YOLO (You Only Look Once) object detection

Research phase
The first part of this hackathon consisted of an object detection lecture and demo, which you can watch below, followed by a model research phase and initial model setup. In the lecture, we discuss the computer vision algorithms we’ve been using and take a deeper look at the three components of object detection: data, model architecture, and training loop.


On October 12, we hosted the second part of the hackathon and continued with the object detection implementation, building on the research from the first session.

Three teams competed to implement the best mobile-optimized fly detection algorithm by training a model on 2,000 labels spread across 34 images and two classes. The models were obtained from TensorFlow Hub.

The goal was to obtain the best accuracy across three test images. Each team then pitched its methodology, results, and the design choices it made.
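To make "accuracy on test images" concrete for object detection, a common approach is to match predicted boxes to ground-truth boxes by intersection-over-union (IoU) and count true/false positives. The sketch below is illustrative only; the function names and the 0.5 IoU threshold are our assumptions, not the hackathon's actual scoring script.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_detections(predictions, ground_truth, iou_threshold=0.5):
    """Greedily match predictions to ground-truth boxes.

    Returns (true_positives, false_positives, false_negatives).
    """
    unmatched = list(ground_truth)
    tp = fp = 0
    for pred in predictions:
        best_idx, best_iou = -1, 0.0
        for i, gt in enumerate(unmatched):
            overlap = iou(pred, gt)
            if overlap > best_iou:
                best_idx, best_iou = i, overlap
        if best_iou >= iou_threshold:
            unmatched.pop(best_idx)  # each ground-truth box can match only once
            tp += 1
        else:
            fp += 1
    return tp, fp, len(unmatched)
```

From these counts one can derive precision and recall per test image; real benchmarks typically go further and compute mean average precision (mAP) over confidence thresholds.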

Some teams managed to run predictions with a pre-trained model but did not manage to fine-tune it on the data. As a result, their models predicted birds and persons rather than the target fly classes. The third team did not manage to make any predictions at all.

It turns out that implementing object detection models in a matter of a few hours is not that easy after all. Think you can do better? Download the dataset below and try it yourself.

Another fun and educational hackathon at HQ. The goal of this hackathon was to research cutting-edge mobile-optimized object detection algorithms using our Pest Detection App dataset. If you’re not familiar with our Pest Detection App, you can read all about it in this blog.

Detecting pests through object detection
Download the dataset

