Using existing models for image classification
Training a deep neural network on only 1,000 images with a substantial class imbalance ranges from hard to impossible. That's where transfer learning comes in. Existing deep neural networks developed by Google, Microsoft and many others are trained on large datasets such as ImageNet, and the trained models are freely available for anyone to use. Most layers in these models extract features from the images that can be used to distinguish one object from another; only the last layers translate those features into a class. The idea behind transfer learning is that the output of the feature-extraction layers is just as useful for classes the model has never been trained on. In other words: features that help to correctly classify, say, different types of space shuttles can also help to distinguish a normal tomato seedling from an abnormal one. The only additional work is to build your own trainable classification layers on top of the frozen feature-extraction layers, which learn to map those fixed features onto your specific classes.
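The mechanics can be sketched in a few lines of NumPy. Note that this is a toy illustration, not our actual model: the "pretrained network" is stood in for by a fixed random projection, and the data is synthetic. The point is purely to show the split between frozen feature-extraction weights and a small trainable classification head.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained network with its classification head removed:
# a FROZEN feature extractor whose weights are never updated.
W_frozen = rng.normal(size=(8, 16)) / np.sqrt(8)

def extract_features(x):
    # Frozen feature-extraction layers.
    return np.tanh(x @ W_frozen)

# Synthetic stand-in "images" with a binary label (think normal/abnormal).
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Trainable classification head: a single logistic-regression layer
# learned on top of the frozen features via gradient descent.
w = np.zeros(16)
b = 0.0
feats = extract_features(X)          # computed once; the extractor is fixed
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid output
    w -= 0.5 * feats.T @ (p - y) / len(y)       # log-loss gradient step
    b -= 0.5 * np.mean(p - y)                   # only the head is updated

pred = 1.0 / (1.0 + np.exp(-(feats @ w + b))) > 0.5
accuracy = np.mean(pred == y)
print(f"training accuracy of the new head: {accuracy:.2f}")
```

In a real pipeline the frozen part would be, for instance, an ImageNet-pretrained convolutional network, and the head would be one or two dense layers, but the division of labor is exactly this.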
The results are promising. After training, our model correctly classifies 89% of tomato seedlings it has never seen before as normal or abnormal. However, while the precision on the abnormal class was 100%, the recall was only 47%. In other words: every plant the model flags as abnormal really is abnormal, but the model misses 53% of the abnormal plants, incorrectly classifying them as normal.
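As a concrete check of these figures, here is how precision and recall follow from the confusion-matrix counts. The counts below are hypothetical (the text does not give the actual test-set sizes); they are simply chosen to reproduce 100% precision and 47% recall:

```python
# Hypothetical counts for the "abnormal" class, consistent with the
# reported 100% precision and 47% recall.
tp = 47   # abnormal plants correctly flagged as abnormal
fp = 0    # normal plants wrongly flagged as abnormal
fn = 53   # abnormal plants missed (classified as normal)

precision = tp / (tp + fp)  # of all "abnormal" predictions, how many are right
recall = tp / (tp + fn)     # of all truly abnormal plants, how many are found

print(precision)  # 1.0
print(recall)     # 0.47
```

High precision with low recall is the typical signature of a conservative classifier: it only flags a plant when it is very sure, at the cost of letting many true abnormals slip through.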
Still, these results indicate that transfer learning can be a viable approach for image classification: the extracted features clearly contain information about classes the model was never trained on. For this specific use case, however, the accuracy on the abnormal class is not yet good enough. To improve it, our next steps are to try other techniques for feature extraction, e.g. PlantCV, to collect many more images (a dataset of 12,000 images, a substantial portion of them abnormal) and to use Mechanical Turk for the labeling.
Although more data will thus improve the accuracy of the automated classification, this use case shows that machine learning can deliver more than reasonable results on relatively small datasets when combined with smart techniques such as transfer learning.
For more interesting reads on how we can turn data into value, check out our other use cases.