Written by:  

Frank van der Linden

Productizing AI – embedding your model

When productizing AI, you can encounter numerous challenges. For example: embedding AI (how do you fit your AI model into a process and into people's work?), stabilizing data and models (how do you keep your model accurate in changing environments and over time?), scaling (how do you scale up from 1 user to 1000?), and growing AI (how do you grow or augment the capabilities of your AI model?). Let’s zoom in on one of these challenges: embedding.

Embedding AI
In our last blog, we touched on the idea that running a successful machine learning Proof of Concept (PoC) with your new algorithm is only 10% of the effort required to productize it and get actual value from it. The remaining 90% can be divided into things you need to do to make a usable product and things you need to do to make a useful product.

To make a usable product, you need to zoom in on the technical implementation of making the product available to your users.
To make it useful, you should look at embedding the product into a process for the users.

So, what is the difference between a PoC and a usable product?

  • PoCs are not meant for production; we cut corners to prove that we can do it on a snapshot basis.
    Products need to do it all the time, any time, and under shifting circumstances.
  • During your PoC, you find the data you are looking for, make a copy, and start to clean it up and analyze it.
    In production, your data source has to be connected to a data platform in real-time, safely and securely; the data stream has to be manipulated automatically and compared to/combined with other data sources.
  • During your PoC, you either have the luxury of being able to talk to your future users and work with them to design a solution, or you have no users at all, and you are designing a technical solution.
    For a product, you have users that need to understand that solution, and people responsible for keeping the technical solution running. Thus, a product requires training, FAQs, and/or support lines for it to be usable.
  • In your PoC, you simply build one version for your single use case.
    Products require updates, and once you have rolled out your product to multiple customers, you need a way to test and deploy your code to production (CI/CD pipelines).

At Itility, we’ve developed our Itility Data Factory and AI Factory, which cover the building blocks and underlying platform for any of our projects. This means we have the usable angle covered from the start, so we can focus on the useful angle (which is more customer- and use-case-dependent).

Pest Detection App – from PoC to usable product
The Proof of Concept phase of our Pest Detection App consisted of a model that can perform the narrow task of classifying and counting flies on a glue trap based on images.
These images were taken by greenhouse team members. If they missed a picture or something went wrong, they could go back and take another, or fix it directly in the dashboard. Quite a few manual checks were still needed.

Our PoC world was simple: one single device, one single user, and one single customer.

However, to make it into a usable product, we needed to scale and support multiple customers. That raises the question of how to keep data separated and secure. Moreover, each individual customer and machine requires its own setup and default configuration. So, how do you configure and set up 20 new customers? And when do you build an admin interface and automate onboarding: at 2 customers, 20, or 200?
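To make the per-customer setup concrete, here is a minimal sketch in Python. The field names and the `onboard_customer` helper are hypothetical illustrations, not our actual implementation; the point is that each customer gets an isolated configuration and storage namespace.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CustomerConfig:
    """Default configuration for one customer's deployment (illustrative fields)."""
    customer_id: str
    device_ids: tuple
    storage_prefix: str  # keeps each customer's data in its own namespace

def onboard_customer(customer_id: str, device_ids: list) -> CustomerConfig:
    """Create a per-customer config with an isolated storage namespace."""
    return CustomerConfig(
        customer_id=customer_id,
        device_ids=tuple(device_ids),
        storage_prefix=f"customers/{customer_id}/",
    )

# Onboarding two customers yields fully separated namespaces.
a = onboard_customer("greenhouse-a", ["cam-01"])
b = onboard_customer("greenhouse-b", ["cam-01", "cam-02"])
```

Once onboarding is a single function call, scripting 20 new customers is trivial; the admin-interface question then becomes one of who is allowed to trigger that call, and when.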

From usable product to useful product
So, we proved we can count flies. Next up: tackling questions such as ‘how does counting flies help my customer? How do we create value from this information? How do we recommend decisions and take action? How does this AI application fit into the business process?’

Step one is to change your frame of reference from a technical/data perspective to the end-user perspective. This means continuing the conversation with your customer and seeing how the proven PoC fits into daily processes.

But talking is often not enough. You have to closely follow the process for a longer period of time and join operational and tactical meetings to really understand which actions are taken every day based on which information, how much time is spent on what, and the reasoning behind certain actions.
Without understanding how the information from your model is used to create business value, you will not get to a useful product.

In our case, we discovered what information was used to make decisions. For example, we discovered that for some pests it was more important to follow the weekly trend (for which you don’t need super high accuracies) whilst others require action at the first sign of a pest (which means it’s better to have a couple of false positives than to have even one false negative).
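The trend-vs-zero-tolerance distinction translates directly into how you pick a decision threshold for a detector. A minimal sketch (the function, scores, and numbers are illustrative, not our production logic): for zero-tolerance pests we lower the threshold until every known positive is caught, accepting extra false positives.

```python
def pick_threshold(scores, labels, zero_tolerance):
    """Choose a decision threshold for a pest detector.

    For pests where one miss is costly (zero_tolerance=True), lower the
    threshold until every true positive in the validation set is caught,
    accepting extra false positives. For trend-tracking pests, a balanced
    default threshold is fine.
    """
    if not zero_tolerance:
        return 0.5
    positive_scores = [s for s, y in zip(scores, labels) if y == 1]
    # Threshold just below the weakest true positive: recall = 100%.
    return min(positive_scores) - 1e-9

# Illustrative validation scores and ground-truth labels (1 = pest present).
scores = [0.9, 0.7, 0.35, 0.2, 0.6]
labels = [1,   1,   1,    0,   0]

t = pick_threshold(scores, labels, zero_tolerance=True)
flagged = [s >= t for s in scores]
# All three true positives are flagged; the false positive at 0.6 is accepted.
```

For the weekly-trend pests, the same detector can stay at the balanced default, because a handful of misses barely moves the trend line.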

Additionally, we discovered that our customer had previously had a “bad” experience with a similar tool that claimed accuracies it could not live up to in practice. So why would they trust ours? We took this trust problem head-on and made accuracy and transparency key features of the product.

We used this information to make our product useful by adapting the application to the end user’s working methods, and by increasing transparency in the interaction, giving the user more control over the application.

What is the biggest challenge?
Embedding a new tool into an organization is difficult since a new tool brings change in the form of a new way of working.
Embedding an AI feature is tremendously difficult, since an AI solution can also bring out a healthy dose of mistrust.

In our fly-counting scenario, we can talk about our accuracy score all we want. However, to be useful, the user (a greenhouse specialist) needs more than percentages: they need to experience it and learn to trust it. The worst thing that can happen is that users compare your results with their own manual counts and find a (large) discrepancy. Your reputation is ruined, and there is no room to regain trust.
We counteracted this by adding software to the product that encourages the user to look for those discrepancies and correct them.

Our AI-vision: an operator-centric approach
Our approach is thus to make the user part of the AI solution instead of presenting it as a system that is going to replace the specialist.
We turn the specialist into an operator. AI is augmenting their abilities and the specialists remain in control by continuously teaching and guiding the AI to learn more and make corrections when the environment or other variables drift. As an operator, the specialist is an integral part of the solution – teaching and training the AI with specific actions.
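A sketch of what that operator loop can look like in code (the class and method names are hypothetical; the real product has a richer interface). The key idea: the operator's count is what the business uses, and every correction is kept as a labeled example for the next training round.

```python
class OperatorLoop:
    """Human-in-the-loop sketch: the specialist reviews each count, and
    corrections are stored as ground truth for retraining."""

    def __init__(self):
        self.corrections = []  # (image_id, predicted, corrected)

    def review(self, image_id, predicted_count, operator_count):
        """Record a disagreement; the operator's count always wins."""
        if operator_count != predicted_count:
            self.corrections.append((image_id, predicted_count, operator_count))
        return operator_count

    def training_batch(self):
        """Corrected examples become labeled data for the next model update."""
        return [(image_id, true) for image_id, _, true in self.corrections]

loop = OperatorLoop()
loop.review("trap-001.jpg", predicted_count=12, operator_count=12)  # agreed
loop.review("trap-002.jpg", predicted_count=7, operator_count=9)    # corrected
```

The design choice here is that correcting the AI is a normal part of the operator's workflow, not an exception path, which is exactly what keeps the specialist in control.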

See the video below for more details on the operator-centric approach.


Conclusion
Running a successful PoC with your new AI algorithm is only 10% of the effort required to turn a machine learning model into a value-generating solution. The remaining 90% is needed to turn it into a usable (pipelines, security, scalability, FAQs, and how-tos) and useful product. The main takeaway is to look at the problem from a user perspective, but above all: to make the user part of the solution.

