Manufacturing focus: a 'Smart Twin' of the factory
We built an AI layer on top of the customer's existing data foundation, using Databricks for data pipelines, model training, and inference deployment, with MLflow for experiment tracking. Unity Catalog underpins that data foundation, and prediction results flow into the manufacturing execution system (MES).
The core question we kept asking ourselves was: which steps in the manufacturing process actually tell us something meaningful about how the product will perform in the end? Answering that question took time and a lot of close collaboration with the engineers on the factory floor. Their domain knowledge was just as important as the data.
Once we identified the significant steps, we deployed AI models at each of them. So far, three models are in production:
- An auto-encoder plus regression model that predicts product characteristics from the product's specific shape.
- An ensemble model that combines existing predictions with additional data points, improving prediction accuracy by as much as 25 percent.
- An image analysis model that classifies defects. In our testing, it consistently outperformed visual inspection by trained operators, particularly on subtle surface anomalies.
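To illustrate the ensemble step, here is a minimal sketch of how a base model's prediction can be blended with a linear correction built from additional data points. All names, weights, and the blend factor are hypothetical, not the production model:

```python
def ensemble_predict(base_prediction: float,
                     extra_features: dict[str, float],
                     weights: dict[str, float],
                     blend: float = 0.7) -> float:
    """Blend a base prediction with a correction term computed from
    additional datapoints. Weights and blend factor are illustrative."""
    correction = sum(weights.get(name, 0.0) * value
                     for name, value in extra_features.items())
    return blend * base_prediction + (1.0 - blend) * correction

# Hypothetical example: a base prediction of 10.0, corrected with one
# extra sensor reading.
adjusted = ensemble_predict(10.0, {"temp": 2.0}, {"temp": 1.5})
```

In production, the correction term would itself be a learned model (e.g. gradient-boosted trees over the extra features) rather than fixed weights; the point is only the combination pattern.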
At each step, we implement a "check gate" on the prediction result. This allows us to follow each product dynamically throughout its manufacturing cycle, rather than waiting until the end of the line to find out something went wrong.
Correct, scrap, or continue
With the results available in the MES tool, operators get direct and actionable feedback at every gate. Based on what the model finds, we recommend one of three things: perform corrective actions to bring the part within specification, scrap it if corrections won't be enough, or continue as planned if everything looks good.
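The gate decision itself reduces to a simple rule over the prediction and the part's specification window. A minimal sketch, with purely illustrative thresholds rather than the customer's real limits:

```python
from enum import Enum

class GateAction(Enum):
    CONTINUE = "continue"   # within spec: proceed as planned
    CORRECT = "correct"     # out of spec, but corrective action can recover it
    SCRAP = "scrap"         # deviation too large for corrections to be enough

def check_gate(predicted: float, lower: float, upper: float,
               correctable_margin: float) -> GateAction:
    """Map a model prediction onto one of the three gate outcomes.
    Spec limits and the correctable margin are hypothetical."""
    if lower <= predicted <= upper:
        return GateAction.CONTINUE
    # How far outside the specification window is the prediction?
    deviation = (lower - predicted) if predicted < lower else (predicted - upper)
    return GateAction.CORRECT if deviation <= correctable_margin else GateAction.SCRAP
```

The MES would surface the returned action to the operator at that gate; the real system adds context such as which characteristic is out of spec and what the suggested corrective action is.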
This might sound simple, but the shift it creates on the factory floor is significant. Quality control moves from a single checkpoint at the end of the line to a continuous process. Each defective part caught early represents a predicted cost avoidance of $30,000, based on the material, labor, and rework costs our customer typically incurs at that stage. We currently detect multiple at-risk parts per month. Those numbers add up quickly.
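To make "those numbers add up" concrete, take the $30,000 figure from above and a hypothetical rate of three at-risk parts caught per month (the actual rate varies; "multiple per month" is all we state):

```python
COST_PER_PART = 30_000   # predicted cost avoidance per early catch (from the text)
parts_per_month = 3      # hypothetical rate; the text says "multiple" per month

monthly_savings = COST_PER_PART * parts_per_month
annual_savings = monthly_savings * 12
print(f"${monthly_savings:,} per month -> ${annual_savings:,} per year")
# -> $90,000 per month -> $1,080,000 per year
```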
Long-term tracking benefits
Another advantage of our method is the ability to track long-term performance and visualize it in Grafana dashboards. This makes it possible to monitor whether issues emerge in individual steps of the manufacturing process and respond to them directly. For example, if a factory machine operates outside its optimal state, the issue surfaces quickly on the dashboard, and alerts are sent to the factory operators for further investigation, preventing costly downtime.
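The machine-health check behind such an alert can be sketched as a rolling-window rule: flag when the rolling mean of a sensor signal drifts outside its optimal band. The window size and band below are illustrative; the real system watches process-specific signals and feeds Grafana's alerting:

```python
from collections import deque
from statistics import mean

def make_monitor(optimal_low: float, optimal_high: float, window: int = 5):
    """Return a function that ingests one sensor reading at a time and
    returns True when the rolling mean drifts outside the optimal band."""
    readings: deque[float] = deque(maxlen=window)

    def ingest(value: float) -> bool:
        readings.append(value)
        if len(readings) < window:
            return False  # not enough data to judge yet
        return not (optimal_low <= mean(readings) <= optimal_high)

    return ingest

# Hypothetical band of 20.0-25.0 for some machine parameter.
monitor = make_monitor(20.0, 25.0, window=3)
```

A real deployment would compute this over the pipeline's streaming data and only raise the Grafana alert after the condition holds for several consecutive windows, to avoid flapping on noise.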
After Manufacturing: Cradle-to-Grave and Inline Tuning
The product's lifecycle doesn't end when it leaves the factory. Two areas we are actively developing reflect that reality.
The first is cradle-to-grave analysis. We are building models to predict product performance and longevity in the field, including how logistics factors like storage duration and transport affect quality. The goal is to prevent dead-on-arrival issues and trace field failures back to specific manufacturing conditions.
The second is inline tuning. By processing data collected while the product is in use, we get a clearer picture of how it behaves over time and whether it is still operating as intended. This helps extend product longevity and feeds insights back into the manufacturing process.
Toward a digital twin
The work described above is not a finished product. It is a foundation. The ultimate goal, which we are building toward together with our customer, is a full digital twin of the product lifecycle. One that integrates all the models and data from every step, enables near-real-time control of the manufacturing process, and tightens manufacturing specifications based on what actually happens in the field.
We are not there yet. But every model we deploy, every gate we implement, brings us a step closer.