On the other hand, Siemens showed a research project in which artificial intelligence was used to aid architects of automotive and aerospace systems by helping them choose among large numbers of system configurations. Given the goals of a product and the large variability of parts (batteries, engines, drive trains, etc.), it becomes very hard for humans to discover and assess all the options. With the new generation of PLM tools, the computer can evaluate the options, assess the quality of each solution against well-defined KPIs, and help the architect choose the best one. When asked how creative the system was, it turned out that the architects had defined quite strict rules to prevent surprises in the outcome. We are definitely not ready to have AI take over from us yet.
Can artificial intelligence be ethically acceptable?
With systems becoming smarter and moving into new areas of human life, there are quite a few safety and ethical questions that need answers as well. Some are on the technological side, but many address the ethical side: how acceptable is it to let systems take over?
Many ‘intelligent’ systems use techniques that make it very hard to explain what’s going on inside (the black-box problem). Neural networks and deep learning techniques cannot show the reasoning behind the actions they take. This is becoming a larger problem, not only for human adoption (we want to know ‘why’) but also for legislators. For example, how can we approve a self-driving car on our roads if we can’t really understand how it decides to act in an emergency situation?
In the industry it is also apparent that cultural differences drive differences in approach to such challenges. Google, an American company, tackles the challenges by focusing on large numbers, training its systems with vast amounts of real-world data. Mercedes, a German company, takes a more engineering-based approach and tries to engineer its way out of the problem.
Do intelligent systems shape us toward new ways of engagement?
The final keynote of the day showed the challenges large corporations face when stepping into the new world of intelligent systems. Henk van Houten, CTO of Royal Philips, presented the healthcare business strategy. Where in the past Philips built all parts of the products it sold, it is now more of an integrator working in an ecosystem of products, systems and data.
Philips takes a very proactive approach to building the ecosystem it needs to deliver intelligent services to its customers, mainly hospitals. That is the only way to fully contribute to the goals of the healthcare system: lowering costs, improving the quality of the product and, most importantly, the quality of life of patients.
This requires working out new rules of engagement for the development of these systems. Data is the main driver, but also a major challenge: who owns the data, who shares it, how do we get data into a central place to learn from it, and how do we distribute it to bring the intelligence to the people who need it?
This is a challenge we see at many of our customers. It’s not just about building a product, a cloud environment, or a smart model. The rules of engagement are just as important to keeping a competitive edge. Technology can help, but it also needs to answer questions about who owns the data, how it is secured, how decisions are made, and how quality towards end-users can be guaranteed.