With the Artificial Intelligence (AI) field moving rapidly, it is valuable to come together with industry peers and take stock of the common topics and challenges of the moment. AI is going through a renAIssance: more and more is possible, but at the same time it poses challenges we have not seen before.
Itility visited the Embedded Systems Institute (ESI) 2019 symposium ‘Intelligence, the next challenge in complexity’. It brought together five hundred professionals from the Brainport Eindhoven region working on architecture, systems, and embedded software. The world expects these systems to become more intelligent, but what does that mean, and how do you engineer it? This blog summarizes some questions worth considering - questions that are being asked at Itility and at the customers we work with.
How intelligent can we make it - and how intelligent do we want to make it?
The day started with a keynote from Edward A. Lee - a professor in cyber physical systems at UC Berkeley. His keynote highlighted the challenges of building an ‘artificial’ intelligence. AI is often equated with human intelligence. However, given our current knowledge of the human brain, it seems preposterous to think we can soon build an artificial intelligence that mimics human intelligence: brains are vastly more complex than anything we can simulate with computers. Realizing this forces us to define what we are trying to accomplish with our systems. A good first step is making systems more self-aware within the domain they are built for. Building a high-level feedback loop into a control system can already add a layer of value without it needing to be ‘natural intelligence’.
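To make that idea concrete, here is a minimal, hypothetical sketch (our own illustration, not from the keynote) of such a high-level feedback loop: a simple proportional controller wrapped in a supervisory layer that monitors its own tracking error and reports whether it is still performing within bounds.

```python
# Hypothetical sketch: a basic control loop with a 'self-aware' layer on top.

def control_step(setpoint, measurement, gain=0.5):
    """One proportional control step: return the corrective action."""
    return gain * (setpoint - measurement)

def run_with_self_awareness(setpoint, initial, steps=20, error_bound=0.1):
    """Run the loop, then check the system's own performance against a bound."""
    value = initial
    for _ in range(steps):
        value += control_step(setpoint, value)
    error = abs(setpoint - value)
    healthy = error < error_bound  # the high-level check on its own behavior
    return value, healthy

final, healthy = run_with_self_awareness(setpoint=100.0, initial=20.0)
```

The inner loop is ordinary control engineering; the added value sits in the outer check, which lets the system flag its own degradation instead of failing silently.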
Architecting intelligent systems or intelligently architecting systems?
A break-out session focused on the role of architects in intelligent systems. On the one hand, architects from Océ and ASML showed the challenges posed by building intelligent systems. Architects are used to creating business value with actual products and services, and building a smart system creates a new level of value for users and customers. However, the ‘intelligent’ parts of the system also make the product less deterministic, which poses challenges for quality attributes such as robustness and availability. Architects are advised to add intelligence and self-awareness gradually to keep their products manageable.
On the other hand, Siemens showed a research project where artificial intelligence was used to aid architects (of automotive and aerospace systems) by helping them choose among large numbers of system configurations. Given the goals of a product and the large variability of parts (batteries, engines, drive trains, etc.), it becomes very hard for humans to discover and assess all options. With the new generation of PLM tools, the computer can enumerate the options, assess the quality of each solution against well-defined KPIs, and help the architect choose the best one. When asked how creative the system was, it turned out that the architects had defined quite strict rules to prevent surprises in the outcome. We are definitely not ready to let AI take over from us yet.
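The principle can be sketched in a few lines (a toy illustration with made-up parts and KPIs, not the Siemens tooling): enumerate part combinations, filter them through architect-defined rules, and rank the survivors with a KPI score.

```python
# Toy sketch of computer-aided configuration choice: enumerate, filter, score.
from itertools import product

batteries = [{"name": "B1", "kwh": 40, "cost": 4000},
             {"name": "B2", "kwh": 60, "cost": 7000}]
engines   = [{"name": "E1", "kw": 100, "cost": 3000},
             {"name": "E2", "kw": 150, "cost": 5000}]

def allowed(battery, engine):
    # Architect-defined rule to prevent surprising outcomes:
    # a powerful engine may only be paired with the larger battery.
    return not (engine["kw"] > 120 and battery["kwh"] < 50)

def kpi_score(battery, engine, weight_range=1.0, weight_cost=0.001):
    # Toy KPI: reward capacity (kWh), penalize total cost.
    return weight_range * battery["kwh"] - weight_cost * (battery["cost"] + engine["cost"])

candidates = [(b, e) for b, e in product(batteries, engines) if allowed(b, e)]
best = max(candidates, key=lambda pair: kpi_score(*pair))
```

Note how the creativity is deliberately constrained: the `allowed` rules encode the architects’ strictness, so the computer only explores combinations the humans have pre-approved.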
Can artificial intelligence be ethically acceptable?
With systems becoming smarter and moving into new areas of human life, there are quite a few safety and ethical questions that need answers as well. Some are on the technological side, but many concern the ethical question of how acceptable it is to let systems take over.
Many ‘intelligent’ systems use techniques that make it very hard to explain what’s going on inside (the black box problem). Neural networks and deep learning techniques cannot show the reasoning behind the actions they take. This is becoming a larger problem, not only for human adoption (we want to know ‘why’) but also for legislators. For example, how can we approve a self-driving car for our roads if we can’t really understand how it decides to act in an emergency situation?
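One common way to peek inside a black box is to probe it from the outside: perturb each input slightly and see how much the output moves. The sketch below is a minimal, hypothetical sensitivity analysis (a simplification of explainability techniques such as LIME or SHAP, not an implementation of them), with a stand-in function playing the role of the opaque model.

```python
# Minimal sketch of black-box probing: estimate which input feature most
# influences a model's output by perturbing one feature at a time.

def black_box(features):
    # Stand-in for an opaque model; we pretend its internals are unknown.
    speed, distance, visibility = features
    return 0.8 * speed - 0.5 * distance + 0.1 * visibility

def sensitivity(model, features, eps=1e-3):
    """Approximate how strongly each feature drives the model's output."""
    base = model(features)
    scores = []
    for i in range(len(features)):
        probed = list(features)
        probed[i] += eps
        scores.append(abs(model(probed) - base) / eps)
    return scores

scores = sensitivity(black_box, [1.0, 1.0, 1.0])
most_influential = scores.index(max(scores))  # here: feature 0 ('speed')
```

Such probing gives an approximate ‘why’, which helps adoption and regulation, but it remains an explanation about the model rather than the model’s own reasoning.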
In industry it is also apparent that cultural differences drive different approaches to such challenges. Google, an American company, pursues the challenge by focusing on large numbers, training its systems with vast amounts of data from the real world. Mercedes, a German company, takes a more engineering-based approach and tries to engineer its way out of it.
Do intelligent systems shape us toward new ways of engagement?
The final keynote of the day showed the challenges for large corporations stepping into the new world of intelligent systems. Henk van Houten, CTO of Royal Philips, presented the healthcare business strategy. Where in the past Philips built all parts of the products it sold, it is now more of an integrator working in an ecosystem of products, systems, and data.
Philips takes a very proactive approach to building the ecosystem it needs to deliver intelligent services to its customers, mainly hospitals. That’s the only way to fully contribute to the goals of the healthcare system: lowering costs, improving the quality of the product, and most importantly, the quality of life of patients.
This requires working out new rules of engagement for the development of these systems. Data is the main driver, but also the main challenge: who owns the data, who shares it, how do you get data into a central place to learn from it, and how do you distribute it to bring the intelligence to the people who need it?
This is a challenge we see at many of our customers. It’s not just about building a product, a cloud environment, or a smart model. The rules of engagement are just as important for keeping a competitive edge. Technology can help, but it must also answer questions about who owns the data, how it is secured, how decisions are made, and how quality can be guaranteed for end users.