AI at the beer garden
Rebecca Johnson is an expert in artificial intelligence, working in corporate research at Siemens. She tells us about how she views this technology of the future, and explains why a digital companion at a beer garden would, of course, speak Bavarian.
Let’s go straight to technology, Rebecca: If you were to name five key terms that I should absolutely know if I want to talk about AI these days, what would they be?
That’s easy. The first one would be “neural network” – a kind of artificial brain. A natural brain consists of lots of individual neurons that are linked together by synapses. Very roughly speaking, the arrangement of the synapses determines everything a brain can do or knows. A neural network replicates neurons and synapses by mathematical means. My second term is “machine learning.” We speak of machine learning when systems draw conclusions from the data they’ve processed in the past, and adjust their behavior as a result – meaning they behave in ways that we call learning, when people do it. The most important form of machine learning is what's called “deep learning,” so that’s my third term. Then you should also know what an “industrial knowledge graph” is. Briefly, this is an approach for summarizing widely ramified knowledge and correlations in a structured way. Industrial knowledge graphs make it possible to build up an almost unlimited memory for AI systems. And last but not least, my fifth term: “digital companion”. That refers to an AI system that has been specially developed so that people will enjoy working with it.
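Rebecca's first term can be made a little more concrete. Very roughly, an artificial neuron multiplies each input by a learned weight (playing the role of a synapse), sums the results, and passes the total through a nonlinearity. A minimal sketch in Python, with made-up illustrative weights and inputs:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias,
    squashed by a sigmoid activation into the range (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Illustrative values only: two inputs, two "synapse" weights.
output = neuron([0.5, 0.8], [0.4, -0.2], bias=0.1)
print(round(output, 3))  # → 0.535
```

"Learning" then means adjusting the weights and bias until the outputs match the training data, which is what machine-learning algorithms automate.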
Artificial intelligence is nothing new. The first approaches were published back in the middle of the last century, and in the nineties there was even a small AI wave. Why all the hype just now?
There are a lot of reasons. Methods have gotten better, computers are faster, clouds offer unlimited storage space, and so on. But the most important reason is the data: AI needs masses of data, and masses of data need AI.
Every AI system must first learn from thousands of training examples. Until it does, it remains astonishingly stupid. Just think how funny it is when robot vacuums or lawnmowers go bumbling around. Thirty years ago it was still really difficult to get enough data. But the development of the Internet of Things (IoT) in particular has changed all that. Today there are massive amounts of data that contain lots of valuable information. But to get at that hidden information, we need to analyze the data. That’s something that classical programming – processing a clearly predefined algorithm – often can’t manage. But AI techniques can.
So what sort of things can AI learn?
A lot – to play chess, water flowers, paint pictures – more precisely, anything that can be described in a mathematical formula. But I have to clear up a stubborn myth here. It’s not enough just to program any old neural network, show it a few million images, and then it can do anything. Like any other software developer, the programmer of an AI system first needs to understand the system’s requirements and define a suitable architecture. In general, an AI system consists of several layers, each with its own task. Say I want to develop an AI system that recognizes handwriting. So then I might define a layer that detects differences between black and white, a layer that recognizes loops, and so on. But if I, as the developer, forget to include something essential in that layer architecture, then no matter how long I train the system later, it still won’t work.
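The layer idea can be sketched in a few lines. This is purely an illustrative toy, not Siemens' system: each "dense" layer feeds every input into every neuron of the next stage, and the stack maps toy pixel values to ten digit scores. The weights here are random, so this is the "stupid," untrained state Rebecca describes.

```python
import random

def dense_layer(inputs, weights, biases):
    """One fully connected layer: each output neuron computes a weighted
    sum over all inputs, then applies a ReLU nonlinearity."""
    return [max(0.0, sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

def random_weights(n_out, n_in):
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

random.seed(0)
# Hypothetical stack: pixels -> contrast features -> loop features -> digit scores.
pixels = [random.random() for _ in range(16)]                    # toy 4x4 "image"
layer1 = dense_layer(pixels, random_weights(8, 16), [0.0] * 8)   # "black/white contrasts"
layer2 = dense_layer(layer1, random_weights(6, 8), [0.0] * 6)    # "loops and strokes"
scores = dense_layer(layer2, random_weights(10, 6), [0.0] * 10)  # one score per digit
print(len(scores))  # → 10
```

If a layer for some essential feature were missing from this architecture, no amount of training would make the final scores reliable, which is exactly the point of the interview's handwriting example.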
So an AI developer, like any software developer, first has to analyze and understand the problem so as to derive and implement an architecture from it. And then you’d still have what we might call a “stupid” AI system that will then have to be trained with data until it yields reliable results?
Exactly. In principle – thanks to IoT – there is enough data, but it’s sometimes hard to get hold of. Let’s assume an AI system is supposed to detect malignant changes in tissue images. Then you first have to train it with lots and lots of examples of healthy and sick tissue – preferably millions of them. But in medicine particularly, data privacy rules are very strict, and even anonymized images are hard to get. So how is the system supposed to learn? Obviously protecting people’s personal privacy and data is very important, but we’ll need to talk about what data should be shared in the future, for the good of everyone.
Are challenges like that in getting hold of the right data just a problem in medicine?
Absolutely not. At Siemens, for example, we’re sitting on a real treasure trove of data – product data, development data, sales data, order data, and more, from more than 150 years – valuable information that can set us apart from other companies. Of course we don’t share our treasury of experience with people outside the company. But so far we’re still having trouble even internally with making the most of our data’s value. In this case it’s not data privacy that’s the problem, but that the information is scattered across documents, databases, floppy disks, and so on. So with support from top management, we’ve now set up a big project to use an industrial knowledge graph to create a kind of corporate memory, so nothing gets forgotten.
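The knowledge-graph idea can be illustrated with a toy example. A knowledge graph stores facts as (subject, predicate, object) triples, so knowledge scattered across many sources can be queried uniformly; all entity names below are invented for illustration, not real Siemens data.

```python
# Facts as (subject, predicate, object) triples. Names are made up.
triples = [
    ("MotorX100", "designed_by", "TeamA"),
    ("MotorX100", "uses_part", "BearingB7"),
    ("BearingB7", "supplied_by", "SupplierS"),
    ("MotorX200", "uses_part", "BearingB7"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Which products depend on BearingB7?
hits = query(predicate="uses_part", obj="BearingB7")
print(hits)
# → [('MotorX100', 'uses_part', 'BearingB7'), ('MotorX200', 'uses_part', 'BearingB7')]
```

A real industrial knowledge graph adds typed schemas, provenance, and reasoning on top, but the core "corporate memory" is this kind of uniformly queryable fact store.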
What’s the situation at Siemens? How significant do you think AI is?
It’s essential! We’re a digitalization company in all our segments. Without AI we could very quickly lose our place among the market leaders. In the future, for example, if a lot of distributed energy suppliers want to work together efficiently with a city’s intelligent infrastructure, then we’ll have to understand the masses of data that are constantly generated by sensors, distributed computers, and so on. AI methods will be needed just as much for that as in a factory where robots work together autonomously. That’s why artificial intelligence is one of our most important research focuses, with over 450 individual projects – ranging from developing new algorithms to the user experience. And we’re well along in the process. Just recently, statistics on patent applications in AI were released (note: report from WIPO) – as you know, patent applications are always an indicator of how actively a company is researching in a given field. And in important subsegments like life and medical sciences, energy management, and physical sciences and engineering, we’re in first or second place.
Which of the many AI research projects is your favorite?
Definitely our work on the digital companion. I already mentioned that digital companions are AI systems that people like to work with – they relieve us of tiresome tasks, make useful suggestions, and in the best case adjust personally to their users, while leaving the decision-making autonomy to humans. At the moment we’re concentrating on how people and digital companions can communicate with each other. The most intuitive way for people is talking. Understanding speech – natural language processing – is a real challenge for AI systems, if only because of all the different accents and dialects. We’re working on a digital companion that you can talk with normally, not just in a typical chatbot tone of command. But of course an ideal digital companion also has to adjust its own speech output so it doesn’t sound as synthetic as a classic chatbot.
In other words, if digital companions are ever going to show up in a beer garden, they’d better speak Bavarian?
Well, they’ll need to in Munich, anyway. And then they’d have to grumble occasionally too, like true Müncheners, and bang their beer mugs on the table. 😊