Do programs have feelings?

The man on the training bike tries hard and starts pedaling. Suddenly he grimaces. "You seem to be in pain," says his trainer, giving him a sympathetic look, "try to slow down." The man slows down, the coach smiles and says, "Much better." So far, so normal. But the trainer is not a person but an avatar. He stands across from the athlete on a large screen and senses how he is doing. The project, run by the University of Augsburg in cooperation with the University Hospital in Ulm, is one of many in the growing field of affective computing, a research area in computer science that aims to make machines adapt to people and understand their feelings.

"Elderly people in particular are often afraid that exercise will cause them pain," says Elisabeth André from the University of Augsburg. Avoiding exercise for that reason is not a good idea. The trainer on the screen is meant to help seniors find the right level of exertion. To do this, it interprets their facial expressions, but also sounds such as a loud exhalation. In addition, the system measures skin conductance and pulse and registers whether the user is under stress or currently overloaded. The trainer adjusts its facial expression and gestures and thus actually appears compassionate - even though, of course, a computer cannot be.
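The article does not describe the actual algorithm, but the idea of fusing skin conductance and pulse into a coarse overload estimate can be sketched roughly like this. All function names, thresholds, and baselines are invented for illustration; a real system would be calibrated per user and would also draw on facial expression and audio.

```python
# Illustrative sketch only: a minimal rule-based fusion of two
# physiological signals into a coarse "overload" estimate, as one
# might prototype for an exercise-trainer avatar.

def stress_level(skin_conductance_uS, heart_rate_bpm,
                 baseline_sc=2.0, baseline_hr=70.0):
    """Return 'relaxed', 'strained', or 'overloaded' from two signals.

    Thresholds are invented for illustration, not taken from the
    Augsburg/Ulm project.
    """
    # Relative deviation from the user's resting baseline.
    sc_ratio = skin_conductance_uS / baseline_sc
    hr_ratio = heart_rate_bpm / baseline_hr
    score = 0.5 * sc_ratio + 0.5 * hr_ratio
    if score < 1.2:
        return "relaxed"
    if score < 1.6:
        return "strained"
    return "overloaded"

print(stress_level(2.1, 72))    # near baseline -> relaxed
print(stress_level(3.4, 95))    # elevated on both channels -> strained
print(stress_level(5.0, 130))   # far above baseline -> overloaded
```

In practice such hand-written rules would be replaced by a trained classifier, but the principle stays the same: several weak signals are combined before the avatar decides how to react.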

"That is the future," says Björn Schuller from the University of Augsburg. "Emotions are important because humans needed them to survive, so artificial intelligence will need them to survive too." Ideally, machines should adapt to us the way we humans adapt to one another. Incidentally, anyone who thinks first of the USA with such visions is wrong: "Alongside the USA, Germany is a driving force in this field," says Schuller, who has been researching the area for many years. The field is currently growing rapidly - and the challenges are slowly becoming apparent. Many projects focus on training in which people also learn how their emotions affect others. Much like the trainer avatar, the conversation partner in a virtual job interview adapts to the applicant's mood.

Machines should also become receptive to subtle signals

It was a long journey to get to this point, says Patrick Gebhard from the German Research Center for Artificial Intelligence (DFKI) in Saarbrücken. It is not enough to deduce emotions from facial expressions alone - yet many researchers rely exclusively on them. "Computer scientists are great pattern recognizers," says Gebhard, "but we also have to have a model that the patterns fit." The facial expression has to be interpretable, and that requires contextual knowledge. The usual apps always read a smile as joy - but sometimes a smile expresses schadenfreude or shame. And of course it can simply be faked.
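Gebhard's point, that the same detected smile should map to different emotions depending on context, can be illustrated with a toy rule set. The cues, labels, and rules below are all assumptions made up for this sketch, not DFKI's model.

```python
# Hypothetical sketch: disambiguating a detected smile with context
# cues (gaze direction, the event that preceded the smile).
# All rules and labels are invented for illustration.

def interpret_smile(gaze, preceding_event):
    """Map one and the same facial pattern to different emotions."""
    if gaze == "averted" and preceding_event == "embarrassing_question":
        return "shame"          # smiling to hide discomfort
    if gaze == "direct" and preceding_event == "others_misfortune":
        return "schadenfreude"  # gloating, not joy
    if gaze == "direct" and preceding_event == "compliment":
        return "joy"
    return "unclear"            # a naive app would always answer "joy"

print(interpret_smile("averted", "embarrassing_question"))  # shame
print(interpret_smile("direct", "compliment"))              # joy
```

The point of the sketch is structural: without the second and third arguments, every branch collapses into "joy", which is exactly the failure mode Gebhard criticizes in the usual apps.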

Gebhard and his colleagues have been working with psychologists for years on a model that is now built into the system. It helps the system recognize whether a test subject in an interview is smiling, say, out of shame or out of joy. "You want to hide shame," says Gebhard. That makes it a good example for studying the various ways humans regulate their emotions and for teaching a machine to tell them apart. After all, some people react to shameful situations by shutting down, while others talk their way out and still others go on the attack. If the other person does not react appropriately in such situations, things can escalate. "A job interview is a prototypical situation in which I have to show: I am good," says Gebhard - and at the same time, applicants are often confronted with questions that trigger shame, for example about their weaknesses.

The researchers first programmed the psychological model into their system, which then observed job interviews with test subjects and learned from their reactions to interpret facial expressions and gestures. Initially with human help, the system assigned these situations to one of the regulation mechanisms and learned from that. This matters because only an avatar that adapts its reaction accordingly can sustain a conversation that feels natural and that the test subject learns from. The system then uses all the information to compute its feedback, for example: "At this point you smiled, but we had no eye contact - that comes across as insecure."
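The feedback step described above, pairing an observed cue with what happened at the same moment and turning it into a coaching hint, might look something like the following. The event format, rule, and wording are assumptions for illustration, not the DFKI implementation.

```python
# Invented illustration of the feedback step: pair an observed smile
# with the eye-contact signal at the same moment and generate a hint.

def interview_feedback(events):
    """events: list of (timestamp_s, smiled, eye_contact) tuples
    recorded during a simulated interview."""
    hints = []
    for t, smiled, eye_contact in events:
        # The rule from the article's example: smiling without eye
        # contact reads as insecure.
        if smiled and not eye_contact:
            hints.append(f"At {t}s you smiled, but we had no eye "
                         "contact - that comes across as insecure.")
    return hints

log = [(12, True, True), (47, True, False), (63, False, False)]
for hint in interview_feedback(log):
    print(hint)
```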

Emotisk, a training system that researchers at Humboldt University in Berlin are developing with the university hospitals in Aachen and Cologne, has a similar goal: in the long term it should help autistic people recognize the emotions of those around them and send appropriate non-verbal signals during a conversation. To do this, the software evaluates gaze direction and facial expression and gives the user feedback - here, too, the avatar adapts to the mood.

Another large target group for emotion-sensitive robots are elderly or cognitively impaired people, who, for example, can remain independent for longer with this kind of support. Researchers working with Stefan Kopp at Bielefeld University discovered firsthand the hurdles such systems run into: "The speech recognition was simply overwhelmed by the long sentences of our test subjects." Their robot avatar is meant to help users structure their day. To do this, the system must gently interrupt people with gestures without seeming impolite, and above all learn to recognize misunderstandings quickly. "It should notice as quickly as possible when the user is skeptical or when human and machine are talking past each other," says Kopp. To this end, the scientists first deliberately provoked communication problems so that the system could learn to recognize them quickly from the human counterpart's reaction. "Humans are very sensitive to subtle signals, and machines should be able to do that too," says Kopp.
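The core idea in Kopp's experiments, watching the user's reaction and flagging a likely breakdown as early as possible, can be sketched as a simple cue counter. The cue names and the two-cue threshold are invented for illustration; the Bielefeld system presumably uses learned models rather than hand-written rules.

```python
# Sketch: detect a likely misunderstanding from a few weak cues in the
# user's last turn. Signals and threshold are assumptions, not the
# Bielefeld implementation.

def misunderstanding_likely(reaction):
    """reaction: dict of simple cues observed in the user's last turn."""
    signals = 0
    if reaction.get("repeated_previous_utterance"):
        signals += 1          # user says the same thing again
    if reaction.get("pause_s", 0) > 3.0:
        signals += 1          # long hesitation before answering
    if reaction.get("expression") == "skeptical":
        signals += 1          # frown, raised eyebrow
    # Require two independent cues before the avatar intervenes,
    # to avoid reacting to every ordinary pause.
    return signals >= 2

print(misunderstanding_likely({"repeated_previous_utterance": True,
                               "pause_s": 4.2}))   # True
print(misunderstanding_likely({"pause_s": 1.0}))   # False
```

Deliberately provoking communication problems, as the researchers did, is then a way of collecting labeled examples of exactly these reactions.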

When the avatar didn't understand the emotions, an angry user threw the PC out of the window

What is often overlooked, however, is that the physical shape of a robot also matters, especially in emotional relationships. "You don't want to have everything that is cute around you all day long," says Marc Hassenzahl, Professor of Ubiquitous Design at the University of Siegen. So the child-like look has had its day. The question of what the target group really wants is important to Hassenzahl: the series of experiments for the "Sympartner" project, which he is running together with the Essen workers' welfare organization and TU Ilmenau, seems amusing at first glance. A person sits in a cardboard box - each with a different design - and plays the robot, while an actress plays through various scenes with it: from the greeting at the door (robot: "Hello, nice to see you back!") to being put to bed and woken in the morning.

During those last activities, the robot stays standing in the bedroom doorway. After many interviews in which the scenes with the actress were shown, the group found out: "This is where intimacy begins." The robot now knocks on the door - even when it is open. Instead of merely imitating people, it is also important to use what robots are particularly good at: "They have infinite patience - and you don't have to thank them," says Hassenzahl. For people who constantly depend on the help of others, something like this can be an enormous emotional relief. A robot has no ulterior motives - how reassuring.

"Socially sensitive and cooperative systems are the future," says Stefan Kopp from Bielefeld University. And that is born of necessity: the DFKI researchers realized how important it is for machines to adapt to people when they tried it without this ability. A predecessor of the current job-interview trainer was meant to support young people with social problems - "only without an integrated emotion model," says Patrick Gebhard. One user apparently felt cornered by the avatar, which confronted him again and again with unpleasant experiences regardless of his emotional state. At some point the young man threw the monitor with the avatar out of the window.