Increasingly, organizations are using artificial intelligence (AI) to recognize emotions in people. However, emotion recognition rests on controversial assumptions about emotions and whether they can be measured at all. When it is deployed anyway, it brings risks and raises ethical questions. This is the conclusion of the Dutch Data Protection Authority (AP) in its new Report AI & Algorithms Netherlands (RAN).
Your voice being analyzed to assess "your emotional state" during a call to customer service. Your smartwatch measuring whether you are stressed. Or a chatbot that recognizes your emotions and can therefore respond more empathetically. More and more organizations are using emotion recognition with AI because they believe it will help them improve products and services, for example in marketing and customer contact, but also in public spaces and healthcare.
The AP looked at the use of emotion recognition with AI in customer service, in wearables (such as smartwatches) and in language models. It found that it is not always clear how these AI systems recognize emotions, or whether the outcomes are reliable. Despite the growth of these applications, people also do not always know that emotion recognition is being used, or on what data it is based.
The AP concludes that this type of application must be handled with great caution. Otherwise, there is a risk of discrimination and of curtailing human autonomy and dignity.
'Emotions go to the heart of your human autonomy. So if you want to recognize emotions, it has to be done very carefully and on the basis of reliable technology. That is often not the case at the moment,' said Aleid Wolfsen, chairman of the AP.
Many AI systems that claim to recognize emotions are built on controversial assumptions. As a result, biometric signals, such as voice, facial expression or heart rhythm, are crudely translated into emotions.
'The idea that an emotion is experienced the same way by everyone is simply not true, let alone that those emotions can be measured using biometrics,' Wolfsen said. 'Between cultures there can be major differences in how people experience, express and name emotions. There can also be differences between individuals, for example due to age. Moreover, bodily signals cannot always be translated one-to-one into emotions. After all, a high heart rate is not always a sign of fear, and a loud voice is not always an expression of anger.'
Various applications of emotion recognition will soon fall under specific rules in the European AI regulation and must already comply with privacy legislation such as the General Data Protection Regulation (GDPR). In education and the workplace, the use of AI systems for emotion recognition is already prohibited under that regulation.
Whether this technology is desirable at all is another question. 'It is an ethical question whether, as a society, you consider recognizing emotions with AI permissible,' says Wolfsen. 'This requires a social and democratic deliberation: whether you want to use these systems at all and, if so, in what form and for what purpose.'