Is the recognition of emotions in the workplace legitimate? (BLOG)

By Ana B. Muñoz Ruiz (UC3M)

A few days ago, one of the first European decisions on emotion recognition systems was issued. Specifically, the Hungarian data protection authority (hereafter, NAIH) reviewed the practices of a bank which, over forty-five days, had used artificial-intelligence-based software to process voice signals. The software analysed and evaluated the emotional states of clients and the key words used in telephone calls. The purpose of this technology was to handle complaints, monitor the quality of the calls and of the work performed, and increase employee efficiency. The results of the analysis were then stored together with the recordings of the calls, and the data were used to rank the calls in priority order.

This first European case prompts some questions: what is an emotion recognition system, and to what extent is its use permitted in the workplace?

In this regard, the draft regulation on artificial intelligence (AI) in the European Union of 21 April 2021 (hereafter, the Artificial Intelligence Act [AIA]) defines an emotion recognition system as an AI system designed to detect or infer the emotions of natural persons on the basis of their biometric data (Article 3(34) AIA). This technology could be described as a modern version of the polygraph or lie detector, one capable of detecting personality traits, feelings, mental health or an employee's level of commitment to his or her company.

One example of this technological reality is the monitoring of employees' brain activity and emotions carried out in some Chinese companies through wireless sensors placed in employees' hats. Combined with algorithms and artificial intelligence, these sensors detect changes in levels of anger, anxiety or mood.

We can also mention Google's smart glasses, which can frame a face and superimpose bars indicating different emotions through the SHORE application developed in Germany. To these we can add applications that analyse a person's tone of voice during a conversation or the language used in an email.

There is a close link between emotion recognition systems and workers' personal data, in the sense that the former are fed with biometric data (facial expressions and brain activity, among others). This connection has consequences for the applicable legal framework: not only the AIA but also Regulation (EU) 2016/679 of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (GDPR) should apply.

The future Artificial Intelligence Act takes into account the repercussions of this emerging technology in the workplace: it prohibits some systems and classifies others that affect job applicants and workers as high risk (Recital 36 AIA). It does so because such systems may infringe fundamental rights enshrined in the Charter of Fundamental Rights of the European Union, such as workers' health and safety, which demand a high level of protection.

In line with the German proposal for a risk-based pyramid of criticality that I mentioned in a previous post, the AIA proposes a risk-based approach that distinguishes between uses of AI that generate (1) an unacceptable risk; (2) a high risk; and (3) a low or minimal risk.

Originally published in Spanish on El Foro del Labos.

© Photo: Image by Lucas Bieri from Pixabay.