Artificial intelligence and the use of algorithms to manage work: the dehumanisation of workers (BLOG)

By Ana B. Muñoz Ruiz (UC3M)

Some time ago I took part as a speaker in a Course on the Digital Work Environment and the Prevention of Workplace Hazards organised by the Basque Institute of Safety and Health at Work (Osalan). I noted there that reality shows us that both public administrations and private companies use algorithms. One example is SyRI (System Risk Indication) in the Netherlands, used to predict the likelihood of social security and tax fraud by benefit applicants. In private companies we have, among others, the chatbot Phai, created by PredictiveHire, which asks candidates a series of open questions about a job vacancy. It then analyses their answers to detect work-related personality traits such as dynamism, initiative and flexibility. On its website, Phai promises fairer hiring procedures and stresses that its interviews are very fast, inclusive and unbiased. But what are the limits on algorithms? How can workers defend themselves against a decision taken by an algorithm?

It seems clear that the right to privacy and the protection of personal data set a limit on the use of algorithms. This is stated in the judgment of the District Court of The Hague delivered on 5 February 2020 on SyRI, which indicates that there is a special responsibility in the use of emerging technologies and concludes that the use of SyRI violates Article 8 of the European Convention on Human Rights (the right to respect for one's private and family life), due to the lack of transparency and the biased use of the instrument, deployed exclusively in neighbourhoods with low-income residents or areas where people from ethnic minorities live. In the labour field, however, other rights, such as workers' rights to safety and health, should also be taken into account.

Emerging technology not only gives the worker a robot colleague; the boss may also be an algorithm (Mercader, 2017). In practice, the use of this technology entails a distribution of functions between a person and a smart machine (the algorithm), to the point that some measures may be adopted by algorithms alone, with no margin for decision by the worker. The progressive loss of functions and autonomy, together with the continuous monitoring of the worker's output, can lead to serious health risks such as depression, anxiety or stress, as a result of feeling relegated to the status of a machine in the production process. This is even more the case when the machines do not rest, urging the worker to extend the working day with soft messages like: “there are still exciting tasks assigned to you, are you sure you want to stop now?” (Todolí, 2020). It is true that this trend will be less evident in positions where team-management qualities such as empathy, leadership and conflict resolution are important (e.g. workplace hazard prevention technicians or health and safety managers), since these qualities are difficult for smart machines to learn. Even so, these new risk factors should be considered by the employer in risk assessments, and preventive measures should be put in place to reduce their impact on employees’ health.

Predictive algorithms can carry out ergonomic assessments of the risk of physical stress to the worker using the tool, via sensors embedded in their clothing that collect real-time information on the person’s health, based on an analysis of parameters such as posture, loads, times and physiological state (heart rate, body temperature, etc.) (see the H2020 Bionic Project). However, account must be taken of the possibility of false positives (the worker is erroneously graded as at risk of physical stress) and false negatives (the worker is graded as free of physical stress although the risk exists), and of the fact that these systems may give preference to certain values over others (general gains over specific losses). For example, checks should be made to see whether the algorithm prioritises production over a worker’s basic right to safety and health. In this sense, Recommendation CM/Rec(2020)1 of 8 April 2020 of the Committee of Ministers to member States on the human rights impacts of algorithmic systems is relevant: it urges member States to adopt a cautious approach and calls for the rejection of certain systems when their deployment involves a high risk of irreversible harm, or when, due to their opacity, human control and supervision become unworkable, in line with the German criticality pyramid based on risk mentioned above (see “Should algorithms be regulated? A brief analysis of the German legislative proposal: the criticality pyramid based on risk”). The term ‘high risk’ applies to the use of algorithmic systems in processes or decisions that can have serious consequences for people, in situations where a lack of alternatives generates a particularly high probability of infringement of human rights, even introducing or extending distributive injustice.
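To make the false positive/false negative distinction concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption (the toy thresholds, the invented sensor readings, the function names); it is not the Bionic project's actual model, only a way of showing how the two error types are counted when a risk grade is compared against the real situation.

```python
# Hypothetical sketch: evaluating an ergonomic risk classifier for false
# positives and false negatives. Thresholds and data are invented.

def predicted_risk(heart_rate: int, load_kg: float) -> bool:
    """Toy rule: flag physical-stress risk above illustrative thresholds."""
    return heart_rate > 110 or load_kg > 20.0

# (heart_rate, load_kg, actually_at_risk) — invented sample readings
readings = [
    (95, 10.0, False),   # correctly cleared
    (120, 5.0, True),    # correctly flagged
    (115, 8.0, False),   # false positive: flagged but not at risk
    (100, 18.0, True),   # false negative: cleared but the risk exists
]

false_positives = sum(1 for hr, kg, at_risk in readings
                      if predicted_risk(hr, kg) and not at_risk)
false_negatives = sum(1 for hr, kg, at_risk in readings
                      if not predicted_risk(hr, kg) and at_risk)

print(false_positives, false_negatives)  # 1 1
```

The point of the exercise is the last two counters: a system tuned to minimise one error type tends to inflate the other, which is precisely the value trade-off (general gains over specific losses) the text warns about.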

The employer is obliged to prevent accidents, but can also predict them with the support of machine learning (the algorithm learns from experience to perform a task with precision) and related subfields such as deep learning (the algorithm carries out end-to-end learning: it is fed unprocessed data for the task to be performed and learns how to do it automatically, simulating human thinking). An example is the construction company Suffolk in the United States, which uses a deep learning algorithm ‘trained’ with images of construction sites and records of workplace accidents. It is then set to monitor a new site and raise alerts about situations that could lead to an accident, such as a worker not wearing gloves or working too close to a hazardous machine. When using this technology it is advisable to avoid designs of a “dead man’s switch” nature and to always give a human operator the option to override the algorithm at any given time, with a procedure defining the situations in which doing so is appropriate.
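The human-override principle described above can be sketched as follows. This is a hypothetical illustration, assuming invented names (`Alert`, `algorithmic_alerts`, `review`) and a stand-in for the vision model; it is not Suffolk's system, only the design idea that the algorithm proposes and a person disposes.

```python
# Hypothetical sketch of a human-in-the-loop safety monitor: the algorithm
# raises alerts, but a human operator can always dismiss them, so the
# algorithm never has the last word.

from dataclasses import dataclass

@dataclass
class Alert:
    site_image_id: str
    reason: str  # e.g. "no gloves detected"

def algorithmic_alerts(image_ids: list) -> list:
    """Stand-in for a model trained on site images and accident records:
    here it simply flags the invented '-hands' camera frames."""
    return [Alert(i, "no gloves detected")
            for i in image_ids if i.endswith("-hands")]

def review(alerts: list, operator_overrides: set) -> list:
    """A human operator may override (dismiss) any alert by its image id."""
    return [a for a in alerts if a.site_image_id not in operator_overrides]

alerts = algorithmic_alerts(["cam1-hands", "cam2-ladder", "cam3-hands"])
kept = review(alerts, operator_overrides={"cam3-hands"})  # operator dismisses one
print([a.site_image_id for a in kept])  # ['cam1-hands']
```

The design choice worth noting is that the override lives in a separate, human-controlled step (`review`), so disabling or pausing the model can never leave workers locked into its decisions.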

While the data entered into a system may be neutral and representative, the combination of different types of data can lead to discriminatory effects. This is stated in the recent guide on algorithms and artificial intelligence approved by the CNIL (the French data protection authority) on 2 June 2020, which concludes that automated systems “tend to stigmatise members of social groups that are already underprivileged and subjected”. From a preventive perspective, immigrant, disabled and female workers are potential victims of algorithms due to their vulnerability. The document proposes sanctions for those who apply discriminatory decisions, and recommends measures of the following nature: training and awareness-raising among the professionals who create and use algorithmic systems; support for research into biases and methodologies to prevent them; stricter transparency obligations that reinforce the need to explain the logic behind algorithms (and allow third parties, not just those affected by an automated decision, access to the criteria the algorithms use); and impact evaluation studies to anticipate the discriminatory effects of algorithms. As we can see, the French guide offers more guarantees than those envisaged in Article 22, on decisions based solely on automated processing, of Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data.
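One very simple form the recommended impact evaluation studies can take is comparing an algorithm's selection rates across groups. The sketch below is purely illustrative: the data are invented and the 0.8 comparison threshold is an assumption borrowed from common disparate-impact practice, not a rule stated by the CNIL guide.

```python
# Hypothetical impact-evaluation check: does an automated decision select
# one group at a markedly lower rate than another? Data are invented.

def selection_rate(decisions: list) -> float:
    """Share of positive (True) outcomes in a list of decisions."""
    return sum(decisions) / len(decisions)

# Invented outcomes of an automated hiring decision, split by group
group_a = [True, True, True, False]    # 3 of 4 selected: 75%
group_b = [True, False, False, False]  # 1 of 4 selected: 25%

ratio = selection_rate(group_b) / selection_rate(group_a)
print(round(ratio, 2))  # 0.33 — well below an illustrative 0.8 threshold
```

A ratio this low would not prove discrimination on its own, but it is exactly the kind of early signal an impact study is meant to surface before a system is deployed.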

Finally, we would point out that in the SyRI court case one of the plaintiffs was the Dutch Trade Union Confederation. Indeed, it seems advisable that social stakeholders and collective bargaining parties should take part in controlling the use of algorithms, particularly when these can affect workers’ safety and health. In this respect, it is a good start that some trade union documents include statements such as: “We need to open up, in a collective manner, the debate on the control of algorithms, the regulation of their use, the setting of limits, public and/or company records, or the negotiation of their use” (CCOO, Balance and proposals for collective bargaining, 2019).


Originally published in Spanish on El Foro del Labos.