26-11-2021 Should algorithms be regulated? A brief analysis of the German legislative proposal: the risk-based criticality pyramid (BLOG)

It has been said that the world of work is ‘algorithming’ itself, in the sense that tasks are being turned into algorithms and work is becoming automated (for more details, see In reality, what […] exactly is an algorithm?). In practice, this technology redistributes functions between a person and a smart machine (the algorithm): in some settings, measures can be adopted only through the algorithm, leaving no margin of discretion to the person in charge. In Labour Law there are already analyses warning of the hazards of algorithms in personnel selection processes (Todolí) and of their use to allocate tasks and assess performance at work (Mercader), as well as of their limitations when compared with workers’ capabilities (Beltrán de Heredia). Against this background, two questions arise: Should algorithms be regulated? And what would be the way to go?
Although several reports have issued recommendations on the subject, the most solid legislative proposal to date is, in our opinion, the one published a few days ago by the Data Ethics Commission set up by the German Government (see long version and short version). In contrast to reports that focus on national regulation (The Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT), 2018), this working document argues that new European legislation on algorithmic systems is required, establishing general horizontal requirements to be implemented through sectoral norms (among them, Labour Law).
For the law to cover algorithms, a degree of flexibility is needed, with less of a regulatory straitjacket. To this end, the German Commission recommends a risk-based legislative approach, distinguishing five levels of criticality according to the probability and the severity of the harm that may result from using an algorithmic system. Regulatory measures are only required from level 2 upwards, as the potential for harm at level 1 is negligible:

- Level 1 (zero or negligible potential harm): no specific measures.
- Level 2 (some potential for harm): checks, including an obligation to carry out and publish an appropriate risk assessment, disclosure to supervisory bodies, and strengthened transparency obligations and rights of access for affected persons.
- Level 3 (regular or significant potential harm): authorisation procedures for algorithm-based applications.
- Level 4 (serious potential harm): stricter, ongoing supervision.
- Level 5 (unsustainable potential harm): partial or full prohibition.

Although the report does not say so expressly, risk level 5 is related to the precautionary principle, which applies in situations of scientific uncertainty and serious, irreversible harm. From our point of view, it would also be advisable to propose revising the Commission communication on the precautionary principle (COM(2000) 1) so as to incorporate the rejection of certain algorithmic systems when their deployment entails a high risk of irreversible harm, or when, due to their opacity, human control and supervision become infeasible.
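To make the pyramid concrete, here is a minimal Python sketch of how such a probability-and-severity classification could be encoded. Everything in it is our own illustrative assumption: the multiplicative risk score, the numeric thresholds and the one-line summaries of the measures are invented for this example, as the Commission's report does not prescribe any formula.

```python
from enum import IntEnum

class Criticality(IntEnum):
    """Illustrative encoding of the five-level criticality pyramid."""
    LEVEL_1 = 1  # zero or negligible potential harm
    LEVEL_2 = 2  # some potential for harm
    LEVEL_3 = 3  # regular or significant potential harm
    LEVEL_4 = 4  # serious potential harm
    LEVEL_5 = 5  # unsustainable potential harm

MEASURES = {
    Criticality.LEVEL_1: "no specific regulatory measures",
    Criticality.LEVEL_2: "risk assessment, publication, transparency and access rights",
    Criticality.LEVEL_3: "ex-ante authorisation procedure",
    Criticality.LEVEL_4: "stricter, ongoing supervision",
    Criticality.LEVEL_5: "partial or full prohibition",
}

def classify(probability: float, severity: float) -> Criticality:
    """Map probability (0-1) and severity (0-1) of harm to a level.

    The multiplicative risk score and the cut-off points below are
    invented for illustration only.
    """
    risk = probability * severity
    if risk < 0.05:
        return Criticality.LEVEL_1
    if risk < 0.25:
        return Criticality.LEVEL_2
    if risk < 0.50:
        return Criticality.LEVEL_3
    if risk < 0.80:
        return Criticality.LEVEL_4
    return Criticality.LEVEL_5

level = classify(probability=0.6, severity=0.9)
print(level.name, "->", MEASURES[level])  # LEVEL_4 -> stricter, ongoing supervision
```

The point of the pyramid is precisely this graduated mapping: the greater the potential harm, the stricter the regulatory response.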
How does this proposal relate to the European rules on data protection? The working document offers some answers. The risks associated with algorithms can persist even when no personal data processing takes place; hence the need for specific regulation of the area. We also propose stronger enforcement of art. 22 of the GDPR, which prohibits decisions based solely on automated processing, subject to certain exceptions. A well-known case is that of the Austrian Employment Agency, which since 2016 has evaluated the job opportunities of candidates with the help of an algorithm that gives a lower score to women, people with disabilities and people over 30 years of age (see news item). In this respect, it would be advisable to extend anti-discrimination legislation to cover situations in which a worker or candidate is discriminated against on the basis of automated data analysis or automated decisions.
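To see why art. 22 GDPR and anti-discrimination rules are invoked here, consider a deliberately simplified, hypothetical scoring model. The coefficients below are invented and do not reproduce the Austrian system; they only show how a solely automated score can penalise protected groups even when candidates are otherwise identical.

```python
# Hypothetical sketch: group-based penalties inside an automated score.
# All numbers are invented for illustration.

BASE_SCORE = 0.70

GROUP_WEIGHTS = {            # invented coefficients, not the real system
    "female": -0.10,
    "over_30": -0.05,
    "disability": -0.15,
}

def employability_score(attributes: set[str]) -> float:
    """Return a score in [0, 1]; protected attributes lower it."""
    score = BASE_SCORE + sum(GROUP_WEIGHTS.get(a, 0.0) for a in attributes)
    return max(0.0, min(1.0, score))

# Two candidates with identical qualifications but different protected
# attributes receive different scores -- the kind of solely automated
# decision that art. 22 GDPR restricts.
print(round(employability_score(set()), 2))                  # 0.7
print(round(employability_score({"female", "over_30"}), 2))  # 0.55
```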
Given the technical complexity of algorithms and how quickly they become obsolete, we propose resorting to technical standards to complement those aspects that escape the legislator's control. Technical standards are drawn up by private bodies (in Spain, the Spanish Standards Association) and belong to the family of soft law (i.e. non-binding rules). Standardisation in the field of artificial intelligence is, however, still at a very early stage, and little progress is being made, perhaps because everyone is waiting for legislators to take the first steps.
Written by Ana B. Muñoz Ruiz (UC3M)
Originally published in Spanish on El Foro de Labos
© Photo: Fernando Arcos (retrieved from Pexels)