In April 2021, the European Commission (EC) published a proposal for an EU regulatory framework on artificial intelligence (AI), which has since been criticized by actors from different backgrounds such as Human Rights Watch, openDemocracy, TechCrunch, European Digital Rights (EDRi) and many more. This article provides an overview of the critics’ concerns, laying out in particular why various EU civil society organizations (CSOs) and EDRi have argued that the proposal, if adopted, would not effectively protect fundamental rights. Focusing on the suggestions made in EDRi’s call, the article offers a few additional remarks alongside insights into the AI Act’s (AIA’s) provisions.
Failing ‘Social Safety’ – Proposal Fails to Protect Fundamental Rights
TechCrunch, Human Rights Watch (HRW), openDemocracy and various EU CSOs belonging to the EDRi network agree that the EC’s proposal for an AIA falls short with regard to the protection of fundamental rights. Although the EC’s proposal draws attention to the fact that “[t]he use of AI with its specific characteristics (e.g. opacity, complexity, dependency on data, autonomous behaviour) can adversely affect a number of fundamental rights enshrined in the EU Charter of Fundamental Rights [CFR]”, it does not explicitly prohibit the implementation of “all AI systems posing an unacceptable risk to fundamental rights”, as 114 CSOs argue in a call led by EDRi. To effectively ensure the latter, the call emphasizes, the EC must take the following points into consideration:
- To “Remove the high threshold for manipulative and exploitative systems under Art. 5 (1)(a) and (b)”
Art. 5(1)(a) and (b) state that certain AI practices shall be prohibited when they “[deploy] subliminal techniques beyond a person’s consciousness…[or exploit] any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person…in a manner that causes or is likely to cause that person or another person physical or psychological harm”. As the call by CSOs illustrates, the identical final parts of these provisions wrongly suggest that AI systems which interfere with individual autonomy (i.e. operate ‘beyond a person’s consciousness’ or ‘exploit’ vulnerabilities) can do so without causing harm. As EDRi points out, infringing on individual autonomy should not be lawful in itself, because it constitutes an act of “impermissible harm”. Beyond that, it could be added that these provisions indirectly imply that whether physical or psychological harm occurs depends on each person’s individual level of resilience. This connotation is problematic in itself, because it follows the logic that the same violation should be punished when it is applied to one person, but not when it is carried out against another. In other words, adopting this article would violate the notion of equal rights and equality before the law (CFR Art. 20).
- To “Expand the scope of Art. 5 (1)(b) to include a comprehensive set of vulnerabilities”
Art. 5(1)(b) mentions age and physical or mental disability as categories that reflect the vulnerability of a particular group. However, CSOs have called for a more comprehensive understanding of vulnerability, remarking that “any sensitive or protected characteristic, including but not limited to: age, gender and gender identity, racial or ethnic origin, health status, sexual orientation, sex characteristics, social or economic status, worker status, migration status, or disability” must receive attention. In particular, the suggested framing “any sensitive or protected characteristic, including but not limited to” makes sense considering that vulnerabilities change over time. In any case, prevention is better than hindsight. Especially in recent years, in which racial discrimination, discrimination against refugees and migrants as well as genocides have taken on ‘new’ forms, it is critical to avoid entrenching bias as well as gaps in data privacy and consumer protection. The latter is particularly important to prevent specific groups from being targeted – not only to forbid the manipulation of their material behaviour, but also to ensure their safety and to avert individual and targeted threats (e.g. discrimination that limits access to society and basic services, terrorism, genocide etc.).
- To “Adapt the Art. 5 (1)(c) prohibition on social scoring to extend to the range of harmful social profiling practices currently used in the European context”
Art. 5(1)(c) of the EC’s proposal for an AIA prohibits “the placing on the market, putting into service or use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following: (i) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected; (ii) detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity”. As EDRi argues, this prohibition should be extended to private actors, and certain criteria such as ‘trustworthiness’ and ‘a certain period of time’ should be omitted. While EDRi’s claim is certainly plausible, it would be particularly interesting to discuss how this could also affect the regulation of fintech companies in the EU. Especially because digital lending might coincide with checks on trustworthiness, it would be interesting to observe how the AIA could prevent digital credit providers (DCPs) from collecting additional information that results in unequal access to digital credit or in access granted on the basis of social scoring.
- To “Extend the Art. 5 (1)(d) prohibition on remote biometric identification (RBI) to apply to all actors, not just law enforcement, as well as to both ‘real-time’ and ‘post’ uses”
According to the call, Art. 5(1)(d), which prohibits “the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for” certain determined objectives (i.e. “the targeted search for victims of crime, including missing children” (i); “the prevention of a…threat to life or physical safety of natural persons or of a terrorist attack” (ii); “the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence…” (iii)), also needs to prohibit “putting [RBI systems] on the market/into service” when it is reasonably foreseeable that such systems could easily be misused by different actors. Although EDRi does not specify when it is “reasonably foreseeable” that an RBI system can easily be exploited for criminal or other harmful purposes, the demand nevertheless sets a certain standard for RBI system developers. Arguably, it captures well that security risks need to be tackled at their roots. While the trading of RBI systems on illegal marketplaces cannot be avoided, making such systems unavailable to the average citizen is an important step. For this purpose, it might also be necessary to specify what ‘reasonably foreseeable’ refers to, to set strict legal rules, and potentially to demand ongoing monitoring of how security breaches develop over time.
In addition to the latter suggestion, EDRi also demanded that the broad exceptions in Art. 5(1)(d), Art. 5(2) and Art. 5(3) be removed in order to comply with the “necessity and proportionality requirements” of the CFR. Among other things, Art. 5(2) foresees that “[t]he use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement” is permissible under the conditions mentioned in Art. 5(1)(d) and provided that two further criteria are met. The first criterion (i) relates to forecasting the “seriousness, probability and scale of harm” that would result if an RBI system were not used. The second criterion (ii) relates to respecting and estimating the consequences that the use of an RBI system in a particular situation would have for the rights and freedoms of citizens. Whereas, in theory, this article might aim at protecting citizens’ fundamental rights and freedoms, in practice it is too vague. The same applies to Art. 5(3), which holds that prior authorization “by a judicial authority or by an independent administrative authority of [a] Member State” is compulsory when using RBI systems in publicly accessible spaces, with this process being subject to national law unless a particular case concerns a “situation of urgency”. As EDRi and its member organizations have argued, such a provision indeed leaves much room for interpretation. Especially because this phrasing is reminiscent of the controversy surrounding the right to self-defense in International Humanitarian Law (IHL), the EC should reconsider whether such exceptions would weaken state accountability.
- To “Prohibit [a range of] practices that pose an unacceptable risk to fundamental rights under Art. 5”
According to EDRi and its member organizations, a range of practices that threaten the protection of fundamental rights in the EU remain unaddressed under Art. 5. Among these are the use of emotion recognition systems and biometric categorisation systems, “AI physiognomy by using data about our bodies to make problematic inferences about personality, character, political and religious beliefs”, and the use of AI systems by law enforcement and criminal justice authorities or by immigration enforcement authorities for the purpose of profiling or risk-assessing individuals. The EDRi-led call suggests prohibiting these practices explicitly, all of which relate directly or indirectly to profiling. According to Annex III, AI systems that (6)(e) are used by law enforcement authorities for predicting crimes or (6)(f) are used by those authorities, among other things, in the course of an investigation, are considered high-risk AI systems. As Section 5.2.3 ‘High-Risk AI Systems’ emphasizes, “high-risk AI systems are permitted on the European market subject to compliance with certain mandatory requirements and an ex-ante conformity assessment”. This is particularly problematic, because the act of ‘profiling’ might conflict with “the fundamental agency of human beings to apply their gifts to thrive” if the motives for profiling relate to the exclusion of a particular group from the public sphere and from the continuum of fundamental rights.
As Stewart M. Patrick states in an article for the Council on Foreign Relations, the above-mentioned definition goes back to how Hannah Arendt and Mark Lagon understood human dignity. More concretely, Hannah Arendt “[understood] dignity as an attribute that somehow precedes or justifies human rights”, as John Douglas Macready explained in an article from 2019. As Macready re-emphasizes, for Arendt “the dignity of human beings [is] based upon ‘the ability to think, speak, and act’”. An individual’s “politico-linguistic existence”, which includes the ability to participate in politics, thus needs to be protected so that human dignity can remain a central pillar of EU politics – especially considering that Art. 1 of the CFR states that “[h]uman dignity is inviolable”. Beyond that, AI systems remain deeply flawed at the current stage, which means that, if they were used for emotion recognition, for instance, there would be no guarantee that they even capture reality. This makes them particularly unsuited for use in law enforcement, where they could lead to discrimination in criminal proceedings and undermine justice as well as the right to a fair trial (CFR Art. 47). Overall, the concerns of EDRi and its member organizations should therefore be taken seriously. If you want to contribute your own thoughts, do not hesitate to drop us a comment on LinkedIn!
Centurion Plus
Are you an investor or a start-up with an interest in innovative AI technologies and their use in Africa, Germany or another EU country? Then we would be humbled to support you on the legal side! We employ legal experts with knowledge across various African jurisdictions and have long-standing experience advising on topics such as labour and immigration, tax and customs, contracts and negotiations, corporate governance and compliance as well as data protection. In addition to our African offices, we have had an office in Germany since 2020. That is to say, we have an international mindset… You too? Then contact us today to collaborate!