Biblioteca PGR

Analítico de Periódico

Humpty Dumpty and high-risk AI systems : the ratione materiae dimension of the Proposal for an EU Artificial Intelligence Act / Jérôme De Cooman
Market and Competition Law Review, v.6 n.1 (April 2022), p.49-88


On 21 April 2021, the European Commission proposed a Regulation laying down harmonised rules on artificial intelligence (hereafter, “AI”). This so-called Artificial Intelligence Act (hereafter, the “Proposal”) is based on European values and fundamental rights. Far from appearing ex nihilo, it relies heavily on the European Ethics Guidelines proposed by the independent high-level expert group on AI set up by the European Commission. In addition, the European Commission’s White Paper on AI had already called for a regulatory framework concentrating on a risk-based approach to AI regulation that minimises potential material and immaterial harms. The Proposal, internal market oriented, sets up a risk-based approach to AI regulation that distinguishes unacceptable, high, specific and non-high risks. First, AI systems that produce unacceptable risk are prohibited by default. Second, those that generate high risk are subject to compliance with mandatory requirements such as transparency and human oversight. Third, AI systems interacting with natural persons have to respect transparency obligations. Fourth, developers and users of non-high-risk AI systems should voluntarily endorse the requirements for high-risk AI systems. Exploring the origins of the AI Act and the ratione materiae dimension of the Proposal, this article argues that the very choice of what constitutes a high-risk AI system and the astounding complexity of this definition are open to criticism. Rather than a full-extent analysis of the Proposal’s requirements, the article focuses on the definition of what is an unacceptable, high, specific and non-high-risk AI system. With a strong emphasis on high-risk AI systems, the Commission comes dangerously close to the pitfall this article humorously labels the Humpty Dumpty fallacy, in tribute to the nineteenth-century English author Lewis Carroll.
Just because the Commission exhaustively enumerates high-risk AI systems does not mean the residual category displays non-high risk. To support this argument, this article introduces recommender systems for consumers and competition law enforcement authorities. Neither of these two examples falls under the AI Act’s scope of application. Yet the issues they have raised might well be qualified as high-risk in a different context. In addition, the AI Act, although inapplicable, could have provided a solution.