On 21 April 2021, the EU Commission adopted a proposal for a regulation (the AI Act) on AI systems, described as “the first ever legal framework on AI.” The AI Act will impose significant changes on businesses in almost all sectors, across the EU and beyond. Although still new, the AI Act has already raised controversial questions, in particular regarding its scope. In two blogs, we will highlight the key takeaways that AI users and providers need to know at this stage.
What Is the Artificial Intelligence Act?
It is a proposal for a regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.
Regulations are legal acts that apply automatically and uniformly to all EU countries as soon as they enter into force, without needing to be transposed into national law. They are binding in their entirety on all EU countries.
The choice of a regulation as the legal instrument is justified by the need for a uniform application of the new rules, such as the definition of AI, the prohibition of certain harmful AI-enabled practices and the classification of certain AI systems. This will reduce legal fragmentation and facilitate the development of a single market for lawful, safe and trustworthy AI systems.
Scope of application:
To WHOM Does It Apply?
Providers of AI systems
Users of AI systems
It does not apply to personal, non-professional uses.
WHERE Does It Apply?
The Act will apply to both public and private actors inside and outside the EU, as long as the AI system is placed on the Union market or its use affects people located in the EU.
The extraterritorial reach of the Act will cause a so-called Brussels Effect, meaning that the Act will most likely become a ‘standard’ across the globe: many international companies want to do business in the EU and will therefore have to adopt EU rules and regulations (similar to the GDPR, which has served as a model for the data privacy rules of over 100 countries).
WHEN Does It Apply?
Why Was the Act Prepared in the First Place?
1. to ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;
2. to ensure legal certainty to facilitate investment and innovation in AI;
3. to enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems; and
4. to facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.
There are 44 definitions in this Act, but let us highlight a few of the most important ones at this stage.
The definition of AI will most likely be subject to amendment in the future, but the Commission clearly intends it to be broad, so that it covers not only AI systems offered as software products but also statistical approaches and products that rely directly or indirectly on AI services.
This definition has been questioned from a technical perspective, and it remains to be seen whether and how the regulator will amend it in the future, as the current definition under Annex I covers only existing technologies and not probable future innovations.
Another important definition is that of the ‘provider’. ‘Provider’ means a natural or legal person, public authority, agency or other body that develops an AI system, or that has an AI system developed, with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge.
On the other hand, a user is any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.
What About Fundamental Rights?
This proposal seeks to ensure a high level of protection for fundamental rights. It will enhance and promote the protection of the rights protected by the Charter:
the right to human dignity (Article 1),
respect for private life and protection of personal data (Articles 7 and 8),
non-discrimination (Article 21) and equality between women and men (Article 23).
It also aims to prevent a chilling effect on the rights to freedom of expression (Article 11) and freedom of assembly (Article 12).
In addition, the proposal will positively affect:
the rights of defence and the presumption of innocence (Articles 47 and 48),
the workers’ rights to fair and just working conditions (Article 31),
a high level of consumer protection (Article 38),
the rights of the child (Article 24),
the integration of persons with disabilities (Article 26), and
the right to a high level of environmental protection and the improvement of the quality of the environment (Article 37).
Title II establishes a list of prohibited AI practices. The regulation follows a risk-based approach, differentiating between uses of AI that create
(i) an unacceptable risk,
(ii) a high risk, and
(iii) low or minimal risk.
The list of prohibited practices in Title II comprises all AI systems whose use is considered unacceptable because it contravenes European Union values, for instance by violating fundamental rights. The prohibitions cover practices that have a significant potential to manipulate persons through subliminal techniques beyond their consciousness, or to exploit the vulnerabilities of specific vulnerable groups such as children or persons with disabilities, in order to materially distort their behavior in a manner likely to cause them or another person psychological or physical harm. The proposal also prohibits AI-based social scoring for general purposes by public authorities. Finally, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes is also prohibited unless certain limited exceptions apply.
High-Risk AI Systems
Title III contains specific rules for AI systems that create a high risk to the health and safety or fundamental rights of natural persons. In line with the risk-based approach, such high-risk AI systems are permitted on the European market subject to compliance with certain mandatory requirements and an ex-ante (pre-market) conformity assessment. The classification of an AI system as high-risk is based on its intended purpose and the modalities for which the system is used.
High-Risk AI Applications
The list of high-risk AI systems in Annex III contains a limited number of AI systems whose risks have already materialized or are likely to materialize in the near future.
1. Biometric identification (AI used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons);
2. Critical infrastructure (e.g. energy and transport);
3. Education and vocational training (e.g. AI used for the purpose of determining access or assigning natural persons to educational and vocational training institutions);
4. Employment, workers management and access to self-employment (e.g. CV-screening and storing for recruiting purposes);
5. Essential private services and public services (e.g. AI evaluating the eligibility of natural persons for public assistance benefits and services such as grants and loans);
6. Law enforcement (e.g. evaluation of the reliability of evidence in the course of investigation or prosecution of criminal offences);
7. Migration, asylum and border control management (e.g. verification of the authenticity of travel documents); and
8. Administration of justice and democratic processes (e.g. AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts).
This blog has a second part, which will be published shortly.