On July 12, 2024, the European Union’s Artificial Intelligence Regulation (the “Regulation”), also known as the Artificial Intelligence Act (AI Act), was published in the Official Journal of the European Union; it entered into force on August 1, 2024. Originally a legislative proposal developed by the European Commission, it regulates the use of artificial intelligence (AI) within the European Union (EU).
When discussing AI, it is hard not to recall the many futuristic films and novels portraying a dystopian world governed by intelligent machines that surpass and subjugate humanity. In contrast to these popular fears, the enactment of the Regulation represents a realistic and proactive approach to avoiding such a scenario. Rather than leaving the development of AI uncontrolled, the Regulation establishes a framework designed to supervise and guide that development, ensuring that fundamental human and societal rights are respected. By regulating AI responsibly, the EU aims to foster innovation in a safe and ethical environment and to prevent abuse during AI’s rapid development, standing in contrast to the notion of a future in which AI becomes a threat to humanity.
I. Objective
The goal is to establish a legal framework that ensures safety, transparency, and respect for fundamental rights in the development and use of AI systems.
According to Article 1 of the Regulation: “The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation.”
This statement of legislative purpose alone reveals the many elements behind the initiative’s extensive and ambitious scope and its concern for protecting people.
Its primary objectives are to:
- Establish a regulatory framework that promotes AI innovation, allowing European companies and developers to compete on an equal footing at the EU and global level while adhering to high ethical and security standards.
- Protect European citizens by ensuring that AI systems do not compromise their physical or emotional safety or violate fundamental rights guaranteed by EU or national law, such as the rights to non-discrimination and privacy.
- Require AI systems to be transparent in their functioning and decision-making processes. This entails an obligation for developers to comply with strict guidelines on development, market placement, use, and post-market evaluation, and it enables users to question and report potentially negative outcomes.
II. Scope
Within the EU, the Regulation applies to: providers who place AI systems on the market; deployers of AI systems established or located in the EU or in a third country, when the output produced by the AI system is used within the EU; manufacturers, importers, and distributors of AI systems; authorized representatives of providers not established in the Union; and affected persons located within the EU.
Conversely, the Regulation does not apply in areas outside the scope of EU law. It also does not affect the competencies of Member States in matters of national security and is without prejudice to rules established by other EU legal acts relating to consumer protection and product safety, among other areas.
Notably, the Regulation does not apply to AI systems developed and operated exclusively for scientific research and development purposes. Nor does it apply to AI systems placed on the market, deployed, or used, with or without modification, exclusively for military, defense, or national security purposes, regardless of the entity conducting these activities.
III. AI System Classification – Varied Risks
For the purposes of the Regulation, an “AI system” is defined as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
The Regulation classifies AI systems into four levels of risk. In summary, “unacceptable” risk AI systems are prohibited outright. The Regulation focuses on “high-risk” AI systems, which are strictly regulated. The rules for “limited-risk” AI systems are much lighter, consisting mainly of transparency obligations. Finally, “minimal-risk” AI systems are not regulated.
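To make this four-tier structure concrete, the purely illustrative sketch below models it as a simple lookup. The tier names and one-line treatment summaries are paraphrases of the Regulation, and the code itself is a hypothetical convenience, not official terminology or legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the Regulation (illustrative labels)."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strictly regulated
    LIMITED = "limited"            # mainly transparency obligations
    MINIMAL = "minimal"            # no specific obligations

# Paraphrased regulatory treatment per tier; a summary, not legal text.
TREATMENT = {
    RiskTier.UNACCEPTABLE: "Prohibited: may not be placed on the market or used.",
    RiskTier.HIGH: "Permitted subject to strict obligations (risk management, documentation, oversight).",
    RiskTier.LIMITED: "Permitted subject mainly to transparency obligations.",
    RiskTier.MINIMAL: "Permitted without specific obligations under the Regulation.",
}

for tier in RiskTier:
    print(f"{tier.value:>12}: {TREATMENT[tier]}")
```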
III.I Unacceptable Risk – Prohibited AI Practices
It is prohibited to place on the market, deploy, or use an AI system that:
- Manipulates human behavior to alter individual or group actions. This includes AI systems that deploy: (a) subliminal techniques operating beyond a person’s consciousness; or (b) exploitation of the vulnerabilities of a person or specific group based on their age, disability, or specific social or economic situation.
- Socially scores a person or group with the aim of: (a) evaluating or classifying them based on their behavior or personal characteristics in a way that produces detrimental or unfavorable treatment that is unrelated to the contexts in which the data were originally generated or collected, or that is unjustified or disproportionate to their social behavior or its severity; or (b) evaluating or predicting their criminal intent based on their profile, traits, and characteristics (certain exceptions apply).
- Uses real-time facial recognition in public spaces to: (a) infer a person’s emotions in the workplace and/or educational settings (except for specific medical or security reasons); (b) create or expand facial recognition databases by indiscriminately scraping facial images from the internet or closed-circuit television; (c) classify individuals and allow “deductions or inferences about their race, political opinions, union membership, religious or philosophical beliefs, sexual life or sexual orientation…” (this prohibition does not apply to legally acquired biometric data); or (d) enforce the law, except when strictly necessary for the identification and targeted search of kidnapping victims, missing persons, or victims of human trafficking, or of persons suspected of offenses punishable by sentences of at least four years in prison. Such use is subject to guarantees of the individual’s fundamental rights under local law and, in some cases, prior or subsequent judicial authorization.
III.II High Risk
The classification of an AI system as “high risk” is limited to those systems that have a significant harmful effect on the health, safety, and fundamental rights of individuals in the EU. The Regulation focuses on these “high-risk” AI systems by regulating them so that they are reliable, robust, accurate, and maintain a degree of cybersecurity that ensures consistent performance throughout their lifecycle.
The magnitude of harm these systems can cause corresponds directly to their impact on the fundamental rights set out in the Charter of Fundamental Rights of the EU (the “Charter”). These include the right to, and respect for, human dignity and private and family life; freedom of assembly, association, expression, and information; consumer and environmental protection; gender equality; the rights of workers and persons with disabilities; the right to judicial protection and impartiality and the presumption of innocence; and the rights to health and education, as well as children’s rights as enshrined in the Charter and the UN Convention on the Rights of the Child.
This category covers the following areas:
(A) Critical infrastructures
These are AI systems used as safety components in the management and operation of critical infrastructure, including digital infrastructure, transportation, and the supply of water, gas, heating, and electricity. The malfunction of such systems could materially disrupt social and economic activity and endanger the lives and health of large groups of people.
(B) Education and vocational training
These are AI systems used in education and professional training that, through learning assessments, determine admissions to certain institutions and/or jobs. These systems could affect the right to education and vocational training, as well as discriminate against individuals based on age, disability, beliefs, race, religion, or sexual orientation.
(C) Employment and workforce management
These are AI systems used in hiring and staff selection, as well as in assigning tasks, evaluating performance, and determining job continuity. They can affect fundamental rights such as the right to work, to a decent livelihood, to personal data protection, and to privacy.
(D) Essential services and benefits
These are AI systems that allow authorities to grant, reduce, extend, or revoke access to essential services and benefits, both public and private. Examples include health care, social security, maternity protection, work-related illness benefits, housing assistance, and pensions and retirement benefits, areas in which individuals are often highly vulnerable in relation to authorities.
These systems could have a material impact on people’s livelihoods while affecting fundamental rights such as the right to social protection, non-discrimination, and human dignity.
Also considered under this category are systems used to assess individuals’ creditworthiness or solvency, as these have a direct impact on financial inclusion and on any area in which an individual’s financial situation plays a role.
AI systems used for these purposes can discriminate against certain people or groups and perpetuate historical patterns of discrimination, such as on grounds of race, ethnicity, gender, disability, age, or sexual orientation, or generate new forms of discrimination.
Finally, this area includes systems that manage and classify emergency calls and decide on the dispatch of assistance in emergency situations, decisions critical to people’s lives, health, and property.
III.III Limited Risk
Limited-risk AI systems are those that present a moderate risk to users and society. These systems do not require as strict regulation as high-risk systems, but they are subject to specific obligations to ensure transparency and safety. This includes the requirement that the user must be informed during the first interaction that they are interacting with an AI system and/or that content, images, or text have been artificially generated. A classic example is a chatbot service, where the user must be aware they are not interacting with a human.
To this end, the relevant authority will facilitate the development of good practice codes at the EU level to warn users about the artificial origin of the pertinent content. These systems do not require a compliance evaluation before they enter the market, but developers and operators are expected to adopt measures to mitigate any residual risk and ensure safe and responsible use.
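As a purely hypothetical sketch of the disclosure obligation described above, a chatbot might surface a notice on first contact. The function names and notice wording below are assumptions, not text prescribed by the Regulation.

```python
def generate_reply(message: str) -> str:
    """Stand-in for a real chatbot model (hypothetical)."""
    return f"Echo: {message}"

def respond(message: str, first_turn: bool) -> str:
    """Return a reply, prepending an AI-disclosure notice on the first turn.

    The notice wording is an illustrative assumption; the Regulation requires
    that users be informed they are interacting with an AI system but does
    not prescribe exact text.
    """
    reply = generate_reply(message)
    if first_turn:
        return "NOTICE: You are interacting with an AI system.\n" + reply
    return reply

print(respond("Hello", first_turn=True))
```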
III.IV Minimal Risk
AI systems that present minimal risk, such as AI-driven video games and spam filters, do not require specific regulatory intervention and are not subject to direct regulation; they account for many of the AI applications available in the EU single market.
Although not specifically mentioned as a category in a separate article, it can be inferred from the regulatory framework that systems falling outside the other categories are considered minimal risk. However, the fact that the Regulation does not impose specific obligations for AI applications not falling under higher-risk categories does not exempt developers from the need to manage and address associated legal risks. These may include issues related to intellectual property, data protection, confidentiality, trade secrets, cybersecurity, consumer protection, labor law, civil and criminal law, among others.
IV. Obligations of High-Risk AI System Providers
The Regulation strictly outlines the obligations of providers of high-risk AI systems, focusing on risk management, data quality, technical documentation, transparency, and continuous monitoring.
(A) Risk Management System
Providers must implement a continuous risk management system throughout the system’s lifecycle. This system requires periodic and systematic reviews and updates, as well as quality and functionality testing. It must identify known and foreseeable risks (to health, safety, or fundamental rights) both when the system is used for its intended purpose and under conditions of “reasonably foreseeable” misuse, before and after the system is placed on the market. The system must also include measures to mitigate these risks.
(B) Data and Data Governance
High-quality, relevant data must be used to train, validate, and test AI systems, ensuring their integrity and accuracy. All aspects of design decisions, data origin, collection, cleansing, cataloging, and purposes must be documented and made available. This includes the formulation of assumptions, the examination of potential discriminatory “biases” and the measures taken to mitigate or eliminate them, and the detection of information gaps.
(C) Technical Documentation
Providers must maintain detailed technical documentation of their AI systems to ensure transparency and traceability. The documentation must be clear and comprehensive enough for the oversight authority to determine compliance with applicable regulations. If the provider is a “small or medium-sized enterprise” (SME), they may simplify the process by completing certain predefined forms.
(D) Activity Logs
Providers must keep automatic logs of AI system activity to monitor operation and detect potential irregularities. Specifically, these logs must record risk situations, support post-market monitoring, and track system performance.
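As an illustration of what such automatic logging might look like in practice, the sketch below appends timestamped records to a file. The field names, event types, and file format are assumptions; the Regulation requires logging but does not mandate a particular schema.

```python
import json
from datetime import datetime, timezone

def log_event(event_type: str, detail: str, path: str = "ai_audit.log") -> None:
    """Append a timestamped record to an audit log (illustrative sketch).

    The schema is an assumption: the Regulation requires automatic logging
    of events relevant to risk identification and post-market monitoring,
    but does not prescribe specific fields or formats.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,  # e.g. "risk_situation", "performance"
        "detail": detail,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_event("risk_situation", "Confidence below threshold on input batch 42")
```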
(E) Transparency and Information Provision
Providers must ensure that users (including those who market the systems) understand how the AI system works and its limitations by providing clear and accessible information in the form of user instructions.
(F) Human Oversight
High-risk AI systems must be designed and developed to be subject to human oversight throughout their usage. The provider must equip the systems with suitable human-machine interface tools. Those responsible for oversight must understand the system’s capabilities and limitations and be able to monitor its operation, for instance, to detect, contain, and resolve potential anomalous behaviors.
V. Obligations of General-Purpose AI Model Providers
The Regulation also addresses “AI models.”
Essentially, an AI model is a specific component that performs data processing and generates results, such as predictions, recommendations, or decisions. It is an algorithm trained with data to perform certain tasks but does not constitute a complete system on its own. It is the “engine” that powers the functioning of an AI system.
In other words, an AI model is an algorithmic component that performs data processing tasks, while an AI system is the complete application that uses one or more AI models to interact with users or environments and is regulated based on its risk.
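The distinction may be easier to see in code. In the purely illustrative sketch below, the model is a bare predictive component (the “engine”), while the system is the application that wraps it and interacts with users; the class names and toy scoring rule are assumptions, not anything defined by the Regulation.

```python
class AIModel:
    """The 'engine': a trained component that maps inputs to outputs."""

    def predict(self, features: list[float]) -> float:
        # Stand-in for real inference logic (hypothetical toy rule).
        return sum(features) / len(features)

class AISystem:
    """The complete application: wraps one or more models, interacts with
    users or environments, and is what the Regulation classifies by risk."""

    def __init__(self, model: AIModel):
        self.model = model

    def recommend(self, features: list[float]) -> str:
        score = self.model.predict(features)
        return "approve" if score > 0.5 else "review"

system = AISystem(AIModel())
print(system.recommend([0.4, 0.9, 0.7]))  # "approve"
```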
Providers of General-Purpose AI Models must comply with certain procedures, including:
- Preparing Technical Documentation. This includes documentation related to the training and testing processes and the results of evaluations, which must be updated and made available to other AI providers who may integrate it into other systems.
- Establishing Protocols for Intellectual Property Compliance. Providers must ensure adherence to EU intellectual property legislation.
- Public Summary of Training Content. Providers must make publicly available a summary of the content used to train the AI model.
When an AI model involves “systemic risk” (greater impact due to specific parameters), providers of General-Purpose AI Models with systemic risk must also:
- Conduct assessments and tests of their models to identify and mitigate potential systemic risks, including their origins, and document serious incidents and potential remedial measures, notifying the relevant authority.
- Ensure an adequate level of cybersecurity protection.
VI. Governance
The Regulation designates several authorities responsible for implementing and overseeing its provisions. These authorities work together to ensure that AI systems are developed and used safely and in compliance with European regulations. They include:
European Artificial Intelligence Office
This entity was created to support the implementation, interpretation, and supervision of the Regulation at the EU level. Its purpose is to provide centralized and coordinated oversight of the Regulation, ensuring uniform and effective application across the EU. Key functions include coordinating activities with national competent authorities, providing technical and legal advice to national authorities, companies, and other stakeholders, collecting information to improve the Regulation and its mechanisms, fostering cooperation among member states, and overseeing AI systems marketed in the EU.
National Authorities
Each EU member state must designate one or more national authorities to oversee the Regulation’s implementation within its territory. “Notified bodies” are accredited entities designated by national competent authorities to conduct conformity assessments of high-risk AI systems.
European Artificial Intelligence Board
This board ensures consistency in the application of the Regulation across the EU. It is composed of representatives from national authorities and the European Commission and is responsible for coordinating and supporting the Regulation’s implementation, as well as issuing guidelines and recommendations.
European Commission
The Commission supervises the overall application of the Regulation and has the authority to intervene in cross-border cases.
Finally, a “consultative forum” will be established to provide technical knowledge and advice to the Board and the Commission, along with an “independent scientific expert group” to support compliance activities outlined in the Regulation.
VII. Sanctions
The Regulation establishes a sanctions regime with fines. Each Member State must ensure the enforcement of these fines and guarantee due process. There are mitigating and aggravating circumstances, as well as certain protections for SMEs due to their status.
The fines aim to provide a deterrent effect and are categorized as follows (see the calculation sketch after this list):
- Up to €35 million or 7% of total worldwide annual turnover, whichever is higher, for violations of the Regulation’s prohibitions (unacceptable-risk AI systems).
- Up to €15 million or 3% of total worldwide annual turnover, whichever is higher, for violations of the Regulation’s obligations (other than prohibitions) by providers, importers, distributors, and users.
- Up to €7.5 million or 1% of total worldwide annual turnover, whichever is higher, for providing incorrect information to the competent national authorities.
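For illustration only, the “whichever is higher” rule can be expressed as a simple calculation. The tier labels and function below are hypothetical conveniences; the amounts mirror the figures listed above.

```python
# Fine caps per violation tier: (fixed cap in euros, share of worldwide annual turnover).
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "other_obligations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Return the maximum fine: the higher of the fixed cap and the turnover share.

    Simplified sketch; it ignores the special regime for SMEs, for which
    the lower of the two amounts applies.
    """
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# Example: a company with EUR 1 billion turnover violating a prohibition:
# 7% of turnover (EUR 70M) exceeds the EUR 35M fixed cap.
print(max_fine("prohibited_practices", 1_000_000_000))  # 70000000.0
```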
In addition to these fines, violations may also result in the prohibition of the use of the offending company’s AI system, the revocation of certifications and licenses, and civil or criminal actions for damages against the individuals responsible.
VIII. Deadlines
The Regulation sets out an extensive timeline for its entry into force and application (https://artificialintelligenceact.eu/es/implementation-timeline/), allowing all parties to adjust their actions accordingly. This also applies to each Member State, which must allocate resources, legislate, and coordinate their activities.
The Regulation was published in the Official Journal of the EU on July 12, 2024, and entered into force on August 1, 2024.
From its entry into force, various deadlines are set for the gradual implementation of different provisions. The most relevant deadlines are:
- Six months (February 2, 2025): Prohibitions on various AI systems (Chapters 1 and 2 of the Regulation) come into effect.
- Twelve months (August 2, 2025): Chapters on competent authorities, general-purpose AI models, governance, confidentiality, and sanctions come into effect.
- Twenty-four months (August 2, 2026): The Regulation generally applies, except for certain provisions on high-risk AI systems, which will come into effect after thirty-six months (August 2, 2027).
- Twenty-four months (August 2, 2026): Member States must have established national AI regulations and ensured an adequate mechanism for cooperation among different authorities.
IX. Conclusions
The Regulation represents a crucial advance in creating a coherent, uniform, and safe legal framework for the development and application of AI in the EU. By focusing on a risk-based approach, it ensures that the most critical applications concerning their societal impact are subject to appropriate oversight while fostering innovation in lower-risk areas. Implementing this regulation is essential to balancing technological progress with the protection of fundamental rights.
The existence of the Regulation already has a significant impact on the practice of European lawyers advising businesses, particularly in areas related to technology, regulation, and rights protection. By extension, it is expected that the same will occur within the profession at the national level.
Training will be key to advising clients on:
- The interpretation of the Regulation’s obligations (in our case, under the relevant local legislation), especially regarding the classification and risk management of AI systems.
- The review and update of contracts related to the development, purchase, and use of AI systems.
- Advising on data collection, storage, and processing to ensure compliance with data protection, security, and privacy regulations.
- Analyzing potential liabilities and legal contingencies, and developing strategies to minimize conflicts arising from misuse or failures of AI systems.
- Strategies to encourage innovation and competitiveness within the legal framework.
- Understanding new obligations and best practices under the applicable regulations, ensuring that all parties involved understand their responsibilities and the impact of AI use.
In summary, the enactment of the Regulation (and of future applicable local legislation) places demands on lawyers but also presents an opportunity for them to deepen their understanding of the specific technical and regulatory aspects of AI, and to strengthen ties with clients as key advisors in risk management, AI compliance, and AI-related business structuring.
________________________________________
Artificial Intelligence – Europe “in the Pole Position” of its Regulation
By Luis H. Vizioli (*)
Published in Abogados.com.ar, Buenos Aires, Argentina on Sep. 12, 2024.
(https://abogados.com.ar/inteligencia-artificial-europa-en-la-pole-positon-de-su-regulacion/35463)
[Translated by ChatGPT]
________________________________________
(Links to the Regulation):
European Parliament Document
Artificial Intelligence Act Website
(*) Luis Hernán Vizioli is a lawyer registered in New York State (1997) and the Buenos Aires, Argentina, Public Bar Association (1992). He has worked in law firms in Buenos Aires, Argentina; São Paulo, Brazil; and New York, USA. He graduated from the Faculty of Law and Social Sciences at the University of Buenos Aires (1991). He obtained an LLM from the University of Illinois, Urbana-Champaign, USA (1994) and a postgraduate degree in Telecommunications, Broadcasting, and Media Law from UBA (2002). He was a Visiting Researcher at Florida International University, Miami, USA (2016). He is a Partner at Vizioli & Triolo Attorneys.