
Rules for high-risk and general-purpose AI systems in the EU AI Act




Publication date: 07.08.2024


The European Artificial Intelligence Act (AI Act), the world's first comprehensive artificial intelligence regulation, entered into force on August 1, 2024. The AI Act is intended to safeguard people's fundamental rights while ensuring that AI developed and deployed in the EU is trustworthy. By establishing a unified internal market for AI throughout the EU, the law aims to promote the uptake of this technology and foster an environment conducive to investment and innovation.



In this article, we will take a look at high-risk AI systems, their rules and obligations, and general-purpose AI systems.


High-risk AI systems


AI systems are generally classified as high-risk if they pose a serious risk to people's health, safety, or fundamental rights, in particular by materially influencing decision-making.


Annex III of the AI Act lists the use cases in which AI systems are, as a rule, considered high-risk:


●      Remote biometric identification systems (excluding biometric verification systems whose sole purpose is to confirm that a person is who they claim to be).

●      AI systems intended to be used as safety components of products covered by certain EU legislation, or that are themselves such products. This includes, for example, toys, recreational craft, lifts, medical devices, aircraft, motor vehicles, and other motorized and non-motorized vehicles.

●      Emotion recognition AI systems (except in the workplace and in education, where their use is prohibited).

●      AI systems used as safety components in the control and operation of critical digital infrastructure, road traffic, and utility services.

●      Education: AI systems used in admissions, the assessment of learning outcomes, the evaluation of teaching quality, and the detection and monitoring of prohibited student behavior during exams.

●      Employment: AI used in recruitment and selection, promotion, termination, performance evaluation, and behavior monitoring.

●      Use of AI in assessing access to and use of essential private and public services.

●      Law enforcement: AI used for law enforcement purposes, e.g., predicting the risk of someone becoming a crime victim, polygraphs, assessing the reliability of evidence, crime detection, and profiling.

●      AI in migration, asylum, and border management.

●      AI in the administration of justice and democratic processes (e.g., systems intended to influence the outcome of elections and referendums).


Obligations for providers, distributors, importers, and users of high-risk AI systems


Providers of high-risk AI systems have a wide range of obligations under the new EU AI Act. The main obligation is to establish, maintain, and document a risk management system.


As part of the risk management system, the following key actions must be taken by the AI system provider:


Risk assessment:

For high-risk AI systems, a risk assessment must be carried out and documented. Based on this assessment, appropriate risk-mitigation measures must be applied and built in during design and development, and risks that cannot be mitigated or eliminated must be continuously monitored.
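For illustration only: a provider's risk register might represent each identified risk as a structured record like the sketch below. The Act prescribes the process, not a data format, so every field name and scale here is an assumption.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    """One entry in a documented risk register (illustrative only)."""
    description: str          # e.g. "biased outcomes for under-represented groups"
    affected_interest: str    # the health, safety, or fundamental right at stake
    severity: int             # assumed scale: 1 (negligible) .. 5 (critical)
    likelihood: int           # assumed scale: 1 (rare) .. 5 (frequent)
    mitigations: list[str] = field(default_factory=list)
    fully_mitigated: bool = False
    last_reviewed: date = field(default_factory=date.today)

    def needs_continuous_monitoring(self) -> bool:
        # Risks that cannot be fully mitigated or eliminated stay under review.
        return not self.fully_mitigated
```

A provider's risk management process would then create, review, and update such entries throughout the system's lifecycle.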


Testing:

High-risk AI systems must be tested regularly throughout development and, in any event, before being placed on the market or put into service.


Adequate data quality:

Training, validation, and testing data sets must be reliable, representative, balanced, complete, error-free, and diverse, taking into account the geographical, contextual, behavioral, and functional setting in which the AI system will operate. The data must be protected by appropriate privacy measures, and the provider must ensure that the system operates without discrimination or bias.
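As a purely illustrative example of what such checks could look like in practice, the following Python sketch computes a few basic quality indicators for a training data set. The column name, threshold, and report fields are all assumptions, not requirements taken from the Act.

```python
import pandas as pd

def basic_data_quality_report(df: pd.DataFrame, group_column: str) -> dict:
    """Minimal, illustrative data-quality checks (not an official compliance tool)."""
    report = {
        # Completeness: share of missing values per column.
        "missing_ratio": df.isna().mean().to_dict(),
        # Duplicates can silently skew training and evaluation.
        "duplicate_rows": int(df.duplicated().sum()),
        # Representativeness proxy: distribution of a demographic/context group.
        "group_distribution": df[group_column].value_counts(normalize=True).to_dict(),
    }
    # Flag obviously underrepresented groups (the 5% threshold is an arbitrary example).
    report["underrepresented_groups"] = [
        g for g, share in report["group_distribution"].items() if share < 0.05
    ]
    return report
```

Real representativeness and bias analysis goes far beyond such simple counts; the point is only that the Act's data-quality duties translate into concrete, documentable checks.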


Technical documentation:

The provider of an AI system must draw up technical documentation, with the mandatory content specified in the Regulation, before placing the system on the market or putting it into service. The technical documentation must be kept up to date throughout the AI system's lifespan.


Logging:

The AI system must automatically record events (logs) throughout its operation, and the provider must keep the automatically generated logs for at least six months.
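A minimal sketch of what automatic event logging could look like, assuming a simple JSON-lines file format; the Act requires that relevant events be recorded automatically and retained, but does not mandate any particular format or tooling.

```python
import json
import time
import uuid

def log_event(log_path: str, event_type: str, details: dict) -> None:
    """Append one automatically generated event record as a JSON line."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),   # when the event occurred
        "event_type": event_type,   # e.g. "inference", "override", "error"
        "details": details,         # inputs/outputs relevant for traceability
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a single inference so operation can be reconstructed later.
log_event("ai_system.log", "inference", {"input_hash": "abc123", "decision": "approve"})
```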


Transparency and information:

The AI system provider must design the system so that its operation is sufficiently transparent and must provide instructions for use to those deploying it.


Human oversight:

The AI system must be developed and designed so that natural persons can effectively oversee its operation and can intervene in, or halt, the system through human intervention.
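One common way to implement this in software is a human-in-the-loop gate combined with a stop control, as in the sketch below; the class name, confidence threshold, and escalation logic are illustrative assumptions, not a design mandated by the Act.

```python
class HumanOversightGate:
    """Illustrative human-in-the-loop wrapper around a model's decisions."""

    def __init__(self, confidence_threshold: float = 0.9):
        self.confidence_threshold = confidence_threshold
        self.stopped = False  # flipped by a human operator to halt the system

    def stop(self) -> None:
        # Human intervention: immediately halt automated decision-making.
        self.stopped = True

    def decide(self, prediction: str, confidence: float) -> str:
        if self.stopped:
            raise RuntimeError("System halted by human operator")
        # Low-confidence outputs are routed to a human instead of auto-applied.
        if confidence < self.confidence_threshold:
            return f"ESCALATED to human review: {prediction}"
        return prediction
```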


Accuracy, Robustness, and Security: 

High-risk AI systems must be designed and developed to achieve, and consistently perform at, appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle.


Quality management:

High-risk AI system providers are required to have a quality management system in place that is systematically and logically documented through written policies, procedures, and guidelines. 


Documentation:

The documentation of the AI system must be kept for 10 years after it has been placed on the market or put into service.


Conformity:

Before being placed on the market or put into service, the AI system must undergo a conformity assessment procedure, an EU declaration of conformity must be issued, and the CE marking must be affixed to the packaging or documentation of the AI system.


Registration:

The provider must register the high-risk AI system in the EU database for high-risk AI systems.


Accessibility:

High-risk AI systems must also comply with EU accessibility requirements.


Importers of high-risk AI systems have more limited obligations: they must verify that the provider has carried out the conformity assessment and drawn up the technical documentation, and that the AI system bears the CE marking and is accompanied by the EU declaration of conformity and instructions for use. They must also indicate their trade name or trademark on the AI system and keep a copy of the certificate, instructions for use, and EU declaration of conformity.


Distributors of high-risk AI systems must check that the importer is indicated and that the CE marking, EU declaration of conformity, and instructions for use are present.


Users (deployers) of high-risk AI systems must use the AI system in accordance with its instructions for use, ensure human oversight during its operation, keep logs, and ensure that input data is relevant, representative, and of good quality.

The AI Act allows two years from its entry into force, until August 2, 2026, to meet the above requirements.


Specific requirements for certain high-risk AI models


Beyond the above requirements, additional disclosure obligations apply to virtual assistants, chatbots, personalized recommender systems, diagnostic tools, AI systems that generate synthetic content, AI systems that recognize emotions, and AI solutions that generate deepfakes.


General-purpose AI systems


General-purpose AI (GPAI) refers to AI models that are trained on vast amounts of data through self-supervision at scale. These models can competently perform a wide variety of tasks and can be integrated into various downstream systems or applications.


Providers of such AI models, which can perform many tasks and be integrated into many systems, have specific obligations. For example, there are specific rules on the content of their technical documentation, specific information and documentation that must be provided to those who wish to integrate the GPAI model into their own applications, a policy for complying with EU copyright law, and a detailed summary of the content used to train the model.
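To make the scope of these obligations more concrete, the outline below sketches how a provider might organize this material. Every key is an illustrative assumption: the Act names the categories of information but prescribes no schema.

```python
# Illustrative outline of GPAI provider documentation; all keys are assumed
# for demonstration -- the Act names the categories, not a data schema.
gpai_documentation = {
    "technical_documentation": {
        "model_architecture": "...",
        "training_process": "...",
        "evaluation_results": "...",
    },
    "downstream_integration_info": {
        "capabilities_and_limitations": "...",
        "integration_requirements": "...",
    },
    "copyright_policy": "Policy for complying with EU copyright law ...",
    "training_content_summary": "Detailed summary of content used for training ...",
}
```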


 
