EU AI Act: What You Need to Know

Published on March 25, 2024 by Donata Stroink-Skillrud in MainWP Blog under MainWP News, Privacy, WordPress Business

On March 13, 2024, the European Parliament voted in favor of the European Union Artificial Intelligence Act (EU AI Act), a landmark regulation that imposes requirements on AI technologies. The Act sets forth a comprehensive framework for the development, use, and marketing of AI, and aims to support innovation while mitigating the harmful effects of AI systems in the European Union. It pursues these goals by classifying AI according to its risk, placing obligations on developers and deployers of AI tools, and prohibiting certain AI systems outright. In this article, we break down the main points of the EU AI Act so you can learn more about this regulation and how it affects the use of AI tools.

Who does the EU AI Act regulate? 

The EU AI Act regulates providers, importers, and distributors of AI systems or general-purpose AI models used in the European Union. Notably, the Act applies to these parties even if they are not located in the European Union, as long as their AI systems or models are provided and used there.

The Act defines an AI system as “a machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that influence physical or virtual environments.” This definition could cover tools such as ChatGPT, smart assistants, virtual travel booking agents, marketing chatbots, autonomous vehicles, and other similar AI-powered systems.

Prohibited AI systems 

To achieve its goals, the EU AI Act classifies AI systems and models into four risk-based categories. The first is “unacceptable risk,” applied to particularly harmful uses of AI that violate fundamental rights. The following unacceptable-risk AI systems are prohibited from being placed on the EU market:

  1. Social scoring systems; 
  2. Emotion-recognition systems in workplaces and educational institutions; 
  3. AI used to exploit people’s vulnerabilities, such as age or disability; 
  4. Behavioral manipulation and circumvention of free will; 
  5. Untargeted scraping of facial images for facial recognition; 
  6. Biometric categorization systems that use certain sensitive characteristics; 
  7. Specific predictive policing applications; 
  8. Law enforcement use of real-time biometric identification in public spaces, apart from limited authorized situations. 

Developers, providers, importers, and distributors of AI systems in the “unacceptable risk” category should be aware that these systems may not enter or be used on the EU market.

High-risk systems 

The second classification under the EU AI Act is “high-risk AI systems.” These are systems that pose a significant risk to health, safety, or fundamental rights and include systems from the following categories:

  1. Medical devices; 
  2. Vehicles; 
  3. Emotion-recognition systems (outside of work or education); 
  4. Law enforcement. 

Providers of high-risk AI systems must ensure that their systems meet the following requirements:

  1. Data quality; 
  2. Documentation and traceability; 
  3. Transparency; 
  4. Human oversight; 
  5. Accuracy, cybersecurity and robustness; 
  6. Demonstrated compliance with conformity assessments; 
  7. Registration in a public EU database (if used by public authorities for purposes outside of law enforcement or migration). 

Low-risk AI systems

The third classification is “low-risk AI systems”: systems that present minimal or no risk to EU citizens’ rights or safety. Even so, providers must still meet certain requirements. The EU AI Act requires providers to ensure that these systems are designed and developed so that individual users are aware that they are interacting with an AI system. The Act also allows providers to voluntarily commit to codes of conduct developed by the industry.

General-purpose AI models 

The final classification is “general-purpose AI models”: models that display significant generality and are capable of competently performing a wide range of distinct tasks. These models can be integrated into a variety of different systems or applications.

Providers of general-purpose AI models are required to: 

  1. Perform fundamental rights impact assessments and conformity assessments; 
  2. Implement risk management and quality management systems to continually assess and mitigate systemic risks; 
  3. Inform individuals when they interact with AI and label AI models; 
  4. Test and monitor for accuracy, robustness and cybersecurity. 

Significant provisions 

The EU AI Act also contains several cross-cutting provisions to ensure that the use of AI aligns with the values of the European market. For example, AI providers must be transparent about the fact that individuals are interacting with a machine, and they must perform conformity assessments and fundamental rights impact assessments. In addition, generative AI output must be labeled so that it is detectable as artificially generated or manipulated content.

Enforcement 

The EU AI Act will be enforced by the AI Office, housed within the European Commission. Individual EU countries will also be required to designate national authorities and lay down rules on penalties and other enforcement measures. The Act provides the following fine and penalty structure, with the applicable maximum being the higher of the two amounts: 

  1. Prohibited AI violations: up to 7% of global annual turnover or 35 million euros; 
  2. Most other violations: up to 3% of global annual turnover or 15 million euros; 
  3. Supplying incorrect information to authorities: up to 1% of global annual turnover or 7.5 million euros. 
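To see how the tiered caps above work in practice, here is a quick illustrative calculation. The function name and tier labels below are our own, not from any official tool, and the sketch assumes the common reading of the Act's penalty provisions: the applicable maximum is whichever of the two amounts (percentage of turnover or fixed sum) is higher.

```python
def max_fine_eur(violation: str, global_turnover_eur: float) -> float:
    """Illustrative maximum-fine calculation under the EU AI Act's tiers.

    Percentages and fixed amounts follow the three tiers listed above;
    the applicable cap is assumed to be the higher of the two figures.
    """
    tiers = {
        "prohibited": (0.07, 35_000_000),      # prohibited AI violations
        "other": (0.03, 15_000_000),           # most other violations
        "incorrect_info": (0.01, 7_500_000),   # incorrect info to authorities
    }
    pct, fixed = tiers[violation]
    return max(pct * global_turnover_eur, fixed)

# A company with 1 billion EUR global annual turnover: 7% of turnover
# (70 million EUR) exceeds the 35 million EUR fixed amount.
print(max_fine_eur("prohibited", 1_000_000_000))

# A smaller company with 100 million EUR turnover: 1% of turnover
# (1 million EUR) is below the 7.5 million EUR fixed amount.
print(max_fine_eur("incorrect_info", 100_000_000))
```

For large providers, the percentage-of-turnover figure will usually dominate; for smaller ones, the fixed amount sets the ceiling.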

The EU AI Act will take effect 20 days after its publication in the Official Journal, with staggered deadlines determining when individual provisions apply. Providers, developers, and distributors of AI systems and models should adapt their AI and complete the required compliance obligations before the relevant provisions take effect.


