The EU AI Act: What you need to know

  • Insight
  • February 26, 2024
Keith Power

Partner, PwC Ireland (Republic of)

The EU AI Act is the world’s first comprehensive legal framework for artificial intelligence (AI). It aims to address the risks and opportunities of AI for health, safety, fundamental rights, democracy, rule of law and the environment in the EU. It also seeks to foster innovation, growth and competitiveness in the EU’s internal market for AI.

Given companies’ increased desire to use AI to drive efficiencies, particularly through Generative AI (GenAI), businesses must lay the foundations to implement AI in a responsible and controlled manner—but it’s hard to know where to start.

Businesses need to be aware of their AI exposure within their organisation to properly manage the associated risks. To manage these risks and comply with the EU AI Act, appropriate AI governance is needed.

In this insight, we explain who the EU AI Act applies to and what you need to be aware of to manage the risks associated with AI.

Who does the AI Act apply to?

The EU AI Act applies to businesses that create or use AI systems, as well as those that sell, distribute or import them. It covers entities within the EU, and also developers, deployers, importers and distributors of AI systems outside the EU if their systems' output is used within the EU.

The AI Act adopts a risk-based approach and classifies AI systems into risk categories based on their potential use, as well as the potential impact on individuals and society.

Certain obligations are also expected for providers of general-purpose AI models, including large GenAI models such as ChatGPT and Bing Chat.

Providers of free and open-source models are exempt from most of these obligations, although the exemption does not extend to providers of general-purpose AI models that pose systemic risks. Note also that classification follows the use case: if you use a GenAI model as part of a process whose outputs would be deemed high-risk, that use case is treated as high-risk.

The Act's obligations do not apply to research, development and prototyping activities that precede release on the market, for example developing use cases that have not yet entered production. Nor does the Act apply to AI systems used exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.

What are the risk categories?

To introduce a proportionate and effective set of binding rules for AI systems, the European Commission has defined a risk-based approach. There are four risk categories:

  • Unacceptable risk
  • High risk
  • Limited risk
  • Minimal risk

These are determined by the intended purpose of the AI system, the commensurate risk of harm to people's fundamental rights, the severity of possible harm and the probability of its occurrence. The Act also calls out specific transparency requirements and systemic risks separately. Potential uses that fall under each of the four risk categories are outlined below.

Unacceptable risk

  • Social scoring for public and private purposes.
  • Exploitation of vulnerabilities of persons and the use of subliminal techniques.
  • Real-time remote biometric identification in publicly accessible spaces by law enforcement, subject to narrow exceptions.
  • Biometric categorisation of natural persons based on biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs or sexual orientation. Filtering of datasets based on biometric data in the area of law enforcement will still be possible.
  • Individual predictive policing.
  • Emotion recognition in the workplace and education institutions, unless for medical or safety reasons (e.g. monitoring the fatigue levels of a pilot).
  • Untargeted scraping of the internet or CCTV for facial images to build or expand databases.

High risk

  • Essential private and public services (e.g. financial institutions using credit scoring models that could deny citizens the opportunity to obtain a loan).
  • Employment, management of workers and access to self-employment (e.g. CV-sorting software for recruitment procedures).
  • Critical infrastructures (e.g. transport) that could put the life and health of citizens at risk.
  • Educational or vocational training that may determine access to education and the professional course of someone’s life (e.g. the scoring of exams).
  • Safety components of products (e.g. AI applications in robot-assisted surgery).
  • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence).
  • Systems intended to be used to make, or substantially influence, decisions on the eligibility of natural persons for health and life insurance.
  • Migration, asylum and border control management (e.g. verification of the authenticity of travel documents).
  • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

Limited risk

Compliance obligations are lighter, focusing on transparency. Users must be informed when they are dealing with an AI system, unless it is obvious at face value that the output is AI-generated. Examples include:

  • Users should be made aware that they are interacting with a chatbot.
  • Deepfakes should be disclosed so that they can easily be identified as such.

Minimal risk

AI systems not falling into the three categories above are not subject to compliance obligations under the EU AI Act. The primary focus for technology providers will be the high-risk and limited-risk categories. All other AI systems can be developed and used subject to existing legislation, without any additional legal obligations. An example of a minimal-risk use is:

  • Use of AI within video games.

Other risks that must be considered include:

  • Specific transparency risk: transparency requirements are imposed for certain AI systems, for example where there is a clear risk of manipulation (such as with the use of chatbots). Users should be aware that they are interacting with a chatbot.
  • Systemic risks: these could arise from general-purpose AI models, including large GenAI models, which can be used for a variety of tasks and are becoming the basis for many AI systems in the EU. Highly capable or widely used models could carry systemic risks: powerful models could cause serious accidents or be misused for far-reaching cyberattacks, and many individuals could be affected if a model propagates harmful biases across many applications.

What are the obligations for providers of high-risk AI systems?

Before placing a high-risk AI system on the EU market or otherwise putting it into service, providers must subject it to a conformity assessment. This allows them to demonstrate that their system complies with the mandatory requirements for trustworthy AI (e.g. data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity and robustness). The assessment must be repeated if the system or its purpose is substantially modified.

Providers of high-risk AI systems will also have to implement sufficient AI governance, particularly around quality control and risk management of the AI system, to ensure compliance with the new requirements and minimise risks for users and affected persons—even after a product is placed on the market.

High-risk AI systems that are deployed by public authorities or entities acting on their behalf will have to be registered in a public EU database.

When will the AI Act be fully applicable?

Following its adoption by the European Parliament and the Council of the EU, the EU AI Act will come into force 20 days after its publication in the Official Journal. It will become fully applicable 24 months after entry into force, with a staggered approach as follows (a worked example of the milestone dates appears after this list):

  • Six months: prohibited systems must be phased out.
  • 12 months: obligations for general-purpose AI governance will apply.
  • 24 months: all rules of the AI Act apply, including obligations for high-risk systems defined in Annex III in the Act (list of high-risk use cases).
  • 36 months: obligations for high-risk systems defined in Annex II (list of EU harmonisation legislation) apply.
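To make this timeline concrete, here is a minimal Python sketch that derives each milestone from a hypothetical entry-into-force date. The date of 1 August 2024 is an assumption for illustration only; the actual date depends on publication in the Official Journal.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add whole calendar months to a date, clamping to month end."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days[month - 1]))

# Hypothetical entry-into-force date (assumption, not the real date).
entry_into_force = date(2024, 8, 1)

milestones = {
    6: "prohibited systems phased out",
    12: "general-purpose AI obligations apply",
    24: "all rules apply, incl. Annex III high-risk obligations",
    36: "Annex II high-risk obligations apply",
}

for months, label in milestones.items():
    print(f"{add_months(entry_into_force, months):%d %b %Y}: {label}")
```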

What are the penalties for infringement?

Penalties for non-compliance are severe and will be enforced by the designated AI authority within a given EU member state.

The Act sets out the thresholds as follows (the sketch after this list shows how the two amounts combine):

  • Up to €35m or 7% of the total worldwide annual turnover of the preceding financial year (whichever is higher) for infringements on prohibited practices or non-compliance related to requirements on data.
  • Up to €15m or 3% of the total worldwide annual turnover of the preceding financial year for non-compliance with any of the other requirements or obligations of the regulation, including infringement of the rules on general-purpose AI models.
  • Up to €7.5m or 1.5% of the total worldwide annual turnover of the preceding financial year for the supply of incorrect, incomplete or misleading information to notified bodies and competent national authorities in reply to a request.
  • For each category of infringement, the threshold will be the lower of the two amounts for SMEs and the higher of the two for other companies.
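As a rough illustration of how each threshold combines a fixed amount with a turnover percentage, consider the following Python sketch. The function name and example turnover figures are ours for illustration; this is not a legal calculation.

```python
def fine_ceiling(fixed_cap_eur: int, turnover_pct: float,
                 worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum administrative fine under the EU AI Act thresholds.

    The ceiling is the higher of a fixed amount and a percentage of the
    total worldwide annual turnover of the preceding financial year;
    for SMEs it is the lower of the two.
    """
    pct_amount = worldwide_turnover_eur * turnover_pct
    return min(fixed_cap_eur, pct_amount) if is_sme else max(fixed_cap_eur, pct_amount)

# Prohibited-practice infringement (up to €35m or 7% of turnover) by a
# large company with €1bn in worldwide annual turnover:
print(fine_ceiling(35_000_000, 0.07, 1_000_000_000))            # 70000000.0
# The same infringement by an SME with €10m turnover:
print(fine_ceiling(35_000_000, 0.07, 10_000_000, is_sme=True))  # 700000.0
```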

To harmonise national rules and practices in setting administrative fines, the Commission will draw up guidelines with advice from the EU AI Board.

Key actions businesses can take today

 

1. Create an AI exposure register

To evaluate the risks associated with the use of AI in your organisation, you first need a baseline of your existing AI exposure. Exposure can include native AI applications and systems, existing systems that have been updated and now contain AI, and the use of AI by third-party service providers, including Software-as-a-Service (SaaS) vendors. An AI exposure register will allow you to assess your exposure to all AI-related risks.
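There is no prescribed format for an exposure register. As a minimal sketch (all field names are our assumptions, not a standard), each entry might capture the attributes discussed above:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AIExposureEntry:
    """One row in an AI exposure register (illustrative fields only)."""
    system_name: str
    business_owner: str              # accountable owner within the organisation
    source: str                      # "native AI" | "added via update" | "third-party/SaaS"
    vendor: Optional[str]            # provider name, if third-party
    use_case: str                    # what the system is used for
    processes_personal_data: bool
    risk_category: str = "unassessed"  # later: unacceptable/high/limited/minimal
    last_reviewed: Optional[date] = None

register = [
    AIExposureEntry(
        system_name="CV screening tool",
        business_owner="HR",
        source="third-party/SaaS",
        vendor="ExampleVendor",       # hypothetical vendor
        use_case="shortlisting job applicants",
        processes_personal_data=True,
    ),
]
```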

2. Risk-assess each of the use cases identified in your AI exposure register in line with the EU AI Act's risk framework

Apply the EU AI Act risk framework to the AI use cases identified in your AI exposure register, then act to mitigate the identified risks and ensure appropriate governance and controls are in place to manage them. A simplified triage sketch follows.
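By way of illustration only, a first-pass triage over the register might look like the sketch below. The keyword lists are a crude simplification of the categories described earlier and are no substitute for a proper legal assessment against the Act's annexes.

```python
# Crude keyword-based triage, not the Act's legal test.
PROHIBITED_KEYWORDS = {"social scoring", "emotion recognition at work",
                       "untargeted facial scraping"}
HIGH_RISK_KEYWORDS = {"credit scoring", "cv sorting", "shortlisting",
                      "exam scoring", "border control", "insurance eligibility"}

def triage_risk(use_case: str, interacts_with_humans: bool) -> str:
    """First-pass EU AI Act risk triage for one exposure-register entry."""
    use = use_case.lower()
    if any(k in use for k in PROHIBITED_KEYWORDS):
        return "unacceptable"
    if any(k in use for k in HIGH_RISK_KEYWORDS):
        return "high"
    if interacts_with_humans:
        return "limited"   # transparency obligations likely apply
    return "minimal"

print(triage_risk("shortlisting job applicants", interacts_with_humans=False))  # high
```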

3. Establish appropriate AI governance structures to manage the risk of AI responsibly

In line with the EU AI Act, appropriate AI governance and AI-system risk management must be implemented. AI governance is a shared responsibility across the organisation and requires a defined operating environment that aligns with existing enterprise governance structures. This ensures that AI governance is embedded within the organisation.

4. Implement an upskilling programme and roll out awareness sessions to equip stakeholders for responsible use and oversight

Raising awareness of AI's capabilities and limitations equips stakeholders to use and oversee it responsibly, ensuring your organisation reaps the benefits.

 

We are here to help 

The EU AI Act is a comprehensive and intricate piece of legislation. Our Trust in AI team is ready to guide you in adopting AI practices that align with the EU AI Act. Whether it’s compiling your AI exposure register or implementing robust AI governance, we can help you effectively manage the risks associated with AI. In doing so, you can fully embrace the benefits AI brings. Reach out to our multidisciplinary team of experts to see how we can help.

Contact us

Keith Power

Partner, PwC Ireland (Republic of)

Tel: +353 86 824 6993

Moira Cronin

Partner, PwC Ireland (Republic of)

Tel: +353 86 377 1587

James Scott

Director, PwC Ireland (Republic of)

Tel: +353 87 144 1818

Neil Redmond

Director, PwC Ireland (Republic of)

Tel: +353 87 970 7107
