AI Act Set to Come into Force on 1 August 2024
The countdown to compliance with the Artificial Intelligence Act (“AI Act”) has started. The AI Act was signed into law on 13 June 2024 and is currently being prepared for publication in the EU Official Journal, with publication expected on 12 July 2024 and entry into force on 1 August 2024.
Background
The AI Act creates a legal framework to achieve human-centric AI, protecting health, safety and fundamental rights against the harmful effects of AI while promoting innovation.
Who does the AI Act apply to?
The AI Act will apply to stakeholders across the AI value chain, such as AI providers (including providers of general-purpose AI (“GPAI”) models), users, importers, distributors, manufacturers and authorised representatives. However, there are exemptions, including for the research and development of AI systems used solely for scientific research, and for AI used for military, defence or international cooperation purposes where sufficient fundamental rights safeguards are in place.
Extra-territorial Scope
The AI Act has extra-territorial scope, meaning that it will impact organisations both inside and outside the EU. It will apply to entities which place AI on, or put it into service on, the EU market and/or use AI outputs in the EU. Providers of AI systems and GPAI models located outside the EU must appoint a natural or legal person located in the EU to act as their authorised representative.
Risk Categories
The AI Act takes a risk-based approach to the regulation of AI systems. This means that the stringency of the rules applicable to an AI system will depend on the severity of harm it poses to fundamental rights and the likelihood of that harm materialising. The risk categories are:
- Prohibited: e.g. social scoring, cognitive behavioural manipulation, biometric categorisation based on sensitive characteristics.
- High: e.g. use in employment, credit decisions, health/life insurance risk assessment.
- GPAI: e.g. large language models such as ChatGPT.
- Limited: e.g. chatbots.
- Minimal: e.g. spam filters or video games not falling within the above.
High-Risk AI Providers
A variety of obligations apply to providers of high-risk AI systems. Key obligations relate to:
(a) risk management systems;
(b) data governance;
(c) technical documentation;
(d) record keeping;
(e) transparency and provision of information to deployers;
(f) human oversight;
(g) accuracy;
(h) robustness and cybersecurity;
(i) quality management system;
(j) documentation keeping;
(k) automatically generated logs;
(l) cooperation with competent authorities;
(m) displaying the CE Mark; and
(n) registering with the EU database.
GPAI Providers
GPAI providers are subject to obligations including drawing up technical documentation, putting in place a copyright policy and publishing a summary of the content used for training. However, GPAI providers may adhere to voluntary codes of practice, to be published by the EU AI Office (see further below), to demonstrate compliance.
Providers of GPAI models which carry systemic risk must also carry out model evaluations, conduct ongoing assessment and mitigation of risks, notify the European Commission (“Commission”) of such risks, and put in place incident reporting and cybersecurity measures.
Users’ Obligations
AI users will have fewer obligations to comply with than AI providers. However, all organisations which use AI must ensure that their staff have a sufficient level of AI literacy.
Users of high-risk AI must put in place technical and organisational measures, human oversight and monitoring, ensure that input data is relevant and representative, keep logs and conduct data protection impact assessments. For example, if AI is used in an employment context, employees must be informed of this.
AI systems which create deep fakes, generate text published to inform the public on matters of public interest, or involve emotion recognition or biometric categorisation are subject to transparency rules: users must disclose that such systems have been used and that the content has been artificially generated or manipulated.
Enforcement
A range of bodies and mechanisms will be put in place to enforce the AI Act.
At EU level, the EU AI Office will oversee the implementation of the AI Act across the Member States. The AI Board will comprise one representative per Member State, with the European Data Protection Supervisor participating as an observer; the EU AI Office will also attend the AI Board’s meetings but cannot vote. The AI Board will oversee the application of the AI Act and act as an advisory body to the Commission. The Commission will also have powers to adopt delegated legislation under the AI Act.
At national level, national supervisory authorities will have competence to enforce the AI Act. Each Member State must also appoint a national public authority with powers to supervise or enforce fundamental rights.
Fines
The AI Act provides for significant fines for infringements:
- Breaches of provisions relating to prohibited AI will attract fines of up to the greater of €35 million or 7% of annual global turnover.
- Breaches of other provisions will attract fines of up to the greater of €15 million or 3% of annual global turnover.
Fines imposed on SMEs for infringement of the AI Act will take into account their interests, including their economic viability, and will be capped at the lower of the percentages or amounts referred to above.
SME Supports
There are special provisions for SMEs to help boost innovation. For example:
- EU-based SMEs will have priority access to the AI regulatory sandboxes and access will be free of charge;
- Member States must provide training tailored to SMEs on the AI Act;
- national authorities will provide information and standardised templates for SMEs for documentation required under the AI Act; and
- SME providers of high-risk AI systems can provide simplified technical documentation.
Timing
The provisions of the AI Act are expected to begin to apply from 1 August 2026, with certain exceptions:
- 1 November 2024 – National public authorities protecting fundamental rights must be identified and notified to the Commission.
- 1 February 2025 – Provisions on scope, definitions and prohibited AI systems apply.
- 1 August 2025 – Provisions on GPAI, penalties and EU governance apply.
- 1 August 2027 – Provisions on safety components / specific products considered high-risk under Annex I apply.
Future Developments
The AI Act is part of the Commission’s three-pronged legal approach to regulating AI. In addition to the AI Act, the proposed AI Liability Directive [1] will set down procedural rules for civil claims concerning AI, and the proposed Product Liability Directive will address harm caused by defective AI systems and provide for compensation [2].
What to Do Now
Navigating compliance with the AI Act will present significant strategic and governance challenges which must be met within the relevant timeframes. To ensure compliance and mitigate risks, organisations should proactively:
- identify AI used by your business and which risk category applies;
- put in place an AI governance framework appropriate to your use and the relevant risk category including an AI policy, staff AI training and vendor AI due diligence; and
- communicate with stakeholders.
How the Maples Group Can Help
If you wish to receive a copy of our guide to the AI Act, or if you would like further information, please reach out to your usual Maples Group contact or any of the persons listed below.
[1] Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (available here). Progress on this Directive has stalled.
[2] Proposal for a Directive of the European Parliament and of the Council on liability for defective products (available here). This Directive was adopted by the European Parliament on 12 March 2024 and will now move to the Council for approval.