Today I listened in to part of the AI Act Conference hosted by Uppsala University. Below are my notes from the keynote.
AI Act Conference keynote: Navigating the AI Act
- Key building blocks and challenges ahead
- Katja de Vries, Senior Lecturer, Department of Law, Uppsala University
The ‘Vortex’ of the AI Act (AIA)
Why does the AI Act feel so overwhelming?
The AI Act is often compared to a vortex—a disorienting, dizzying force pulling everything into its orbit. This impression arises from:
- Interaction with other regulatory frameworks: The AI Act doesn’t stand alone; its interplay with other regulations can leave even experts feeling lost.
- A complex foundation: The 144-page document is merely the starting point, outlining what must be done and what structures need to exist for effective enforcement.
- A challenging timeline: Multiple deadlines and layers of compliance make it difficult to keep pace.
Together, these factors contribute to the sense of vertigo for regulators, businesses, and experts alike.
Simplifying the vertigo
The AI Act’s structure involves contributions from various levels—EU, member states, and other authorities—each responsible for implementing, enforcing, and elaborating the Act.
National competent authorities: A struggling timeline
- Deadline #1: By August 2, 2024, member states should have designated national competent authorities.
- Deadline #2: By November 2, 2024, authorities to protect fundamental rights should have been established.
This is not going particularly well. While one member state is reportedly “clear”, many remain uncertain or still in the planning stage.
The somewhat confusing basics
1. It’s about product safety… But also fundamental rights
The AI Act aligns with the EU’s product safety regulations (e.g., CE markings for compliance). However, it also addresses risks to fundamental rights, which are harder to quantify and manage than traditional safety concerns.
The legislative framework is a public-private partnership. Political actors don’t always have a good sense of the technical details involved, so the democratic legislator sets out the intended guidelines and private actors massage them into technical standards. Industry effectively decides how things should work, because it holds the technical knowledge. There is arguably something democratically strange about this, with industry in effect determining the content of the law. But as long as we are talking about toys and elevators, few find it particularly problematic. The AI Act, however, covers risks to ‘health, safety and fundamental rights’.
2. The risk lies in the application… But sometimes it doesn’t.
While most AI risks emerge from application rather than from the technology itself, some uses inherently pose greater risks to privacy, non-discrimination, and other fundamental rights. The AIA mixes classical product safety regulation (risks to health and safety) with protection against risks to fundamental rights, which are not as easy to quantify. Normal EU-style product safety regulation is directed at the manufacturer: if you make this stuffed toy, don’t put in so many chemicals, and so on. Here, however, there are two main actors:
- Provider vs. Deployer: The Act introduces roles like “providers” (the manufacturers who place the system on the market) and “deployers” (professional users who put the system to use) to handle these complexities.
- Example: A university using AI for grading must conduct a fundamental rights impact assessment.
Risk categories and their implications
Old risk pyramid (2021)
- Prohibited AI Applications: Extreme risks like social scoring or predictive policing.
- High-Risk AI Applications: Fields like biometrics, law enforcement, and education, with stringent requirements.
- Limited Risk Applications: Transparency obligations.
- No or Minimal Risk Applications: Most AI systems fall here, with no additional regulation.
New risk pyramid (2024, Post-ChatGPT)
Generative AI systems like ChatGPT have blurred application boundaries, leading to adjusted frameworks with additional transparency and governance requirements for General Purpose AI (GPAI).
High-risk systems carry many rules and obligations under the AIA, while low-risk systems carry almost none.
What is high risk?
Applications listed in Annex III, such as:
- Biometrics.
- Critical infrastructure management.
- Education and vocational training.
- Employment and workforce management.
- Law enforcement.
- Migration and border control.
High-risk systems face obligations for:
- Risk management.
- Data governance.
- Transparency.
- Human oversight.
Lobbying and complexity
From 2021 to 2024, intense lobbying shaped the AIA’s details.
- Derogation in Article 6(3): Not all Annex III applications are automatically high risk. If AI is used narrowly or procedurally, it might not qualify as high risk.
- Counter-lobbying: Profiling, however, remains high risk, leading to layered exceptions and conditions.
Timeline: key dates
- August 1, 2024: AI Act enters into force.
- February 2, 2025: Prohibitions on unacceptable risk AI become applicable.
- May 2025: AI Office to publish the GPAI code of practice.
- August 2, 2025: Obligations for General Purpose AI apply.
- August 2, 2026: All rules of the AI Act become applicable, including obligations for high-risk systems.
- August 2, 2027: Obligations for all other high-risk systems (Annex I) become applicable.
- 2030: Obligations extend to AI in large-scale information technology EU systems, like the Schengen Information System.
Harmonized standards for high-risk AI must be developed by April 2025, involving public-private cooperation.
Challenges and future directions
Flexibility vs. democratic deliberation
The AI Act allows for rapid adjustments to Annex III and systemic risks. While this flexibility supports fast-moving AI developments, it raises concerns about adequate democratic oversight.
Sustainability and the Green Agenda
Efforts to integrate AI into the EU’s sustainability goals remain limited but are evolving. New provisions consider environmental impacts, though the primary focus remains digital innovation and trustworthiness.
Q&A
1. Can the AI Act keep up with rapid AI developments?
The Act’s flexibility enables swift updates, but questions of democratic legitimacy persist.
2. How does the AI Act align with broader EU goals?
The Act supports human rights, health, and safety but has only started addressing sustainability concerns.
3. Is Article 6(3) exhaustive, or is it more like examples?
It provides examples, leaving room for interpretation and further arguments about high-risk categorization.
The AI Act embodies both ambition and complexity, navigating a rapidly evolving technological landscape while attempting to balance innovation, safety, and fundamental human rights.