
AI Act Conference 2024 Keynote notes: Navigating the AI Act



Today I listened in to part of the AI Act Conference hosted by Uppsala University. Below are my notes from the keynote.

AI Act Conference keynote: Navigating the AI Act


The ‘Vortex’ of the AI Act (AIA)

Why does the AI Act feel so overwhelming?

The AI Act is often compared to a vortex: a disorienting, dizzying force pulling everything into its orbit. This pull creates a sense of vertigo for regulators, businesses, and experts alike.


Simplifying the vertigo

The AI Act’s structure involves contributions from various levels—EU, member states, and other authorities—each responsible for implementing, enforcing, and elaborating the Act.

National competent authorities: A struggling timeline

This is not going especially well. While one member state is reportedly “clear,” many remain uncertain or still in the planning stage.


The somewhat confusing basics

1. It’s about product safety… But also fundamental rights

The AI Act aligns with the EU’s product safety regulations (e.g., CE markings for compliance). However, it also addresses risks to fundamental rights, which are harder to quantify and manage than traditional safety concerns.

The legislative framework is a public-private partnership. Political actors don’t always have a good sense of the technical details involved, so the democratic legislature provides the intended guidelines and private actors massage these into technical standards. Industry effectively decides how things should work, since it holds the technical knowledge. There is arguably something democratically strange about this: industry effectively deciding the content of the law. But as long as we are talking about toys and elevators, this does not cause much broader concern. The AIA, however, covers risks to ‘health, safety and fundamental rights’.

2. The risk lies in the application… But sometimes it doesn’t.

While most AI risks emerge from the application rather than the technology itself, some uses inherently pose greater risks to privacy, non-discrimination, and other fundamental rights. The AIA mixes classical product safety regulation (risks to health and safety) with AI systems that also pose risks to fundamental rights, which are not as easy to quantify. Normal EU-style product safety regulation is directed at the manufacturer: if you make this stuffed toy, don’t put in so many chemicals, and so on. But here we have two main actors:

  1. Providers, who develop AI systems and place them on the market.
  2. Deployers, who use AI systems in a professional context.


Risk categories and their implications

Old risk pyramid (2021)

  1. Prohibited AI Applications: Extreme risks like social scoring or predictive policing.
  2. High-Risk AI Applications: Fields like biometrics, law enforcement, and education, with stringent requirements.
  3. Limited Risk Applications: Transparency obligations.
  4. No or Minimal Risk Applications: Most AI systems fall here, with no additional regulation.

New risk pyramid (2024, Post-ChatGPT)

Generative AI systems like ChatGPT have blurred application boundaries, leading to adjusted frameworks with additional transparency and governance requirements for General Purpose AI (GPAI).

High-risk systems are subject to many rules and obligations under the AIA, while low-risk systems face almost none.


What is high risk?

Applications listed in Annex III, such as:

  1. Biometric identification and categorisation
  2. Education and vocational training
  3. Employment and worker management
  4. Law enforcement
  5. Migration, asylum, and border control
  6. Administration of justice and democratic processes

High-risk systems face obligations for:

  1. Risk management
  2. Data quality and governance
  3. Technical documentation and record-keeping
  4. Transparency towards deployers
  5. Human oversight
  6. Accuracy, robustness, and cybersecurity


Lobbying and complexity

From 2021 to 2024, intense lobbying shaped the AIA’s details.


Timeline: key dates

The Act entered into force on 1 August 2024 and applies in stages: prohibitions from February 2025, rules for GPAI from August 2025, and most high-risk obligations from August 2026. Harmonized standards for high-risk AI must be developed by April 2025, involving public-private cooperation.


Challenges and future directions

Flexibility vs. democratic deliberation

The AI Act allows for rapid adjustments to Annex III and systemic risks. While this flexibility supports fast-moving AI developments, it raises concerns about adequate democratic oversight.

Sustainability and the Green Agenda

Efforts to integrate AI into the EU’s sustainability goals remain limited but are evolving. New provisions consider environmental impacts, though the primary focus remains digital innovation and trustworthiness.


Q&A

1. Can the AI Act keep up with rapid AI developments?
The Act’s flexibility enables swift updates, but questions of democratic legitimacy persist.

2. How does the AI Act align with broader EU goals?
The Act supports human rights, health, and safety but has only started addressing sustainability concerns.

3. Is Article 6(3) exhaustive, or is it more like examples?
It provides examples, leaving room for interpretation and further arguments about high-risk categorization.


The AI Act embodies both ambition and complexity, navigating a rapidly evolving technological landscape while attempting to balance innovation, safety, and fundamental human rights.
