
[AI Act Conference 2024 - Notes] Legislating Technology: Addressing the law's internal dilemma



These are my notes from one of the AI Act Conference presentations on December 10, 2024.

The presented research examines the general regulation of technology and is a work in progress.

Law as a black box

Inspiration comes from interactions with systems engineers and data scientists. To lawyers, these systems are a black box; to engineers, the law itself is a black box. A noteworthy concern emerged from these conversations: one engineer worried that the law might be updated too quickly. Traditionally, the complaint is that the law is too slow to react, but here the opposite concern arises: adapting the law too rapidly could undermine its predictability and foreseeability.

Another engineer, while generally positive about the AI Act, highlighted that frequent legal changes would create challenges: they can manage the law’s “black box” nature, but frequent changes could undermine stability.

Regulating technology: Legislative policy

Inherent dilemmas within the law

Slower regulation:

Faster regulation:

Characteristics of emerging technologies like AI:

Legislative models

Four ideal-type legislative policy models:

The AI Act attempts to bridge legislative gaps:

AI systems are regulated based on their purpose and context. A system might be banned in one context and permitted in another, reflecting the importance of use-case-specific assessments.

AI Act governance mechanisms

The AI Act employs a mix of governance models, creating a fragmented framework:

Consequences

The AI Act’s patchwork regulatory models may lead to:

Preliminary recommendations

A clear, default regulatory governance framework is needed:

The Act should adopt an administrative legislative policy model with a designated “default regulator”:


Q&A

Question: Currently, guardrails protect stakeholders more than users. What are the implications of conformity assessments?

Answer: Guardrails are primarily designed by companies, though fundamental rights protections aim to safeguard individuals. Conformity assessments are complex due to the subjective nature of risk. Modern legislation is increasingly risk-based, but perceptions of risk vary and evolve, adding another layer of regulatory difficulty.


Question: Have you looked to other disruptive technologies like genomics for inspiration?

Answer: Not yet, but it is a valuable avenue to explore. AI differs significantly from other technologies because it is self-learning and evolves continuously: the AI systems on the market today will not be the same next month. Additionally, AI’s potential for manipulation poses unique challenges. Lessons from dual-use technologies may be applicable but must be adapted to AI’s specificities.


Question: How does a single EU AI Agency align with the AI Board or AI Office?

Answer: This requires mapping current institutions to the concept of a centralized AI Agency. The roles of harmonized standards and institutional powers must be evaluated. This analysis is ongoing and may feature in future research.


Question: Should standardization bodies like CEN-CENELEC address fundamental rights concerns?

Answer: Yes and no. Standardization processes lack the openness of traditional lawmaking. Access to standards is costly, and their creation by private entities introduces bias. However, technical standards are essential for making vague laws actionable. The process could benefit from greater transparency and inclusivity to enhance democratic legitimacy.


Question: Should there be a dedicated AI standardization body with a different approach?

Answer: There is merit in this idea. AI presents unique challenges, warranting new structures. A dedicated body could adopt more democratic representation, provide free and accessible standards, and involve diverse stakeholders, including ethicists and philosophers, to address the complex societal implications of AI.

