These are my notes from one of the presentations at the AI Act Conference on December 10, 2024.
- Addressing the law’s internal dilemma
- Stanley Greenstein, Associate Professor in Law and IT, Faculty of Law, Stockholm University.
The research examines the general regulation of technology and is a work in progress.
Law as a black box
Inspiration comes from interactions with systems engineers and data scientists. To lawyers, AI systems are a black box; to engineers, the law itself is a black box. A noteworthy concern emerged in these conversations: one engineer worried that the law might be updated too quickly. Traditionally, the complaint is that law is too slow to react, but here the opposite concern arises: adapting the law too rapidly could undermine predictability and foreseeability.
Another engineer, while generally positive about the AI Act, highlighted that frequent legal changes would create challenges: engineers can manage the law’s “black box” nature, but frequent changes would undermine stability.
Regulating technology: Legislative policy
Inherent dilemmas within the law
Slower regulation:
- Legal predictability
- Time for deliberation
- Law based on past experiences
- Risk: a diminished protective function and a growing disconnect from technology
Faster regulation:
- Regulatory flexibility
- Diminished predictability
Characteristics of emerging technologies like AI:
- Fast-paced development
- Self-learning, complexity, speed, and unpredictable trajectories
Legislative models
Four ideal-type legislative policy models:
- Statutory
- Administrative
- Judicial
- Outsourced
The AI Act attempts to bridge legislative gaps:
- It applies a range of governance mechanisms (detailed below)
- It promotes a mix of strategic legislative policy models, causing regulatory uncertainty
AI systems are regulated based on their purpose and context. A system might be banned in one context and permitted in another, reflecting the importance of use-case-specific assessments.
AI Act governance mechanisms
The AI Act employs a mix of governance models, creating a fragmented framework:
- Delegated acts: Core decisions remain within the EU legislative framework.
- Harmonized standards: Technical standards developed by standardization bodies such as CEN-CENELEC at the request of public authorities. These documents guide compliance but are costly to access, produced in closed processes, and carry significant legal effects.
- Common specifications: A fallback mechanism that applies if harmonized standards are inadequate or delayed.
- Conformity assessments: Companies often self-assess compliance, outsourcing regulatory functions to private entities.
- Certificates: Formalized compliance markers.
Consequences
The AI Act’s patchwork of regulatory models may lead to:
- Legal uncertainty and inconsistency
- Challenges in understanding and applying the Act
- Divergent development directions for legal interpretations
- Limited political accountability due to private entities making key decisions
Preliminary recommendations
A clear, default regulatory governance framework is needed:
- The AI Act should identify a primary legislative policy model to structure regulatory authority.
- Secondary models could be used in exceptional, justified cases.
The Act should adopt an administrative legislative policy model with a designated “default regulator”:
- A centralized EU AI Agency with broad authority for AI rule-making
- The agency should operate within the legal framework of the AI Act, creating specific rules to achieve its objectives
Q&A
Question: Currently, guardrails protect stakeholders more than users. What are the implications of conformity assessments?
Answer: Guardrails are primarily designed by companies, though fundamental rights protections aim to safeguard individuals. Conformity assessments are complex due to the subjective nature of risk. Modern legislation is increasingly risk-based, but perceptions of risk vary and evolve, adding another layer of regulatory difficulty.
Question: Have you looked to other disruptive technologies like genomics for inspiration?
Answer: Not yet, but it is a valuable avenue. AI differs significantly from other technologies because it is self-learning and evolves continuously; the AI systems on the market today will not be the same next month. Additionally, AI’s potential for manipulation poses unique challenges. Lessons from dual-use technologies may be applicable but must be adapted to AI’s specificities.
Question: How does a single EU AI Agency align with the AI Board or AI Office?
Answer: This requires mapping current institutions to the concept of a centralized AI Agency. The roles of harmonized standards and institutional powers must be evaluated. This analysis is ongoing and may feature in future research.
Question: Should standardization bodies like CEN-CENELEC address fundamental rights concerns?
Answer: Yes and no. Standardization processes lack the openness of traditional lawmaking. Access to standards is costly, and their creation by private entities introduces bias. However, technical standards are essential for making vague laws actionable. The process could benefit from greater transparency and inclusivity to enhance democratic legitimacy.
Question: Should there be a dedicated AI standardization body with a different approach?
Answer: There is merit in this idea. AI presents unique challenges, warranting new structures. A dedicated body could adopt more democratic representation, provide free and accessible standards, and involve diverse stakeholders, including ethicists and philosophers, to address the complex societal implications of AI.