And how Luxembourg can avoid these drawbacks in its implementation.
The AI Act adopted by the EU
The AI Act, recently adopted by the European Union, aims to regulate artificial intelligence through a risk-based approach. This approach is intended to balance innovation with the protection of fundamental rights and public safety. However, the paper "Truly Risk-Based Regulation of Artificial Intelligence - How to Implement the EU's AI Act" by Martin Ebers, dated 19 June 2024, critiques the AI Act for not adhering to a genuinely risk-based methodology, which he argues results in overregulation and other shortcomings.
The AI Act will be published in the Official Journal of the European Union in June-July 2024, formally promulgating the new law. It will enter into force 20 days after publication, triggering a series of implementation milestones.
Key criticisms
In his paper, Prof. Ebers contends the AI Act lacks a proper risk-benefit analysis, relies on limited empirical evidence for its risk categories, uses overly broad definitions of AI, and imposes double regulatory burdens due to its horizontal approach. This could result in deterministic software facing the same strict requirements as less predictable machine learning systems, even if they pose lower risks.
Lack of risk-benefit analysis
The AI Act is largely concerned with preventing dangers and risks to health, safety, and fundamental rights, without sufficiently accounting for the potential positive impacts and advantages of AI systems. By failing to weigh risks against rewards, it does not provide a suitable foundation for a proportionate regulatory regime that maximises the common good: whether even significant individual harms are acceptable can only be judged once the communal advantages of a system are taken into account.
For example, Article 5 prohibits certain AI practices deemed to pose an unacceptable risk, such as AI systems that deploy "subliminal techniques beyond a person's consciousness in order to materially distort a person's behaviour" in a manner that causes physical or psychological harm. However, the terms "subliminal techniques" and "materially distort", as well as the threshold for harm, are not clearly defined.
Limited reliance on empirical evidence
The Act's design and risk categories are criticised for lacking a foundation in empirical evidence. The criteria for high-risk AI systems are often not justified by practical evidence but are instead the result of political compromise.
Annex III lists the high-risk AI systems, focusing on specific sectors and use cases. However, there may be AI applications that pose significant risks to rights and safety but fall outside these pre-defined high-risk categories, thus being subject to minimal requirements. The AI Act does not provide a general clause or set of criteria to identify other high-risk applications.
Pre-defined, closed risk categories
The AI Act uses a top-down approach with pre-defined risk categories, which may lead to overregulation in certain areas while neglecting specific, case-by-case risk assessments.
Whether an AI system used in a specific sector for specific purposes poses a high risk to health, safety and/or fundamental rights is not assessed on the basis of the concrete risk it poses; instead, it is pre-defined for typical cases in Annex III.
As a result, this top-down approach leads to two main problems:
Over-regulation, where an AI system falls into one of the categories listed in Annex III but in reality does not pose a significant risk of harm.
The list of typical high-risk AI systems may not be easy for the European Commission to keep up to date in a timely manner, given how rapidly AI technology is evolving.
Additionally, the focus on a pre-defined list of high-risk AI systems creates a sharp divide between this category and other, largely unregulated lower-risk categories. This rigid distinction is hard to justify where an AI system is used in a sensitive sector like healthcare and still poses numerous risks, yet does not qualify as high-risk, as the sketch below illustrates.
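To make the structural point concrete, the following minimal sketch (in Python, purely for illustration) mimics the Annex III logic: high-risk status turns on whether a use case falls into a pre-defined category, not on the concrete risk an individual system poses. The category names are abridged and the two example use cases are hypothetical.

```python
# Illustrative only: high-risk status under Annex III depends on membership in
# a closed list of use-case categories (names abridged here), not on a
# case-by-case assessment of the concrete risk a system poses.
ANNEX_III_CATEGORIES = {
    "biometric identification",
    "critical infrastructure",
    "education and vocational training",
    "employment and worker management",
    "access to essential services",
    "law enforcement",
    "migration and border control",
    "administration of justice",
}

def is_high_risk(use_case_category: str) -> bool:
    """Membership in the pre-defined list, not concrete risk, decides."""
    return use_case_category in ANNEX_III_CATEGORIES

# A low-risk scheduling tool used for "employment and worker management" is
# caught (possible over-regulation), while a riskier system in an unlisted,
# health-adjacent use case is not (possible under-regulation).
print(is_high_risk("employment and worker management"))  # True
print(is_high_risk("wellness coaching"))                 # False
```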
Regulation of general-purpose AI (GPAI) models
The obligations for GPAI providers are inconsistent with the risk-based approach. The Act imposes broad requirements that are difficult to apply effectively to GPAI due to their versatile nature.
The analysis in Prof. Martin Ebers' paper provides strong arguments that the EU AI Act overregulates general-purpose AI (GPAI) models in a way that contradicts the Act's intended risk-based approach:
GPAI model providers cannot foresee, assess and mitigate concrete downstream risks, since by definition GPAI models are characterized by their generality and ability to be integrated into a wide variety of applications. Yet the AI Act imposes extensive risk assessment and mitigation obligations on all GPAI providers.
Some documentation requirements for GPAI providers, like providing detailed descriptions of methods to detect biases, neglect that bias is context-specific and cannot always be anticipated by the upstream model provider.
The obligation for GPAI providers to disclose sensitive information to any downstream provider that "intends" to use the model is problematic, as intent can be faked. This information is also primarily relevant only for high-risk applications.
The criteria for classifying a GPAI model as posing "systemic risk", especially the computational threshold of 10^25 floating point operations (FLOPs), are arbitrary and not based on empirical evidence of harm; the paper argues the threshold was set mainly for political reasons to advantage certain EU companies (a rough sketch of what this threshold means in practice follows after this list).
The specific obligations for systemic risk GPAI models to conduct evaluations, mitigate risks, report incidents, etc. are vague and provide little guidance, since systemic risks are by nature diffuse and hard to quantify compared to application-specific risks.
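To give a sense of scale for that figure, here is a minimal sketch that estimates training compute using the widely cited C ≈ 6 x N x D heuristic (N parameters, D training tokens) and compares it against the 10^25 FLOP threshold. Both the heuristic and the example model sizes are assumptions introduced here for illustration; they are not part of the AI Act or of the paper.

```python
# Illustrative only: rough training-compute estimate via the common 6 * N * D
# heuristic, compared against the AI Act's 10^25 FLOP presumption of
# systemic risk. The heuristic and the example figures are assumptions.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate training compute as 6 * N * D floating point operations."""
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimate meets or exceeds the 10^25 FLOP threshold."""
    return estimated_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# A 70-billion-parameter model trained on 2 trillion tokens comes out around
# 8.4e23 FLOPs, well below the threshold; a much larger hypothetical frontier
# model clears it comfortably.
print(presumed_systemic_risk(70e9, 2e12))     # False
print(presumed_systemic_risk(1.5e12, 15e12))  # True
```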
The preceding objections are likely exacerbated by the fact that the AI Act's definition of AI is viewed as overly broad, covering a wide range of technologies that may not represent major hazards and thereby triggering excessive regulatory obligations.
Implementing the AI Act in Luxembourg to mitigate overregulation
It is still too early to predict how the risks of overregulation will be addressed at the EU level. Once the regulation is published, there is minimal room for modification, which would require repeating a lengthy legislative process. The judiciary could, of course, invalidate some provisions depending on the questions raised in litigation, but this takes time and risks rendering other provisions of the AI Act ineffective.
In the meantime, Luxembourg has the option to implement the AI Act in a way that reduces the risk of overregulation. By focusing on a balanced, evidence-based, and flexible approach, Luxembourg can build an environment conducive to AI innovation while protecting public safety and fundamental rights. Such measures would not only correspond with the AI Act's original intent but would also position Luxembourg as a leader in AI regulation within the European Union.
Conduct comprehensive risk-benefit analysis
Luxembourg should ensure that each AI application is assessed not only for its risks but also for its potential benefits (a simplified illustration follows the list below). This would involve:
Developing a framework for risk-benefit analysis tailored to the Luxembourg context.
Encouraging stakeholders to document both the risks and potential positive impacts of their AI systems.
Balancing regulatory measures with the promotion of innovation and technological advancement.
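As a purely hypothetical illustration of what such documentation might capture, the sketch below records risks and benefits side by side and derives a simple net score. The criteria, scoring scale, and example system are invented for this example and do not reflect any existing Luxembourg framework.

```python
# Hypothetical sketch of a structured risk-benefit record; the criteria and
# scores are illustrative, not taken from the AI Act or any national framework.
from dataclasses import dataclass

@dataclass
class RiskBenefitAssessment:
    system_name: str
    risks: dict[str, int]     # each criterion scored 0 (none) to 5 (severe)
    benefits: dict[str, int]  # each criterion scored 0 (none) to 5 (major)

    def net_score(self) -> int:
        """Simple aggregate: total benefit score minus total risk score."""
        return sum(self.benefits.values()) - sum(self.risks.values())

assessment = RiskBenefitAssessment(
    system_name="diagnostic triage assistant",
    risks={"fundamental rights": 2, "patient safety": 3},
    benefits={"earlier diagnosis": 5, "reduced waiting times": 4},
)
print(assessment.net_score())  # 4: benefits outweigh risks under these scores
```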
Rapidity and agility in enacting policies
Luxembourg might prioritise empirical evidence in its risk assessment process.
This could take the form of a dedicated agency or task force charged with gathering and evaluating legal and factual data on AI risks. The task force should also anticipate economic and regulatory changes so that Luxembourg's legislators and administration can translate that data into legislative texts that give the country a competitive advantage. It could collaborate with academic and research institutions to advance evidence-based policy.
Adopting a flexible, case-by-case approach
To avoid the pitfalls of pre-defined, closed risk categories, Luxembourg may wish to implement a dynamic regulatory framework allowing for case-by-case risk assessments, make use of regulatory sandboxes for controlled testing and evaluation of AI innovations, and ensure that regulatory requirements are proportionate to the actual risks posed by specific AI applications.
Tailor regulations for general-purpose AI (GPAI)
Given the unique challenges posed by GPAI, Luxembourg could develop specific guidelines that focus on transparency and accountability rather than blanket regulations, encourage GPAI providers to create comprehensive documentation to assist downstream users in understanding and mitigating risks, and ensure that GPAI regulations are aligned with international best practices and standards to maintain competitiveness.
Clarify and narrow the AI definition
To prevent overregulation due to an overly broad AI definition, the Luxembourg administration may wish to refine the definition of AI within its national regulatory framework to focus on high-risk technologies, clearly delineate between different types of AI applications and their respective risk levels, and regularly review and adjust that definition to reflect technological advancements and emerging risks.