EU AI Regulation

In April 2021, the European Commission brought forward its proposal for the Artificial Intelligence Act, the world’s first substantial set of “harmonised rules on artificial intelligence.” Subsequently, in December 2022, the Council of the European Union approved its common position with various amendments to the Commission’s proposal. More recently, the European Parliament approved its negotiating position, with further amendments, triggering the final stages of the EU legislative process. Whilst the final terms of the Act are yet to be agreed, the EU is leading the way in setting out comprehensive legislation to regulate AI. This article discusses the key takeaway points from the proposed AI Act and the proposed AI Liability Directive.

Risk-based Regulation

The hallmark of the Act is a risk-based categorisation of AI systems, with each category attracting a different degree of regulatory obligations:

Unacceptable Risk

At the highest level are systems deemed to present an “unacceptable” risk to citizens’ fundamental rights, which will be prohibited throughout the EU. The Commission’s draft stipulates that prohibited “practices” include those which use subliminal techniques to distort behaviour or cause harm, those which exploit vulnerable groups, governmental social scoring, and real-time biometric identification (with limited exceptions).

The European Parliament went further, proposing that biometric identification systems be prohibited entirely, not merely in “real-time.” Parliament also adopted more specific examples of unacceptable systems, including the categorisation of people based on protected characteristics, the prediction or profiling of individuals by law enforcement, the indiscriminate scraping of biometric data, and the detection of emotion or expression to make judgments.

High Risk

The Act sets out a range of high-risk systems, covering those which pose a risk of harm or adverse impact (a lower threshold than unacceptable risk) to the rights of citizens. High-risk systems will be subject to strict regulations, including obligations around risk management, data governance, technical documentation, human oversight, and accuracy and robustness.

Parliament’s proposed amendments suggest classifying high-risk systems as those which pose a “significant” risk to fundamental rights, a seemingly higher threshold than the one proposed by the Commission. Perhaps the most significant amendment is an added transparency obligation under the proposed Article 28b. This would apply to providers of generative systems producing content such as text, imagery, audio or video (i.e. large language models like ChatGPT), and would impose an obligation on providers to disclose a summary of any training material protected under copyright law.

Limited Risk

At the lower end of the regulatory requirements imposed by the Act are limited-risk systems: those intended to interact with people. The proposed regulations include informing users that they are interacting with an AI system where this is not obvious (save for law enforcement AI), including where content which appears to be authentic has been generated by AI.

Minimal Risk

These systems can continue development in accordance with existing national and EU law. 

Non-Compliance

Companies which fail to comply with the requirements set out in the unacceptable- and high-risk provisions face penalties of up to €30 million or 6% of annual worldwide turnover, whichever is greater. Failure to comply with other provisions of the Act may result in fines of up to €20 million or 4% of annual worldwide turnover.
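The “whichever is greater” mechanism can be illustrated with a short sketch. The function name and the integer-percentage representation are my own choices for illustration; the figures come from the draft Act and may change in the final text.

```python
def penalty_cap(annual_turnover_eur: int, fixed_cap_eur: int, turnover_pct: int) -> int:
    """Maximum fine: the greater of a fixed cap and a percentage of worldwide turnover."""
    return max(fixed_cap_eur, annual_turnover_eur * turnover_pct // 100)

# Unacceptable/high-risk breaches: up to €30m or 6% of annual worldwide turnover.
# For a company turning over €1bn, the 6% figure (€60m) exceeds the fixed cap.
print(penalty_cap(1_000_000_000, 30_000_000, 6))

# Other provisions: up to €20m or 4%. For a €100m turnover, the fixed cap applies.
print(penalty_cap(100_000_000, 20_000_000, 4))
```

The practical effect is that the fixed cap acts as a floor for large fines: for smaller companies the fixed figure dominates, while for large multinationals the turnover percentage becomes the binding number.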

Key Points from the AI Liability Directive

In 2022, the Commission supplemented the AI Act with a proposal for an AI Liability Directive, aimed at adapting non-contractual civil liability rules to artificial intelligence. Liability is an overarching concern for both developers and users of AI, and the Directive is an attempt to reduce that barrier to AI adoption by giving greater certainty about the legal principles which will apply to AI disputes:

Disclosure of Evidence

The first significant step gives courts in member states the power to order disclosure of relevant evidence in relation to damage suspected to have been caused by high-risk AI systems. It is hoped that this provision will give potential claimants greater clarity on whether or not they have a claim, and prevent the unnecessary cost and time burdens of pursuing the wrong entity. To balance the rights of potential defendants and minimise the cost and time of blanket requests, the Commission has ensured that disclosure will only be ordered where the claimant has presented facts and evidence sufficient to support the plausibility of the claim, and the disclosure is limited to what is necessary and proportionate.

Presumption of Causality

In most cases of fault-based liability, it is for the Claimant in proceedings to prove that the Defendant breached a duty (i.e. a contractual or statutory duty, or a general duty of care), and that the breach caused damage. The Directive aims to reverse that burden in favour of Claimants, making it easier to seek redress. Where a Claimant can demonstrate that: 1) the Defendant has breached a duty set out in national or EU law (for instance one of the obligations set out in the AI Act), 2) it is likely that the fault influenced the output of the AI system, and 3) the output caused damage, the national court will presume that the Defendant’s breach caused the damage.

This presumption is rebuttable, meaning that if a Defendant can prove that the damage was caused by something other than their breach, the presumption of causality will not apply. Furthermore, the presumption does not apply automatically where the Claimant can adequately prove the cause of the damage without it, and in non-high-risk AI cases it applies only if it is “excessively difficult” to prove the causal link.
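The conditions above amount to a small decision procedure, which can be sketched as follows. This is a hypothetical simplification for illustration; the function and parameter names are my own, not terms from the Directive, and a real court’s assessment is of course not boolean.

```python
def causality_presumed(breach_of_duty: bool,
                       fault_likely_influenced_output: bool,
                       output_caused_damage: bool,
                       high_risk_system: bool,
                       claimant_can_prove_causation: bool,
                       excessively_difficult_to_prove: bool,
                       defendant_rebuts: bool) -> bool:
    """Sketch of when a national court would presume causation under the Directive."""
    # All three limbs must be demonstrated by the Claimant.
    if not (breach_of_duty and fault_likely_influenced_output and output_caused_damage):
        return False
    # The presumption is unnecessary where causation can be proven directly.
    if claimant_can_prove_causation:
        return False
    # For non-high-risk systems, it applies only where proof is excessively difficult.
    if not high_risk_system and not excessively_difficult_to_prove:
        return False
    # The presumption is rebuttable: the Defendant may show another cause of the damage.
    return not defendant_rebuts
```

Walking through the branches makes the structure clear: the reversal of the burden is not absolute, but gated by the three limbs, the availability of direct proof, the risk category, and the Defendant’s opportunity to rebut.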

Advantages of the EU approach

The EU is leading the way in setting out comprehensive rules to govern AI, which brings several domestic and international advantages. For the EU specifically, a centralised system coordinates policy across all member states and avoids the possibility of each country diverging in its approach and creating inconsistent, incompatible rules. Having oversight of 27 states may explain why the EU, compared with countries concerned with only one jurisdiction, is leading the race on regulation. The benefit also extends beyond the EU’s borders. The AI Act will have extra-territorial effect, meaning any provider who deploys AI in the EU will be bound by the regulations. This will see companies outside the remit of the EU (e.g. those operating from the UK) indirectly required to comply with the EU regulations, even when they are not bound by their own national rules. For example, a provider of AI from the UK, where there are as yet no strict rules on AI, may decide to meet the requirements set out in the AI Act as a gold standard, demonstrating a “Brussels Effect” for the EU’s AI policies.

The EU’s approach could also offer more certainty and a more equitable balance between the interests of developers and users. For developers, the Act paints a picture of their obligations from the very outset, which may be advantageous compared with developing a system in a regulation-free environment only to be hit down the line with legislation or judicial decisions which adversely affect their products. Furthermore, the risk-based approach ensures that obligations are proportionate, and that lower-risk systems do not face the same heavy regulation as high-risk ones. For users, it gives greater clarity as to the route to redress should they suffer damage as a result of an AI system. The reversal of the burden of proof and the disclosure of evidence set out under the AI Liability Directive will build trust with users, who will know they have greater protections when bringing a claim, rather than relying on existing legal principles which do not easily apply to AI.

Drawbacks of the EU’s Approach

The volume and complexity of the proposed regulations could present a barrier to innovation, the burden of which could outweigh the benefit of access to the EU market. Although the terms of the Act are yet to be finalised, it is clear that developers will have to comply with lengthy and complex requirements. Complexity in its own right may be enough to put developers off, particularly where other jurisdictions provide regulation-free environments. The regulations also come with significant costs: recruiting new staff to carry out compliance work, or paying consultancy fees for advice, will be felt most acutely by SMEs. In any event, the prospect of being fined up to €30 million or 6% of annual turnover for breaches may be enough to deter developers from the outset.

The proposed AI Liability Directive will also make developers approach the EU with caution. Consumer protection measures like the presumption of causality and the disclosure obligations could open the floodgates for AI-related claims, adding cost burdens to developers before any questions of liability are actually determined.

Conclusion

The European Union’s Artificial Intelligence Act and AI Liability Directive represent a groundbreaking effort to regulate AI through a risk-based approach, aiming to balance innovation with safety and accountability. While the comprehensive framework sets a potential global standard, it also raises concerns about stifling innovation due to its complexity and the associated costs of compliance. As the EU finalises this pioneering legislation, the challenge will be to adapt and evolve these regulations to foster technological advancement while ensuring public safety and ethical considerations.