UK AI Regulation

The latest development in the UK’s approach to AI regulation is set out in the Department for Science, Innovation and Technology (DSIT) White Paper, “A pro-innovation approach to AI regulation,” published in March 2023. DSIT acknowledges the enormous potential of AI, as well as the serious risks. AI can create novel issues that cannot always be adequately addressed under current legal frameworks, thus necessitating regulation. Regulation itself requires striking a fine balance between innovation and safety, and, as the title suggests, that balance currently lies in favour of innovation. This article will analyse the key elements of the UK’s approach to AI regulation, and how it proposes to meet its key goals: driving growth, increasing trust, and becoming a global leader in AI.

Key Developments

Defining AI

Unlike other jurisdictions, the UK has not sought to define “artificial intelligence” in strict, rigid terms by reference to specific technologies. Instead, it defines AI by its characteristics: technology that is adaptive and autonomous. Although this is a far-reaching and ambiguous “definition”, the paper makes clear that it will be kept under continuous review as our understanding of AI, and the technology itself, develops.

Regulatory framework

The UK has not proposed any specific or significant changes to the law, or any other new regulations or rules. What it has proposed is, in effect, to delegate AI governance to the 90+ industry regulators, who will be expected to apply cross-sector principles to regulate the use of AI within specific industries, rather than the technology itself. The five principles proposed are: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

How each regulator will apply these principles remains to be seen, but the government will not be introducing any statutory framework for the time being. Instead, regulators will use the principles (with government support) to issue guidance, best practices and other resources within their existing powers. However, the paper anticipates the need for statutory intervention at some point in the future, particularly in relation to imposing a duty on regulators to have due regard to the framework principles above. The paper also proposes establishing a “Central Risk Function” to oversee implementation across all sectors, tasked with monitoring and feedback exercises between regulators and government.

Liability

AI’s ability to adapt by learning new patterns and making autonomous decisions presents a new challenge in assigning liability. The stakeholder chain in AI development is complex and, unlike in conventional disputes, the chain of causation behind an undesirable AI output is often ambiguous, making it difficult to determine ultimate liability. This issue is continuously evolving and far from fully understood. There are therefore no proposed changes to deal with AI liability, in the hope that current legal frameworks will suffice whilst the government focuses on fostering innovation. Instead, the government proposes using AI “sandboxes” to support innovators in getting their products to market.

Advantages of the UK’s Approach

The balance between innovation and safety is a hard one to strike. Too much regulation would create barriers for AI developers, deterring them from developing AI in the UK; too little opens the door for AI to cause harm, undermining trust in AI systems and their integration into society. The UK’s “pro-innovation” approach, centred on abstaining from strict rules in favour of “sandboxes” and industry-led guidance, provides a safety net for AI developers: the ability to trial and test AI systems without having to comply with strict rules, or face penalties beyond the regular legal framework. This may well make the UK an attractive destination for developing AI systems, and could pave the way for the UK’s goal of becoming a global AI leader.

Focusing on use, rather than the technology itself, also seems a proportionate way to regulate AI. AI systems vary in their use and the risk they pose: an autonomous vehicle or a medical diagnosis tool, for instance, has far greater potential to cause harm than a customer-service chatbot, and it would not be proportionate to apply the strict rules necessary for the former to the latter. Similarly, there is an argument that regulators are better suited to apply the AI principles above; each sector has different needs, challenges and considerations, which would be difficult to account for in a rigid, centralised framework.

The UK’s approach also offers flexibility. Be it the definition or the principles set out, the UK retains the ability to change its approach as new challenges arise with the rapid evolution of AI. Strict rules or legislation would take time to consider and implement carefully, by which point they may no longer be fit for purpose given the pace at which AI is evolving.

Drawbacks of the UK’s Approach

The UK’s emphasis on innovation is problematic for achieving its goals of safety, fairness and redress. Whilst the absence of legislation will be welcomed by innovators, AI is already causing real issues for real people (as our news section highlights). Be it people’s data, artistic works, or general safety, AI is causing damage on a day-to-day basis, with limited (if any) means of redress for those suffering. Existing legal mechanisms offer some protection, but it is clear that AI creates new issues that do not fit neatly into established legal principles. As AI develops, the impact felt by real people will only increase.

The reliance placed on regulators could also cause several issues. Being tasked with regulating AI for their respective industries is a heavy burden. The development, evolution and rollout of AI is rapid and unpredictable, and regulators will be stretched to keep up, particularly when they are limited by their existing remits and powers. Furthermore, each regulator steering its own course creates a greater risk of inconsistency where regulatory remits overlap. Where an innovator has to meet concurrent or conflicting requirements from separate regulators, the pro-innovation goal is somewhat undermined, and this could itself act as a deterrent to innovators.

How Have Regulators Responded?

There is a degree of consistency in how some of the regulators have responded to the White Paper. The Information Commissioner’s Office (ICO) and the Competition and Markets Authority (CMA) both welcomed the idea of putting regulators in the driving seat of AI regulation, as they are better placed to address the specific challenges in their respective domains. They also welcomed the non-statutory basis of the AI principles above (for now, at least), and the central function proposed to improve consistency amongst regulators.

However, the main criticism centres on the concepts of transparency and accountability. The CMA expressed concern about the consumer protection implications, particularly the lack of accountability in certain AI systems, which may make it difficult to assign liability (and provide routes to redress) under consumer protection principles in their current form. The Law Society echoed this concern, calling for explicit legislation targeting high-risk systems to delineate AI accountability and balance the principles-based regulation.

Conclusion

The UK’s approach to AI regulation highlights the conflict between allowing innovation to flourish and mitigating the risks posed by AI. The flexible, principles-based approach has given innovation a head start, but leaves gaps in safety and fairness, particularly with regard to accountability in AI disputes. This tension between innovation and safety will need to be kept under review as the UK’s approach to regulation develops alongside the rapid evolution of AI.