The Risks of AI

A Scoping White Paper

INTRODUCTION

In the modern world of technological advancement, few technologies have promised, or delivered, more than Artificial Intelligence (AI). The concept itself evokes everything from optimistic visions of the future to cautionary tales of robots taking over the world. Regardless of one’s perspective, there is no denying that AI has become embedded in modern society, redefining industries, reshaping economies, and reimagining the way we interpret the world around us.

 

However, the ascent of AI is not merely a tale of triumphs. Its rise is fraught with pitfalls that threaten to undo its many benefits; the spectrum between promise and peril has put AI at the crossroads of several ethical and legal issues, which will only multiply as the technology evolves.

 

AI encompasses a multifaceted suite of technologies, from machine learning algorithms that predict stock market trends to natural language processing tools that translate languages in real time. Yet, regardless of its form or function, the primary concern remains the same: how do we harness the power of AI while safeguarding against its inherent risks?

To understand the gravity of the situation, consider this: AI systems are only as good as the data they’re trained on. In a world awash with data, there is often an implicit trust that the data used is unbiased, comprehensive, and accurate. But what happens when that data is flawed? Or worse, what if the data reflects the implicit biases of society at large? The outcome, in many cases, can be AI systems that inadvertently perpetuate or even amplify these biases. Such concerns aren’t just theoretical; they manifest in real-world applications, from facial recognition software with racial biases to hiring algorithms that inadvertently favour one gender over another.

 

But biases are just the tip of the iceberg. As AI systems become more autonomous, questions surrounding liability become paramount. For instance, in the event of an accident involving a self-driving car, who is at fault? The manufacturer of the car? The developers of the AI system? Or perhaps the user who might have overridden the system? These are not just philosophical questions; they have real-world implications that affect insurance policies, legal precedents, and public safety.

Furthermore, the very nature of AI – its ability to process vast amounts of data quickly and in ways that might not always be transparent to humans – introduces concerns around data privacy. The personal information of millions, if not billions, of individuals is processed daily. This data is the lifeblood of many AI systems, but it also represents a risk if exploited by malicious actors.

 

Legislators around the world are grappling with these issues whilst balancing the interests of developers and users. On one hand, there is pressure to innovate and lead in a competitive market; on the other, there are regulations, ethical considerations, safety risks and potential litigation to navigate. The current position adopted by the UK government favours the former, leaving developers and users exposed to legal risk, as AI gives rise to new issues that are not easy to place within conventional legal principles.

 

In the pages that follow, these challenges are explored in further detail, with references to real cases, unpacking some of the key legal risks that AI presents and their implications for both developers and users. Through this exploration, we aim to raise awareness of the scope of AI risks so that readers may be better informed as to the potential issues that can arise.

Data Privacy and Protection

The primary legal instruments governing data protection in the UK are the UK General Data Protection Regulation (GDPR) and the Data Protection Act 2018. These laws are designed to safeguard personal data and uphold the rights of individuals by regulating how organisations obtain, use, and manage personal information. They stipulate principles such as lawfulness, fairness, transparency, storage limitation, integrity, and confidentiality, which set out what data controllers must do, and how data subjects can enforce their rights.

 

Under these regulations, entities that process personal data must (amongst other things) establish a lawful basis for processing, such as the consent of data subjects, provide clear information about data use, protect the data from unauthorised access, and enable individuals to access, rectify, and erase their data. Moreover, they must conduct Data Protection Impact Assessments for high-risk processing and report data breaches within stipulated time frames.

 

Many AI systems require personal data, with vast quantities obtained, stored and processed throughout their life-cycle. This data can be acquired in various ways. Direct user input is a common method, such as when users interact with AI chatbots or virtual assistants. However, indirect collection also occurs through the observation of user behaviour on digital platforms, through third-party providers, and by generating new data through inferences drawn from existing datasets.

 

The opaque and complex nature of AI systems presents inherent risks in complying with the extensive provisions of the GDPR and the Data Protection Act:

Consent and Transparency:

Where consent is relied upon, an individual must consent to their personal data being obtained, processed and stored. In some circumstances, a user may give express consent, such as by agreeing to standard terms and conditions. However, the consent requirement may not always be so clear. For example, if an AI system uses data scraping tools, where vast amounts of data are obtained from third parties, there may be circumstances where the consent relied upon (i.e. that given to the third party) does not extend to the AI system itself. Furthermore, the algorithms used within AI systems are often complex and difficult for an average person to understand. Therefore, even where consent has been given by a user, AI systems may contravene the “informed consent” principle set out under the GDPR, meaning AI developers could be processing personal data unlawfully. The issue of complexity also extends to transparency obligations; users must be informed about how their data is being used in a clear and transparent manner, which may not always be achievable given how opaque these systems can be.

Data Minimisation:

The GDPR stipulates that controllers should minimise the personal data they collect, store and process to that which is relevant and necessary for a specific purpose. The design of many AI systems inherently favours the accumulation of extensive data; such systems are often programmed to continuously gather and process information to refine their algorithms and improve accuracy. Collecting and processing large volumes of data can easily conflict with the data minimisation principle, as AI systems might gather more data than is strictly necessary for their intended purpose.
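
By way of illustration only, the following short Python sketch shows one way a developer might apply the principle in practice, by stripping out fields that are not justified for a stated purpose before anything is stored or passed to a model. The field names and the purpose are hypothetical, not drawn from any particular system.

    # Minimal illustrative sketch (hypothetical field names): retaining only the
    # data justified for a stated purpose, in the spirit of data minimisation.
    RAW_RECORD = {
        "user_id": "u-123",
        "query_text": "What are your opening hours?",   # needed to answer the query
        "timestamp": "2024-01-01T10:00:00Z",             # needed for session handling
        "device_id": "d-456",                            # not needed for this purpose
        "location": "51.50,-0.12",                       # not needed for this purpose
    }

    # Fields justified for the specific purpose ("respond to customer queries").
    PERMITTED_FIELDS = {"user_id", "query_text", "timestamp"}

    def minimise(record: dict, permitted: set) -> dict:
        """Drop any field not justified for the stated processing purpose."""
        return {key: value for key, value in record.items() if key in permitted}

    stored = minimise(RAW_RECORD, PERMITTED_FIELDS)
    print(stored)  # device_id and location are never stored or fed to the model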

Automated Decision Making:

The GDPR also offers protections to data subjects regarding automated processing. Subject to certain exemptions, individuals have the right not to be subject to decisions based solely on automated processing (e.g. profiling) which produce legal or similarly significant effects on them. AI, by its very nature, has automated processing at its core; AI systems are often designed to make decisions or predictions based on the processing of large datasets. These decisions can range from personalised recommendations to more impactful determinations such as credit scoring, job applications, and healthcare diagnoses. An individual’s rights against automated processing are clearly at risk in such circumstances, especially where the system lacks safeguards such as human oversight.
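
The following Python sketch illustrates one common safeguard: routing any decision with a significant effect on an individual to a human reviewer, so that it is never based solely on automated processing. The fields, thresholds and decision logic are hypothetical and purely illustrative.

    # Minimal illustrative sketch (hypothetical fields and thresholds): a human-review
    # safeguard so that significant decisions are not made solely by automated means.
    from dataclasses import dataclass

    @dataclass
    class Decision:
        applicant_id: str
        score: float        # output of an automated scoring model (assumed)
        significant: bool   # e.g. refusal of credit or rejection of a job application

    def route(decision: Decision, review_queue: list) -> str:
        # Decisions with significant effects go to a human reviewer rather than
        # being applied automatically.
        if decision.significant:
            review_queue.append(decision)
            return "pending human review"
        return "auto-approved" if decision.score >= 0.5 else "auto-declined"

    queue: list = []
    print(route(Decision("a-1", 0.42, significant=True), queue))   # pending human review
    print(route(Decision("a-2", 0.91, significant=False), queue))  # auto-approved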

Security:

Data security is a critical component of the legislation, requiring that personal data be processed in a manner that ensures its security, including protection against unauthorised processing and against accidental loss, destruction or damage. The sheer volume of data processed by AI systems increases the risk of data breaches: more data means more potential points of vulnerability. The handling of large datasets also increases the complexity of securing data, making it more challenging to maintain stringent security measures. Furthermore, the aforementioned issue of complexity presents security risks of its own; AI algorithms can be complex and may contain hidden vulnerabilities that are not immediately apparent, even to their developers. These vulnerabilities can lead to accidental leaks, or even cyber attacks, resulting in unauthorised access to or manipulation of personal data.

Intellectual Property Concerns

Intellectual property law in the United Kingdom provides a set of legal protections for intellectual creations and innovations, spread across several legal concepts, the most common of which are copyright, patents, and trademarks. Copyright law grants creators exclusive rights over their original literary, artistic, musical, and dramatic works, including protection for written texts, software, music, films, and other creative content. Patent law provides exclusive rights to inventors for their new and innovative inventions. Trademark law, meanwhile, safeguards distinctive signs and symbols that represent goods and services, enabling consumers to identify and distinguish products in the marketplace.

 

By having these protections in place, individuals and companies can have confidence that their creations are recognised and protected against reproduction by third parties, with well established routes to limit the damage caused by infringement. For instance, those whose IP has been infringed can pursue litigation to obtain various remedies, such as injunctions, damages, and even the destruction of the infringing property.

 

However, the rapid development of AI has opened the door to a wave of new issues within intellectual property:

Data Collection and Training:

As already mentioned, AI systems are heavily data-driven and rely on vast quantities of data to carry out their function. We have already seen how this can include obtaining personal data, and the potential consequences that might follow. However, these datasets can often include copyrighted material, such as texts, images, or audio, obtained by scraping data at scale from websites, social media, news agencies and beyond. This gives rise to the first major issue, at the training phase: are companies scraping copyrighted material without the consent of its owners, and if so, to what extent does this constitute a breach of copyright law?

Content Generation:

After the training phase, AI systems use their training data in various ways, which can extend the litigation risk further. The most familiar is content generation, through platforms like ChatGPT or Midjourney, where users enter prompts for the AI to generate specific content. When drawing on the data patterns they have been trained on, these systems can generate text, images or music that closely resembles the copyrighted work. Alternatively, the output may be entirely new content that bears no obvious resemblance to the source material, yet was only possible because of the use of that copyrighted material in the first place. The net effect in both cases is that original authors could find themselves competing with their own work. Naturally, this has the potential to give rise to lawsuits for the misuse of copyrighted material.

Discrimination and Bias

Perhaps with wider application than the issues in the previous section, AI has the potential to perpetuate systemic biases in its outputs, with the possibility of harmful, discriminatory effects on ordinary individuals. As AI expands its scope across industries, the potential for discrimination increases.

 

The UK’s anti-discrimination law is set out in the Equality Act 2010. Pre-2010, there were various statutes addressing separate discrimination issues within the workplace and society generally; the 2010 Act harmonised these into a single piece of legislation. It sets out, amongst other things, the types of discriminatory behaviour that are prohibited, the protected characteristics, and various routes to enforcement of the obligations. The provisions on discrimination have a wide application, but the key concepts are as follows: direct discrimination (treating someone less favourably because of a protected characteristic), indirect discrimination (a policy or practice that applies to everyone but disadvantages a group sharing a protected characteristic), and harassment and victimisation related to a protected characteristic.

The main cause for concern with AI and discrimination is its capacity to inadvertently produce biased or discriminatory outcomes. There are several ways in which AI risks contravening these statutory obligations, or alternatively creating nuanced issues which are not fully addressed within the legislation:

Data Bias:

As we have already seen, data is the lifeblood of AI systems, which are trained on vast quantities of it (such as personal data or IP-protected material). Using complex algorithms, systems learn from that data to carry out their function. This presents a risk of discrimination where the datasets on which AI systems are trained are themselves biased, particularly where the output carries out a human-like function (e.g. decision-making), as the output can reflect the bias displayed in the data. If personal data is obtained from a specific demographic, such as individuals of a certain ethnicity, wealth, industry or location, the output may not reflect the wider and more diverse context in which the system is applied. For example, a system designed to screen job applications might carry a greater risk of perpetuating unfair outcomes for female applicants if it is predominantly trained on data relating to male candidates. The risk is not limited to the replication of biases; in some cases AI can amplify them. Algorithms often seek patterns to make predictions or decisions, and in doing so they risk over-emphasising biased aspects already present in the training data, increasing the scale of the biased outcome.
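
As a purely illustrative example, the short Python sketch below (the data and field names are hypothetical) shows how a skew in historical hiring records can be surfaced with a simple selection-rate comparison across groups, the kind of skew a model trained on that data is liable to reproduce or amplify:

    # Minimal illustrative sketch (hypothetical data): checking whether historical
    # hiring records used as training data already encode a skew between groups.
    training_data = [
        {"gender": "male", "hired": True},    {"gender": "male", "hired": True},
        {"gender": "male", "hired": True},    {"gender": "male", "hired": False},
        {"gender": "female", "hired": False}, {"gender": "female", "hired": True},
        {"gender": "female", "hired": False}, {"gender": "female", "hired": False},
    ]

    def selection_rate(records, group):
        """Proportion of applicants in `group` recorded as hired."""
        rows = [r for r in records if r["gender"] == group]
        return sum(r["hired"] for r in rows) / len(rows) if rows else 0.0

    male_rate = selection_rate(training_data, "male")      # 0.75 in this toy data
    female_rate = selection_rate(training_data, "female")  # 0.25 in this toy data
    ratio = female_rate / male_rate if male_rate else float("nan")
    print(f"male: {male_rate:.2f}, female: {female_rate:.2f}, ratio: {ratio:.2f}")
    # A ratio well below 1.0 indicates a skew that a model trained on this data
    # is likely to reproduce, and potentially amplify, in its own decisions.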

Design Flaws:

Even where the training data is free of obvious biases, flaws in the design of the AI system itself present a risk of producing discriminatory outcomes. At the heart of AI design are the modelling choices and assumptions made by developers. These choices include selecting which variables to include in the model and how to weight them. If these choices rest on flawed assumptions or biased perspectives, the model may prioritise certain groups over others. For example, if an AI hiring tool is designed to favour candidates with certain educational backgrounds without considering the socio-economic factors that influence educational opportunities, it can lead to discrimination against candidates from less privileged backgrounds.

 

The choice of algorithm itself can be a source of bias. Some algorithms may be more prone to bias than others, especially those that rely heavily on historical data patterns, such as certain types of machine learning models. Furthermore, systems based on machine learning often lack the ability to understand the broader context of the data they process. They are typically designed to identify patterns within the data they are trained on, without an understanding of the social, cultural, or historical context, or the ability to adapt to changes in those contexts in real time. This can lead to misinterpretations and biased decisions.

Misinformation and Inaccuracy

The outputs produced by AI systems have the potential to revolutionise the way in which individuals carry out basic tasks, from summarising research material to producing draft documents in a fraction of the time. However, this benefit is often offset by the risk of hallucinations, a colloquial term for inaccurate and misleading information produced by AI systems. Hallucinations can take various forms, from generating non-existent data to providing incorrect or biased interpretations. Misinformation, similarly, involves the dissemination of false or misleading information, but often with the implication of human involvement or error in the AI’s learning process.

 

The root causes of these issues are multifaceted. They stem from limitations in AI training data, which (as set out above) might contain biases or inaccuracies, or lack diversity. Additionally, AI models, particularly those based on machine learning, often struggle to understand context, leading to errors when interpreting complex or nuanced scenarios. The absence of real-world understanding or common sense further exacerbates these challenges, resulting in AI outputs that are plausible yet factually incorrect. Without human intervention on either side, users can easily struggle to distinguish fact from fiction.

 

The effects of hallucinations differ depending on the context in which the outputs are applied. For instance, a student using ChatGPT for research purposes may be left a little red-faced if they stand up in class and recite a fact which ultimately proves to be untrue; here the risk of misinformation is relatively mild. However, there are live examples (detailed below) where relying on written misinformation can have far more serious consequences, such as when it is relied upon in regulated professions, or used to mislead the reader into believing false information about another person. The consequences are greater still when AI is used in other contexts. In the realm of social media and news, AI-generated misinformation can contribute to the spread of fake news, influencing public opinion and even elections. In areas like autonomous driving, hallucinations can have immediate safety implications. And in healthcare, AI has already been shown to provide inaccurate diagnoses or treatment suggestions, risking detrimental consequences for patients.

Liability and Accountability

Perhaps the most universal issue underpinning all of the risks listed above is the uncertainty surrounding liability and accountability. The legal system in the UK is a compilation of statute and case law, in some cases dating back centuries, which together have paved the way for established principles of law: contract, negligence and consumer rights, to name a few. Though the law is never certain, both ordinary individuals and companies have an established set of rules, and a clear path through the judicial system to enforce them and to obtain decisions where the lines are not clear.

 

All legal precedents and developments to date have had one thing in common: human decision-making at their core. AI, however, presents an entirely novel issue: autonomous decision-making. We have seen above just some of the ways in which AI poses a risk of harm, all of which raise the question of what happens when that harm is actually caused. Who is ultimately responsible for the actions of an AI system? Where does the fault lie? Did the fault actually cause the harm? How do ordinary people take action against a harmful outcome? How do they prove something actually went wrong?

 

All of these issues merely scratch the surface of the liability and accountability questions surrounding AI. Legislators are constantly grappling with the balance between having laws in place which protect individuals from harm, and avoiding barriers to innovation that would stifle the benefits AI can bring. The EU is at the forefront of setting down rules to mitigate the risks of AI. The UK, however, has adopted a pro-innovation approach and, whilst it acknowledges the litigation risks associated with AI, it proposes to rely on existing legal frameworks to deal with AI disputes. The difficulty is that the issues arising out of AI disputes do not fall neatly within existing legal principles.

Apportioning liability:

In regular day-to-day disputes, there are well established laws and principles that govern relationships between parties: for instance, the contractual relationship between employer and employee or business and supplier, or duties of care established in the common law and statute. Disputes arise when legal obligations are breached, and the wronged party has a cause of action against the wrongdoer. Wrongdoers are legal entities which can be pursued in court: individuals, businesses, companies, government, insurers, all of which have individuals who are ultimately accountable for that entity’s actions.

 

AI in and of itself is not a legal entity, yet it has the capability to make autonomous decisions which can directly cause harm. Furthermore, there is no legal precedent to determine which legal entity is ultimately responsible when AI causes harm. This is largely because there are several stakeholders in AI development, and it is difficult to determine which part of the life-cycle ultimately causes the undesired harm: code writers, algorithm developers, training engineers, data sources, data scientists responsible for configuration, operators and owners all play a part in the AI life-cycle. Matching an undesired outcome to a fault within the development is near impossible compared with regular, human-to-human disputes. Prospective claimants therefore face an uphill battle at the first litigation hurdle: who do I claim against?

Establishing a breach:

As with the absence of a legal entity responsible for an AI system’s actions, there is also a lack of precedent to determine the standards that AI systems must adhere to. The EU is at the final stages of approving the AI Act, an EU Regulation which proposes to impose several statutory duties on AI developers and sets out the ways in which individuals can take action if harm is caused. In the UK, however, there are no AI-specific duties imposed on developers, nor is there any statutory regulation on the horizon, with the government opting to rely on existing legal frameworks.

 

There are several examples where AI does not fit neatly into established legal principles. For instance, the Consumer Protection Act 1987 sets out the law on product liability, namely that someone who suffers loss due to a defective product has a cause of action in damages against the producers and suppliers of that product. Establishing a “defect” in an AI system is not clear cut: it is difficult to pinpoint an actual defect in the system, and even then it is difficult to say that the defect actually caused the harm, for instance where an undesirable outcome results from the autonomous nature of the system rather than a defect in its design. Furthermore, foreseeability of harm is a key component of the test in negligence, and applying it to AI is challenging. Hallucinations occur (as set out above), and the learning process often produces undesired outcomes which are near impossible to foresee. Therefore, even if the first hurdle of identifying a legal entity to pursue is overcome, litigants face the further challenge of establishing that a breach has actually occurred.

Evidential issues:

In most cases, the party bringing a claim through the UK courts bears the burden of proof: they must use evidence to prove their case on the balance of probabilities. Evidence can be collected personally, e.g. by keeping copies of correspondence, photographs and a timeline of events, or through witness evidence at trial. Alternatively, there is a duty of disclosure within litigation, allowing access to documents in the opponent’s possession which may assist the claimant’s case. All of these documents are relatively straightforward to obtain.

 

AI systems, however, operate as “black boxes”. This term refers to AI systems whose internal decision-making processes are not transparent or easily understandable, even to their creators; the complexity and multitude of parameters make it difficult to trace and understand how specific inputs lead to particular outputs. Without that clear chain, the lack of transparency makes it difficult for claimants to gather the evidence they need to prove that a breach occurred.
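
One practical mitigation sometimes suggested is for operators to keep a contemporaneous record of each automated decision. The following Python sketch (the schema, field names and file name are hypothetical) illustrates the idea of logging the inputs, model version and output of each decision so that a traceable record exists if a decision is later challenged:

    # Minimal illustrative sketch (hypothetical schema): recording each automated
    # decision with its inputs, model version and output to create an audit trail.
    import datetime
    import hashlib
    import json

    def log_decision(inputs: dict, output: str, model_version: str,
                     path: str = "decisions.log") -> None:
        record = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model_version": model_version,
            # A hash of the inputs makes later tampering easier to detect.
            "inputs_hash": hashlib.sha256(
                json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
            "inputs": inputs,
            "output": output,
        }
        with open(path, "a") as log_file:
            log_file.write(json.dumps(record) + "\n")

    log_decision({"applicant_id": "a-1", "score": 0.42}, "declined", "model-2024-01")

Such a record does not open the black box itself, but it at least preserves who decided what, when, and with which version of the system, which may assist a party trying to evidence a claim.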

Conclusion

This white paper has aimed to set out some of the key risks posed by AI. Whilst not exhaustive, the list of issues above highlights some of the most pressing and imminent difficulties that AI poses in real time.

 

Data protection emerges as a paramount concern. As AI systems inherently rely on vast datasets, the risk of breaches and misuse of personal data is significant. The protection of this data, while ensuring compliance with evolving privacy regulations, presents a formidable challenge for AI developers and users. It is vital that these entities not only adhere to current legal frameworks but also anticipate future regulatory shifts, embedding robust data governance and ethical considerations into the very fabric of AI systems.

 

Intellectual property issues, particularly in the context of AI-generated content and innovations, highlight the necessity for an updated legal framework. The traditional paradigms of authorship and invention are being challenged, necessitating a rethinking of how intellectual property laws are applied in the age of AI. This calls for a balance between encouraging innovation and protecting the rights of creators, be they human or AI.

 

The risk of discrimination, unintentional or otherwise, in AI algorithms is a stark reminder of the inherent biases that can be embedded in these systems. This necessitates a concerted effort towards developing AI in a manner that is fair, transparent, and accountable. Ensuring diversity in AI development teams and datasets, alongside rigorous testing for bias, is crucial in mitigating this risk.

 

Furthermore, the proliferation of AI in the dissemination of information presents the dual challenge of combating misinformation while respecting freedom of expression. AI’s role in amplifying or curbing misinformation is a testament to its power and influence, underscoring the need for ethical guidelines and oversight mechanisms in its application, particularly in sensitive areas such as news dissemination and social media.

 

Finally, and linking the four themes above, the exploration of litigation risks associated with AI brings to light the complexities of apportioning liability and accountability in an AI-driven world. The legal system, grappling with AI’s unique attributes, must evolve to address these new challenges. This involves not only adapting existing laws but also potentially creating new legal frameworks that can accommodate the intricacies of AI technology.

 

While AI presents unprecedented opportunities for advancement and innovation, it also brings with it a spectrum of risks that need to be carefully managed. Addressing these risks requires a collaborative effort that spans industries, disciplines, and borders. It calls for the active involvement of policymakers, legal experts, technologists, ethicists, and the public at large. Whilst the UK continues to consult on what AI regulation will look like, users and developers must be aware of the ever-evolving risks, which will no doubt pose novel and complex legal challenges in the interim.

 

(PLEASE NOTE: NOTHING WITHIN THIS PAPER IS INTENDED TO BE LEGAL ADVICE. THE PARAGRAPHS HEREIN ARE INTENDED ONLY TO RAISE AWARENESS OF A NON-EXHAUSTIVE LIST OF POTENTIAL LEGAL RISKS OF AI, SO THAT READERS CAN TAKE FURTHER ADVICE OR RESEARCH AS NECESSARY.)