OpenAI Faces Defamation Claim

The Case

OpenAI has recently been sued for defamation in the United States, adding another piece to the puzzle of artificial intelligence in litigation. The case, filed by Mark Walters, a radio host in Georgia, is the first of its kind, and it raises important questions about the legal responsibilities of AI developers and the risks posed by AI systems.

The claim arises from a statement made by ChatGPT in response to a request from a journalist named Fred Riehl. Riehl asked ChatGPT to summarise a case he had been researching, and in response it falsely claimed that Walters had been accused of defrauding and embezzling funds from a non-profit organisation.

In response to two further requests from Riehl, ChatGPT provided an extract of the claim against Walters, and then its full text. When Riehl contacted the claimants in that case, it became clear that the claim against Walters and the text provided were entirely fabricated; Walters had never been accused of these crimes – he was not even a party to the underlying claim. The case was filed on 5 June 2023, with Walters seeking unspecified monetary damages from OpenAI.

Key Issues

Firstly, this case exemplifies how problematic large language models can be. These systems, while capable of generating useful and creative content, cannot reliably distinguish fact from fiction. They can invent dates, facts and figures, which can mislead users and, in some cases, cause harm. Here, ChatGPT provided a user with completely fabricated information about a real legal case on three separate occasions. Had Riehl published the information he received, it would undoubtedly have caused significant damage to his and his publisher's journalistic reputations, whilst also damaging the reputation of Walters, a public figure.

Riehl sought to validate the information against more reliable sources before taking any action, but it is entirely foreseeable that a less vigilant user would place total reliance on AI responses when making important decisions about investments, legal matters or contractual negotiations. OpenAI does include a disclaimer on ChatGPT's homepage warning that the system “may occasionally generate incorrect information,” but the company also presents ChatGPT as a source of reliable data. This discrepancy raises questions about the ethical responsibilities of AI developers and the need for clearer guidelines and regulation in the field.

 

Secondly, this case tests the boundaries of defamation law in the context of AI. In the UK, defamation has evolved through the common law. Although the tort can be engaged in various forms – the familiar spoken word or written text, or more niche examples such as art, songs or plays – the law to date has had one thing in common: the defamatory statement is ultimately made by a human. The concept of an AI model creating a defamatory statement poses several novel questions: are such statements actionable under the current law? If so, where does liability fall?

Should this case proceed to trial, the outcome could have far-reaching implications for the future of AI and its legal status. It could set a precedent for how AI developers are held accountable for the actions of their creations, and could influence the development of regulations and standards in the AI industry. As AI continues to evolve and become more integrated into our daily lives, it is crucial that we continue to scrutinise and address these legal and ethical challenges.

Discussion Questions

Legal Responsibility

Should AI developers be held legally responsible for the false information generated by their AI systems? If so, to what extent?

Regulation and Oversight

What kind of regulations or oversight might be necessary to prevent AI systems from generating harmful false information? How could these be implemented without stifling innovation?

Ethical Considerations

What ethical considerations arise from the use of AI systems that can generate false information? How can these be addressed?

User Awareness

How can users be made more aware of the potential for AI systems to generate false information? What role should AI developers play in this?

AI and the Law

How might the law need to evolve to address the unique challenges posed by AI? Are existing laws sufficient, or are new laws needed?

Future Implications

What might this case mean for the AI industry in the longer term? Could its outcome set a precedent for how AI-generated content is treated in future litigation?

AI Transparency

How important is transparency in AI systems in preventing such issues? How can AI systems be made more transparent?

AI Literacy

How can we improve AI literacy among the general public to ensure they understand the potential risks and limitations of AI systems?