AI News Stories

The Verge

“OpenAI Faces Defamation Claim”

OpenAI has recently been sued for defamation in the United States, adding another piece to the puzzle of artificial intelligence in litigation. The case was filed by Mark Walters, a radio host in Georgia, after ChatGPT allegedly generated false claims that he had been accused of embezzling funds. It is the first claim of its kind, and it raises important questions about the legal responsibilities of AI developers and the potential risks posed by AI systems.

“ChatGPT Essays - Academic Integrity vs Adapting to AI”

Recent events at Texas A&M University highlight the potential problems caused by using AI in educational settings. A tutor at the university accused his students of using ChatGPT to write their essays: he pasted the student essays into ChatGPT and asked it to identify whether it had written the assignments, failing those it identified as matches (despite ChatGPT not being an AI detector). The episode does not appear to have had any drastic implications for the students or the tutor, but it does illustrate potential areas where disputes could arise.

The Washington Post

Forbes

Lawyers Fined $5,000 for Citing Fake Cases

Lawyers in New York have been fined for citing fake cases in court, all of which were generated by ChatGPT. A personal injury claim was brought against Avianca Airlines for damages arising from an incident on a flight in 2019. The airline disputed the claim, requesting that the matter be dismissed due to the expiry of the limitation period. In their response, the Claimant’s lawyers cited several precedent-setting cases in support of their position. However, the cases and their precedents were fabrications generated by ChatGPT in response to the lawyers’ research queries. The Claimant’s case was dismissed (seemingly because the limitation period had expired rather than because of the lawyers’ conduct), and the lawyers were required to attend a sanctions hearing, at which they were ordered to pay $5,000 in fines.

GitHub Copilot Intellectual Property Litigation

GitHub Copilot, a product launched in June 2021 by GitHub and OpenAI, is a tool that assists software developers by suggesting or completing blocks of code using AI. The product, which charges users for its services, has been accused of violating various licences by reproducing copyrighted material. Copilot’s training data includes vast numbers of publicly accessible code repositories on GitHub, many of which are subject to open-source licences. These licences often require attribution to the original authors, which Copilot allegedly omits. The claim suggests that Copilot’s operation results in software piracy on an unprecedented scale.

The Verge 

Court Documents

Stability AI in Legal Battle Over AI and Copyright Infringement

The claim filed in January 2023 against Stability AI Ltd., Stability AI, Inc., Midjourney, Inc., and DeviantArt, Inc. further highlights the capacity for AI to disrupt conventional intellectual property disputes. The claimants allege the unauthorised use of copyrighted images to train AI image generators, infringing the rights of millions of artists. If successful, the case could pave the way for group litigation on an unprecedented scale.

Clearview AI Faces €20 Million Penalty in France

In late 2022, Clearview AI, a company that uses artificial intelligence to provide facial recognition services, was hit with a €20 million penalty by the French data protection authority, CNIL. The penalty followed a formal notice from CNIL, which Clearview AI failed to address. The company was ordered to stop collecting and using data on individuals in France without a legal basis and to delete the data already collected.

The French data protection authority (CNIL) 

BBC News

AI-Generated Child Abuse Images: A Disturbing New Frontier in AI Misuse

The Internet Watch Foundation (IWF) has recently confirmed the emergence of a deeply concerning trend: the creation and online sharing of AI-generated child sexual abuse imagery. This development marks a new and disturbing misuse of artificial intelligence technology, raising serious questions about the ethical implications of AI and the need for stringent regulatory measures.

Uber Faces AI Facial Recognition Discrimination Claim

Uber Eats is at the centre of an Employment Tribunal case in which the risks of AI feature prominently. The Claimant was a courier for Uber Eats who was deactivated from the platform after Uber’s facial recognition system failed to verify his identity. The system uses AI-based facial recognition to confirm a courier’s identity before a shift begins; the Claimant argues that the system’s failure to recognise him was due to his race, and that his subsequent deactivation was discriminatory. Uber Eats argue that the deactivation was not due to racial discrimination but resulted from the Claimant’s failure to comply with the company’s delivery standards, and state that the system is not capable of racial bias.

Employment Tribunals

The Verge

Getty Images vs. Stability AI: Another Case of AI and Copyright Infringement

Earlier this year, Getty Images filed a lawsuit in the US against Stability AI. The case revolves around Stability AI’s alleged infringement of Getty Images’ intellectual property rights by copying more than 12 million photographs from Getty Images’ collection, along with the associated captions and metadata, without permission or compensation. Getty has reportedly issued a similar claim in the UK, seeking to prevent Stability AI from selling its products there.

Schumacher AI Interview

Die Aktuelle, a German tabloid magazine, has left itself open to legal action after publishing an AI-generated “interview” with Michael Schumacher, presented as the first interview given by the former F1 driver since 2013. The magazine ran a front-cover spread promising an exclusive interview with Schumacher, which was later revealed to have been fabricated using AI.

The Guardian

The Verge

AI Music Clones Chart Stars

The controversial use of AI systems to create new artistic content has spilled over into the music industry. There is a growing trend of AI-generated tracks that mimic famous musicians’ voices, whether covering other artists’ songs or, in some cases, performing entirely new ones. Music industry power players are already getting AI-generated music pulled from streaming services, citing copyright infringement. Yet the legal argument is far from straightforward.

Musk vs. Microsoft

Elon Musk has threatened to take legal action against Microsoft over allegations that it “trained illegally using Twitter data.” The threat appears to be linked to OpenAI’s use of Twitter data to train the large language models behind products like ChatGPT. While OpenAI is not Microsoft, it recently received a significant investment from the company, which is integrating OpenAI’s technology into tools like Bing, Edge, and Microsoft 365.

CNBC

Court Documents

Facial Recognition AI and Wrongful Arrest

In 2021, Robert Julian-Borchak Williams brought a claim for wrongful arrest and imprisonment due to the misuse of facial recognition technology by the Detroit Police Department. The case highlights the significant flaws and potential dangers of relying on facial recognition systems, especially when they are used as the primary evidence for an arrest.

Meta vs. Voyager Labs

Meta Platforms has initiated legal action against Voyager Labs, a company that promotes itself as “a world leader in advanced AI-based investigation solutions.” The core of the dispute is the allegation that Voyager Labs created tens of thousands of fake Facebook accounts to scrape user data, subsequently offering surveillance services to its clientele. Voyager Labs’ services involve analysing vast numbers of social media posts to make assertions about individuals, including, for law enforcement purposes, predictions of which individuals might commit crimes in the future.

The Verge

Thomson Reuters

AI-Powered Investments: Tyndaris v VWM Case

In 2017, Tyndaris SAM (an investment manager) contracted with MMWWVWM Limited (VWM) to make investment decisions for VWM using an AI system. VWM wanted a fund that would operate without human intervention, eliminating emotional or biased decision-making. The K1 supercomputer was designed to analyse real-time information from news channels, social media, and other sources to predict market trends, and to use this information to make the most financially sound investments. Despite extensive testing, the system led to significant losses for VWM, amounting to approximately US $22 million, and VWM demanded a suspension of trading. Tyndaris then claimed around US $3 million from VWM in unpaid fees, with VWM counterclaiming its losses and alleging misrepresentations by Tyndaris regarding the K1 supercomputer’s capabilities. Whilst the case settled out of court, it exemplifies the capacity of AI to disrupt standard contractual agreements, and the potential for increased litigation in the future.

Amazon's AI Recruitment Tool Scrapped for Gender Bias

It has been reported that Amazon had to abandon an artificial intelligence recruitment tool after it was found to be biased against female candidates. The system was trained on applications submitted by job applicants over a decade, a significant portion of which came from male candidates; this caused the system to develop a bias favouring male candidates over female ones. Despite efforts to rectify the bias, the project was eventually scrapped. However, for a period, recruiters at Amazon did use the tool for recommendations, but they never relied on it alone for final decisions.

BBC News

Advisory Board

IBM's Watson and the Challenges of AI in Healthcare

IBM’s Watson for Oncology, a machine-learning system designed to recommend cancer treatments, came under scrutiny after it was found to suggest unsafe and incorrect treatments for patients. The software uses artificial intelligence algorithms to recommend cancer treatments tailored to individual patients; it is claimed to base its recommendations on data from real patients, and it is used by over 200 hospitals globally. However, in 2017, it was revealed that the system often recommended treatments that were unsuitable for patients and inconsistent with US national treatment guidelines.

Automated Risk-Scoring for Benefit Claims

The BBC have reported that the Department for Work and Pensions (DWP) is facing calls for greater transparency over its plans to expand the use of artificial intelligence in risk-scoring benefit claims. The DWP aims to use AI to identify potentially fraudulent claims, particularly in the context of Universal Credit. However, campaigners and watchdogs have raised concerns about potential biases, a lack of transparency, and the implications of AI-driven decisions for claimants. The system employs machine learning to analyse historical benefits data and predict the likelihood of a new claim being fraudulent or incorrect. The DWP has disclosed intentions to pilot similar AI models in areas with high overpayment rates, such as undeclared earnings from self-employment and incorrect housing costs.

BBC News