Lawyers Fined $5,000 for Citing Fake Cases

The Case
A personal injury claim was brought against Avianca Airlines for damages arising from an incident on a flight in 2019. The airline disputed the claim, requesting the matter be dismissed due to the expiry of the limitation period. In their response, the Claimant’s lawyers cited several precedent-setting cases in support of their position. However, the cases, and the precedents they supposedly set, were fabrications generated by ChatGPT in response to the lawyers’ queries. The Claimant’s case was dismissed (seemingly not because of the lawyers’ conduct, but because the limitation period had expired), and the lawyers were ordered to attend a sanctions hearing, where they were ordered to pay $5,000.
Key Issues
This incident demonstrates the risks of relying on information produced by AI models: it can be inaccurate, misleading, or, as in this case, entirely fabricated. The lawyers’ conduct breached the stringent ethical and regulatory obligations to which lawyers are subject; the Judge commented that the lawyers had “consciously avoided” signs that the cases were fake, finding that they had acted in bad faith and misled the court, which justified the heavy sanction.
Although the lawyers’ conduct did not affect the outcome of their client’s case, it is easy to envisage a situation in which a lawyer, with a trial in the balance, uses ChatGPT in the same way, to far more damaging effect. If a client lost their case as a direct result of such conduct, the regulatory sanctions would no doubt be more severe, and the client could also seek redress by way of a negligence claim.