NY lawyers fined for using fake ChatGPT cases in legal brief

The clumsy use of ChatGPT has landed a New York City law firm with a $5,000 fine.

Having heard so much about OpenAI’s impressive AI-powered chatbot, lawyer Steven Schwartz decided to use it for research, adding ChatGPT-generated case citations to a legal brief handed to a judge earlier this year. But it soon emerged that the cases had been entirely made up by the chatbot.

U.S. District Judge P. Kevin Castel on Thursday ordered lawyers Steven Schwartz and Peter LoDuca, who took over the case from his colleague Schwartz, and their law firm, Levidow, Levidow & Oberman, to pay a $5,000 fine.

The judge said the lawyers had made “acts of conscious avoidance and false and misleading statements to the court,” adding that they had “abandoned their responsibilities” by submitting the A.I.-written brief before standing by “the fake opinions after judicial orders called their existence into question.”

Castel continued: “Many harms flow from the submission of fake opinions. The opposing party wastes time and money in exposing the deception. The court’s time is taken from other important endeavors.”

The judge added that the lawyers’ action “promotes cynicism about the legal profession and the American judicial system.”

The Manhattan law firm said it “respectfully” disagreed with the court’s opinion, describing it as a “good faith mistake.”

At a related court hearing earlier this month, Schwartz said he wanted to “sincerely apologize” for what had happened, explaining that he thought he was using a search engine and had no idea that the AI tool could produce untruths. He said he “deeply regretted” his actions, adding: “I suffered both professionally and personally [because of] the widespread publicity this issue has generated. I am both embarrassed, humiliated and extremely remorseful.”

The incident was linked to a case taken up by the law firm involving a passenger who sued Colombian airline Avianca after claiming he suffered an injury on a flight to New York City.

Avianca asked the judge to throw the case out, so the passenger’s legal team compiled a brief citing six similar cases in a bid to persuade the judge to let their client’s case proceed. Schwartz found those cases by asking ChatGPT, but he failed to check the authenticity of the results. Avianca’s legal team raised the alarm when it said it couldn’t locate the cases contained in the brief.

In a separate order on Thursday, the judge granted Avianca’s motion to dismiss the suit against it, bringing the whole sorry episode to a close.

ChatGPT and other chatbots like it have gained much attention in recent months due to their ability to converse in a human-like way and skillfully perform a growing range of text-based tasks. But they’re also known to make things up and present them as if they’re real. The problem is so prevalent that there’s even a term for it: “hallucinating.”

Those working on generative AI tools are exploring ways to reduce hallucinations, but until then users are advised to carefully check any “facts” that the chatbots spit out.
