
Lawyer says sorry for fake court citations created by ChatGPT

There has been much talk in recent months about how the new wave of AI-powered chatbots, ChatGPT among them, could upend numerous industries, including the legal profession.

However, judging by what recently happened in a case in New York City, it seems like it could be a while before highly trained lawyers are swept aside by the technology.


The bizarre episode began when Roberto Mata sued a Colombian airline after claiming that he suffered an injury on a flight to New York City.

The airline, Avianca, asked the judge to dismiss the case, so Mata’s legal team put together a brief citing half a dozen supposedly similar cases in an effort to persuade the judge to let their client’s case proceed, the New York Times reported.

The problem was that the airline’s lawyers and the judge were unable to find any evidence of the cases mentioned in the brief. Why? Because ChatGPT had made them all up.

The brief’s creator, Steven A. Schwartz — a highly experienced lawyer in the firm Levidow, Levidow & Oberman — admitted in an affidavit that he’d used OpenAI’s much-celebrated ChatGPT chatbot to search for similar cases, but said that it had “revealed itself to be unreliable.”

Schwartz told the judge he had not used ChatGPT before and “therefore was unaware of the possibility that its content could be false.”

When creating the brief, Schwartz even asked ChatGPT to confirm that the cases really happened. The ever-helpful chatbot replied in the affirmative, saying that information about them could be found on “reputable legal databases.”

The lawyer at the center of the storm said he “greatly regrets” using ChatGPT to create the brief and insisted he would “never do so in the future without absolute verification of its authenticity.”

Describing the legal submission as full of “bogus judicial decisions, with bogus quotes and bogus internal citations,” and calling the situation unprecedented, Judge P. Kevin Castel ordered a hearing for early next month to consider possible penalties.

While impressive in the way they produce flowing, high-quality text, ChatGPT and other chatbots like it are also known to make things up and present them as real — something Schwartz has learned to his cost. The phenomenon is known as “hallucinating,” and it’s one of the biggest challenges facing the developers behind the chatbots as they work to iron out this very problematic crease.

In another recent example of a generative AI tool hallucinating, an Australian mayor accused ChatGPT of creating lies about him, including that he was jailed for bribery while working for a bank more than a decade ago.

The mayor, Brian Hood, was actually a whistleblower in the case and was never charged with a crime, so he was rather upset when people began informing him about the chatbot’s rewriting of history.

Trevor Mogg
Contributing Editor