Chatbot-generated Schumacher ‘interview’ leads to editor’s dismissal

A magazine editor has learned the hard way about the ethical limits of using generative AI after she was fired for running an “interview” with F1 motor racing legend Michael Schumacher using quotations that were actually from a chatbot.

Seven-time F1 world champion Schumacher has been out of the public eye since 2013, when he sustained severe head injuries in a skiing accident while on vacation in France.

The German tabloid magazine, Die Aktuelle, showcased the article on a recent front page with a photo of the former motor racing champion and the headline: “Michael Schumacher, The First Interview, World Sensation,” together with a much smaller strapline saying: “It sounds deceptively real.”

It emerged in the article that the quotations had been generated by Character.ai, an AI chatbot similar to OpenAI’s ChatGPT and Google’s Bard, which have gained much attention in recent months for their versatility and their impressive ability to converse in a human-like way.

In Die Aktuelle’s “interview,” Schumacher, or in fact the chatbot, talked about his family life and health.

“My wife and my children were a blessing to me and without them I would not have managed it,” the chatbot, speaking as Schumacher, said. “Naturally they are also very sad, how it has all happened.”

Schumacher’s family intends to take legal action against the publication, according to a BBC report.

The magazine’s publisher, Funke, has apologized for running the article.

“Funke apologizes to the Schumacher family for reporting on Michael Schumacher in the latest issue of Die Aktuelle,” it said in a statement.

“As a result of the publication of this article … Die Aktuelle editor-in-chief Anne Hoffmann, who has been responsible for journalism for the newspaper since 2009, will be relieved of her duties as of today.”

Bianca Pohlmann, managing director of Funke magazines, said in the statement: “This tasteless and misleading article should never have appeared. It in no way corresponds to the standards of journalism that we — and our readers — expect from a publisher like Funke.”

Character.ai, launched in September last year, lets you “chat” with celebrities, historical figures, and fictional characters, or even ones of your own creation.

That may be fine in the privacy of your own home, but taking it a step further and publishing an article based on the chatbot’s responses is clearly a huge risk.

As generative AI continues to improve and edge ever further into our lives, more missteps like this are to be expected, though Die Aktuelle’s blunder may at least prompt publishers to think twice about how they use content created by a chatbot.
