AI enables slain man to address courtroom at killer’s sentencing

In what’s believed to be a world first, artificial intelligence (AI) has allowed a slain man to address his killer at the sentencing hearing.

Christopher Pelkey was shot dead in a road rage incident in Chandler, Arizona, four years ago. Recently, AI was used to create a digital likeness of the victim, which was allowed to deliver a statement at the sentencing hearing, a local news site reported.

The video presentation also included real clips of Pelkey to give those in court a clearer understanding of his personality. Some of these clips were also used to create the AI-generated likeness of Pelkey, which you can see below.

Chris Pelkey died in Nov. 2021 in a road rage shooting.

Recently, Chris’ family created an AI-generated video of him giving his own victim impact statement.

Here is a clip – watch the full story tonight ONLY on @FOX10Phoenix 📺 pic.twitter.com/JIz6bKuNfU

— Nicole Krasean (@NicoleK_Fox10) May 5, 2025

In the video played in court, the AI version of Pelkey says: “To Gabriel Horcasitas, the man who shot me — it is a shame we encountered each other that day in those circumstances. In another life, we probably could’ve been friends.”

He continues: “I believe in forgiveness and in God who forgives. I always have and I still do.”

After watching the video, Judge Todd Lang said: “I love that AI. Thank you for that. I felt like that was genuine, that his obvious forgiveness of Mr. Horcasitas reflects the character I heard about today.” 

This week, the judge sentenced Horcasitas to ten-and-a-half years for Pelkey’s manslaughter.

It was Chris Pelkey’s sister, Stacey, who came up with the idea to use AI to create a likeness of her brother for use in court. She said it was important “not to make Chris say what I was feeling, and to detach and let him speak because he said things that would never come out of my mouth, but that I know would come out of his.”

Ann A. Scott Timmer, Chief Justice of the Arizona Supreme Court, commented that AI has the potential “to create great efficiencies in the justice system and may assist those unschooled in the law to better present their positions. For that reason, we are excited about AI’s potential.”

Timmer added: “But AI can also hinder or even upend justice if inappropriately used. A measured approach is best. Along those lines, the court has formed an AI committee to examine AI use and make recommendations for how best to use it … Those who use AI — including courts — are responsible for its accuracy.”

Indeed, while the use of AI in this way brings a powerful and deeply personal element to court proceedings, it also raises ethical and legal concerns about authenticity, emotional influence, and appropriate application. As a result, courts that choose to allow AI-generated victim statements will likely need to develop guidelines for future cases.

Trevor Mogg
Contributing Editor
Not so many moons ago, Trevor moved from one tea-loving island nation that drives on the left (Britain) to another (Japan)…