We’re not even two weeks into 2024 and we’ve already got our first big gaming industry trend of the year: the rise of generative AI.
This week has seen a tidal wave of news, as “AI” was the big buzzword coming out of this year’s CES showcase. Nvidia announced that it had landed some major partners for its Ace microservice, which is capable of creating fully voiced AI characters in games. The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) actors’ union then cut an eyebrow-raising deal with Replica Studios that would allow the service to generate voices based on consenting actors. Other moments were more bizarre, like when an AI-generated Mario appeared on a screen on the show floor and told attendees how to buy video games from Target.
Rather than being met with the kind of bright-eyed wonder you get from a tech show like CES, announcements like this are sparking confusion and outrage. Few of these announcements went off without some sort of PR crisis that required clarifying statements from the companies involved. If generative AI really is the future of video games, CES made it clear that companies need to get all their ducks in a row before rolling it out.
AI got off on the wrong foot at CES due to a critical misstep from Nvidia. The company used the show to unveil an expansion of its Ace microservice, a powerful suite of tools capable of generating AI non-playable characters (NPCs). An impressive tech demo showed a player talking to a computer-created character in a bar and getting organic responses to their spoken prompts. It’s the kind of demo that should have gotten people talking. And it did … for all the wrong reasons.
When Digital Trends saw the demo ahead of the show, we asked Nvidia to explain what data sets its tools are trained on. Rather than going into detail, Nvidia noted that there was “no simple answer” due to how many data sets different parts of its pipeline were built on. That ambiguous answer would start a firestorm when it hit social media, with concerned players speculating that the tool was trained on copyrighted material that Nvidia doesn’t own.
After nearly two days of backlash, Nvidia would reach out to Digital Trends with the simple answer it originally declined to give. It now maintains that Ace is trained entirely on data it has the rights to, making it “commercially safe” (we reached out to Convai, a company behind part of Ace’s tech pipeline, about its own data usage, but it did not respond in time for this story). That explanation might alleviate some concerns about “stolen” training data, but data ambiguity isn’t the only worry players have raised. While some have criticized the opaque sourcing, others have voiced more general concerns about how the tech could put real designers and actors out of work. The logistical answer was simple in the end; the ethical one isn’t.
That would become a running theme at the show. Perhaps the biggest controversy came when SAG-AFTRA announced an agreement with Replica Studios, an AI platform that creates digital voices. The actors’ guild put out a statement saying that it had penned a deal that would allow Replica to train on data from consenting members of the guild. SAG President Fran Drescher called the tool “a great example of AI being done right,” while the guild noted that the agreement had been approved by affected members of the guild.
That last part is up for debate. Several members of the guild expressed confusion about the deal on social media. “Excuse me? With all due respect … you state in the article, ‘approved by affected members of the union’s voiceover performer community,’” tweeted voice acting icon Steve Blum. “Nobody in our community approved this that I know of. Games are the bulk of my livelihood and have been for years. Who are you referring to?”
Digital Trends reached out to Replica Studios to get clarification on what training data it uses and whether or not it was using copyrighted material prior to its deal with SAG-AFTRA. We did not receive a response at the time of publishing and will update this article when we hear back.
Regardless of whether or not the deal is ethical, Blum’s sentiment was echoed across social media following the news, making it clear that the guild’s members weren’t on the same page as SAG implied. It was another moment that added to the overall Wild West feeling on display at CES this year.
While those two stories dominated the news cycle coming out of CES, they weren’t the only AI gaming announcements on tap — and the rest of the crop was much weirder. When MSI demoed its MEG 321URX QD-OLED monitor at the show, it came with a head-scratching feature. According to Tom’s Guide, the display uses an AI accelerator that analyzes the League of Legends mini-map and marks where enemies are. While there may be some positive accessibility benefits that come with it, it’s hard not to see MSI’s monitor as a sanctioned way to cheat in a very competitive game. That seems like an esports disaster waiting to happen.
“Disaster” was the running theme of the show when it came to AI. Sometimes those disasters came in the form of harmless blunders. The funniest moment of the show came when an AI hologram of Mario with an unsettling voice appeared on the show floor. Attendees grabbed videos of themselves talking to the digital plumber, who explained how to buy games at Target. Even weirder was that the tech was attached to a screen sponsored by AARP. Proto Hologram, the company behind the AI, and AARP would later clarify that Nintendo was not involved in the “inadvertent” proof of concept showing.
So Mario was at #CES
But uh… who approved this abomination? 💀 pic.twitter.com/diG3axCJIG
— Greggory (@ProbChild_) January 10, 2024
That somehow wasn’t the weirdest AI moment, either. That honor would go to AI Shark, a revitalized version of the classic GameShark cheat tool rebuilt as AI software. The company would quietly announce its new initiative, and a partnership with audio company Altec Lansing, at the show with some eye-catching marketing. A press release would claim that “the official launch is planned to coincide with the Nintendo Switch 2 in September 2024.” A Switch successor has not yet been confirmed by Nintendo.
AI Shark would backtrack on that claim throughout the day, first dialing its forecast back to fall 2024 before admitting that Nintendo hadn’t confirmed the release plan at all. It was a bizarre move with three plausible explanations: AI Shark accidentally leaked confidential plans, it was simply guessing based on rumors, or it made the claim up entirely to get eyes on its product with a juicy news story.
No matter which version of that is true, it’s a chaotic move that perfectly encapsulates AI’s entire presence at CES. The video game industry was a disorganized mess at this year’s show. AI news was doled out in rapid succession over the multiday trade show, but in a hasty manner that left companies looking overeager to jump on a bandwagon they might not be prepared to ride yet. It was difficult to separate what was a legitimate product using licensed data from a scam impersonating reputable companies. You couldn’t ask for a more fitting metaphor as to why people are so concerned about AI ethics than hologram Mario urging people to shop at Target.
At a certain point, it doesn’t matter if a company like Replica Studios has consent to replicate human voices. The existential worry is that we’re building a future devoid of humanity, where product trumps art. If you’re going to fully automate an NPC rather than have an artist handcraft it with narrative intent, then why put it in the game at all? Something like Nvidia Ace reads less as a helpful tool for developers and more as a way to fatten games up with meaningless content that doesn’t serve a purpose beyond hollow engagement. The fact that companies like AI Shark and Proto seem ready to mislead players to make that happen makes it all the more troubling.
If CES offered a glimpse at where generative AI is taking us next, you can drop me off at the next exit.