
Voice actors seeing an increasing threat from AI

Voice actors are the latest creatives to feel the heat from artificial intelligence (AI).

According to a Motherboard report this week, actors who provide their voices for content such as ads, games, and animations are increasingly being asked by clients to sign contracts that hand over the rights to their voice so that AI can be used to create a synthetic version of it.

This leaves them in an awkward position: refuse the clause and they could well lose the work; accept it and an AI version of their voice could end up handling future projects.

One voice actor expressed concerns to Motherboard that a client would be able to use the technology to “squeeze more performances out of me” without offering any extra compensation.

Another noted that, at present, if they have an issue with a particular line while in the recording booth, they can flag it with the director and find a solution there and then. AI technology, however, means that sweeping edits, including the insertion of entirely new sentences, could take place later without the voice actor ever being told.

Contracts that give clients the right to synthesize an actor’s voice are now “very prevalent,” Tim Friedlander, president and founder of the National Association of Voice Actors (NAVA), told Motherboard.

Friedlander said the language in the contracts can be “confusing and ambiguous,” meaning the actor might sign away their rights without even realizing it.

Worryingly, clients are informing some actors that they won’t be considered for a job if they refuse to accept the clause.

The situation is deemed so serious for the industry that NAVA has issued advice for voice actors, telling them never to grant synthesis rights to a client and to contact their union or an attorney if they suspect the contract is trying to take their rights.

“Long story short, any contract that allows a producer to use your voice forever in all known media (and any new media developed in the future) across the universe is one we want to avoid,” NAVA says on its website.

With AI gaining greater prominence and the technology improving all the time, it's hard to see how industries such as voice acting will escape its effects.

One solution that has been floated is to build a licensing system into contracts so that an actor is paid each time a synthesized version of their voice is used. But rates for such usage would almost certainly be low, making the arrangement unlikely to be accepted by those currently able to make a living from voice work.
