The Academy Awards have new film rules. AI is now okay for the Oscars

Image: Robots touching an Oscar award. The Academy / Digital Trends

In 2024, Hollywood was roiled by protests led by the SAG-AFTRA union, which fought for fair control over its members' physical and voice likenesses in the age of AI. A deal was inked late last year to ensure that artists are fairly compensated, but the underlying current was obvious.

AI in films is here to stay. 


If there was any lingering doubt about the future of AI in Hollywood, the Academy of Motion Picture Arts and Sciences has just confirmed it. The body behind the prestigious Oscars honors says it is okay with the usage of generative AI in films. 

“With regard to Generative Artificial Intelligence and other digital tools used in the making of the film, the tools neither help nor harm the chances of achieving a nomination,” the body said. In a nutshell, the final product is what matters.

AI is already mainstream in filmmaking

Note the specific phrase "generative AI," as distinct from the colloquial catch-all "AI." Hollywood has been using AI tools for a while now. Filmmakers rely on tools such as Axle AI for tasks like face recognition, scene detection, and transcription.

Magisto relies on Emotion Sense technology for video editing. Then there is AI software such as Strada AI, which assists with file organization and remote editing. DJI's AI-powered autofocus system has also been used for tighter focus locking in projects such as Alex Garland's Civil War.

Twelve Labs offers a powerful tool for scene identification, while Luma AI helps with scene rendering in 3D space. These are just software tools that deploy AI for technical tasks. More importantly, these tools are not necessarily generating the fundamental content that defines a film, such as visual scenery and voices. 

Generate with AI, fight for the Oscars?

Generative AI is a subset of AI tools that creates content. Think of a chatbot like Gemini or ChatGPT writing a whole script for you. Google’s Imagen or Midjourney making images from a text prompt. Or next-gen tools such as OpenAI’s Sora or Google’s Veo creating photorealistic or cinematic clips.

That's where the problems begin. Every AI-generated video can mean one fewer job for a human artist, or several. The same applies to voice generation and dubbing, both of which can now be produced with an eerily convincing human likeness.

There is already plenty of precedent for that. Marvel drew heavy backlash for using AI visuals in the opening credits of its Secret Invasion TV show. The Runway AI engine was deployed in the blockbuster Everything Everywhere All at Once.

But just how far can the input of generative AI go before it is flagged or disqualified from the Oscars race? Well, there is no hard rule on it, and the language used by the Academy is also pretty vague.

How far is too far for the Oscars?

“The Academy and each branch will judge the achievement, taking into account the degree to which a human was at the heart of the creative authorship when choosing which movie to award,” says the institution. 

In a nutshell, it's up to the human voters to decide the artistic merit of a film. That also means personal biases about the role of generative AI content in a film will inevitably seep into the voting process.

But hey, multiple Academy Award winner James Cameron now serves on the board of directors of artificial intelligence (AI) firm Stability AI. One of the biggest names in the generative AI race, the company is also at the center of blockbuster copyright lawsuits brought by Getty Images and human artists.

Meanwhile, the AI juggernaut in the entertainment industry shows no sign of slowing. The use of generative AI tools has also increased in the games industry after the landmark union protests last year, and the likes of Microsoft are even building tools to put AI-generated assets into games.

Would you want to play an AI-created game? That's up for debate. Should the Academy keep the art of filmmaking pristine, untouched by a tool notorious for unfair and unethical use of human content in its training? That fate, it seems, has been sealed.

Nadeem Sarwar