
OpenAI’s latest Sora video shows an elephant made of leaves

OpenAI left a lot of jaws on the floor last month when it shared the first footage made by Sora, its AI-powered text-to-video generator.

While not perfect, the quality was extraordinary and left many wondering what kind of transformational impact such technology will have on the creative industries, including Hollywood.

OpenAI has yet to release Sora to the public — that’s expected to happen later this year — but the company is happy to continue impressing everyone by dropping regular Sora-generated videos onto its social media feeds.

The latest one to land looks like a clip from a fantasy movie and was generated from the simple text prompt: “An elephant made of leaves running in the jungle.”

"OpenAI keeps dropping more insane Sora videos. These are 100% AI generated. 9 reality bending videos. 1. Elephant made out of leaves. pic.twitter.com/tPsHNGbFPS" — Linus Ekenstam (@LinusEkenstam), March 18, 2024

OpenAI said that Sora did all the work and that the video output was not modified in any way.

Of course, any video creation tool worth its salt has to be adept at cat videos. Sora passed with flying colors when fed with the text prompt: “An adorable kitten pirate riding a robot vacuum around the house.”

Asked to create “Niagara Falls with colorful paint,” Sora came up with this extraordinary footage.

And check out this amazing clip prompted by: “POV video of a bee as it dives through a beautiful field of flowers.”

"The OpenAI team dropped more wild Sora videos. 100% AI (minus sound) 🤯. 9 new ones: 1. POV of Bee. pic.twitter.com/RjjSm6kcEB" — Min Choi (@minchoi), March 14, 2024

Sora can create videos up to a minute long "while maintaining visual quality and adherence to the user's prompt," OpenAI said when it unveiled the tool last month. The Microsoft-backed startup, which created a stir last year with its AI-powered ChatGPT chatbot, said it has decided to share its research progress with Sora early "to learn from feedback and give the public a sense of what AI capabilities are on the horizon."

It also said that it used publicly available data and licensed data to train Sora. The issue of how generative AI models are trained is a controversial one, with writers and artists demanding compensation in instances where their work is used by AI companies such as OpenAI. A number of lawsuits brought by creators are already working their way through the courts, prompting AI firms to seek licensing deals with media giants for trouble-free AI training.

Trevor Mogg
Contributing Editor