
Runway’s latest update is already producing mind-blowing results

Vision Pro and Runway being used together.
Cosmo Scharf

People are also having a field day with Runway’s video-to-video generation, which was released on September 13. Essentially, the feature allows you to radically transform the visual style of a given video clip using text prompts.

Check out the video below for a mind-altering example of what’s possible.


Runway Gen-3 Alpha just leveled up with Video-to-Video

Now you can transform any video's style using just text prompts at amazing quality.

10 wild examples of what's possible: pic.twitter.com/onh12zCzpI

— Min Choi (@minchoi) September 15, 2024

AI enthusiasts are also producing stunning visual effects that can be displayed on Apple’s Vision Pro headset, giving us a potential hint at what developers leveraging the recently announced API will be able to accomplish.

Runway co-founder and CEO Cristóbal Valenzuela posted a brief clip to X (formerly Twitter) on Monday showing off the combined capabilities of Gen-3 and Apple Vision Pro.

Early experiments rendering Gen-3 on top of the Apple Vision Pro, made by @Nymarius_ pic.twitter.com/SiUNR0vX0G

— Cristóbal Valenzuela (@c_valenzuelab) September 15, 2024

The video depicts an open-plan office space with a generated overlay that makes the room appear to be deep jungle ruins. Some users remained unconvinced of the video’s veracity, but according to the post, it was generated by someone who actually works at Runway.

Twitter user and content creator Cosmo Scharf showed off similar effects in their post, and provided additional visual evidence to back up their claims.

Gen-3 Alpha video to video is remarkable!

Here's a test from the Vision Pro.

One day this will run in real-time on mixed reality glasses and your world will never look the same. #VisionHack pic.twitter.com/GTgartg5ry

— Cosmo Scharf ᯅ (@cosmoscharf) September 15, 2024

Runway announced Monday the release of a new API that will enable developers to add video generation capabilities to a variety of devices and apps, though there are reportedly a few restrictions on who can actually access it. For one, it’s only in limited release for the moment, but you can sign up for the waitlist here. You’ll also need to be either a Build or Enterprise plan subscriber. Even once you’re granted access, you’ll only be able to use the Gen-3 Alpha Turbo model, which is a bit less capable than the company’s flagship Gen-3 Alpha.
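As a rough illustration of what calling such an API might involve, the sketch below assembles a generation request. The endpoint, field names, and `gen3a_turbo` model identifier here are illustrative assumptions, not Runway's documented schema; consult the actual API reference before writing real integration code.

```python
# Hypothetical sketch of assembling a video-generation request.
# The endpoint and JSON field names are assumptions for illustration only.
import json

API_ENDPOINT = "https://api.example.com/v1/generations"  # placeholder URL

def build_generation_request(prompt: str, duration_s: int = 5) -> dict:
    """Assemble the JSON payload for a hypothetical generation job."""
    return {
        "model": "gen3a_turbo",  # the Turbo model is reportedly the only one exposed
        "prompt": prompt,
        "duration": duration_s,
    }

payload = build_generation_request("deep jungle ruins overgrowing an office")
print(json.dumps(payload, indent=2))
```

In practice, the client would POST this payload with an API key issued after waitlist approval, then poll for the finished clip.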

The company plans to charge a penny per generation credit to use the service. For context, a single second of video generation costs five credits, so developers will effectively be paying 5 cents per second of video. Devs will also be required to “prominently display” a “Powered by Runway” banner that links back to the company’s website in any interface that calls on the API.
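The pricing arithmetic above can be captured in a few lines, using only the figures reported in this article ($0.01 per credit, 5 credits per second of video):

```python
# Back-of-the-envelope cost estimate based on the reported pricing:
# $0.01 per credit, 5 credits per second of generated video.

CREDIT_PRICE_USD = 0.01   # one cent per credit
CREDITS_PER_SECOND = 5    # reported cost of one second of video

def generation_cost(seconds: float) -> float:
    """Return the dollar cost of generating `seconds` of video."""
    return seconds * CREDITS_PER_SECOND * CREDIT_PRICE_USD

# A 10-second clip therefore costs 50 cents:
print(f"${generation_cost(10):.2f}")  # -> $0.50
```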

While the commercial video generation space grows increasingly crowded — with Adobe’s Firefly, OpenAI’s upcoming Sora, Canva’s AI video generator, Kuaishou Technology’s Kling, and Minimax’s Video-01, to name but a handful — Runway is setting itself apart by being one of the first to offer its models as an API. Whether that will be enough to recoup the company’s exorbitant training costs and lead it to profitability remains to be seen.


Andrew Tarantola
Former Digital Trends Contributor
Andrew Tarantola is a journalist with more than a decade reporting on emerging technologies ranging from robotics and machine…