
This is what happens when A.I. tries to reimagine Stanley Kubrick’s films

Thanks to his classic sci-fi movie 2001: A Space Odyssey, filmmaker Stanley Kubrick helped introduce the general public to the topic of artificial intelligence. Almost 50 years on, that movie’s HAL 9000 character continues to be one of the most enduring representations of A.I. in entertainment — and has helped inform everything from the design of smart assistants like Siri and Google Assistant to debates about the perils of machine intelligence. But what would modern-day A.I. make of Kubrick’s work?

That slightly offbeat premise is the basis of an intriguing project — called Neural Kubrick — from researchers at the U.K.’s Interactive Architecture Lab. The idea behind the project is to look at how artificial intelligence can impact filmmaking, an issue that speaks to the larger question of whether A.I. can be considered creative.


The exhibition, created by researchers Anirudhan Iyengar, Ioulia Marouda, and Hesham Hattab, involves a multi-screen installation and deep neural networks that reinterpret scenes from 2001 and two other celebrated Kubrick movies: A Clockwork Orange and The Shining.

“Three machine learning algorithms take up the most significant roles in [our] A.I. film crew — that of art director, film editor, and director of photography,” Iyengar told Digital Trends. “There is a Generative Adversarial Network (GAN) that reimagines new cinematic compositions, based on the features it interprets from the input dataset of movie frames. There is a Convolutional Neural Network (CNN) that classifies visual similarities between inputted scenes and a dataset of hundreds of different movies, used to mimic the kind of decision making a film editor makes. And there is a Recurrent Neural Network (RNN), that analyzes the camera path coordinates of a cinematic sequence, and generates new camera paths to reshoot the original input sequence in virtual space — mimicking the role of a director of photography.”
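The team’s actual models aren’t published here, but the “film editor” role Iyengar describes — a CNN judging visual similarity between scenes — ultimately comes down to comparing feature vectors extracted from frames. A minimal sketch of that matching step, in plain Python (the scene names and toy vectors are hypothetical stand-ins for real CNN embeddings, not the Lab’s code):

```python
import math

def cosine_similarity(a, b):
    # Compare two feature vectors (e.g., CNN embeddings of movie frames).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_similar_scene(query_features, library):
    # library maps scene names to feature vectors; return the closest match,
    # mimicking an editor picking the scene that "looks most like" the query.
    return max(library, key=lambda name: cosine_similarity(query_features, library[name]))

# Toy embeddings standing in for features of scenes from other films.
library = {
    "corridor_tracking_shot": [0.9, 0.1, 0.3],
    "static_wide_shot":       [0.1, 0.8, 0.2],
}
print(most_similar_scene([0.85, 0.15, 0.25], library))  # corridor_tracking_shot
```

In a real pipeline, the vectors would come from a pretrained convolutional network’s penultimate layer rather than being hand-written, but the decision rule — rank candidate scenes by feature similarity — is the same.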

The results of the Neural Kubrick experiment can be seen on the project’s website. It’s conceptual stuff, but it’s interesting because of the questions it poses about A.I. For instance, who is the author of a piece of work designed by an A.I.: The algorithm or its original programmer? Does any trace of Kubrick’s (very human) mastery of cinema remain when you’re trying to train a machine to replicate some of his decisions?

“It was intriguing for us to compare what meaning the machine makes of the given scene when all it interprets is features, patterns, zeroes, and ones,” Marouda told us.

The scenes generated by Neural Kubrick aren’t exactly entertaining in the classic sense, but they’re definitely interesting. At the very least, it’s difficult to imagine that Kubrick — a filmmaker known for pushing the technological limits of filmmaking — wouldn’t have been intrigued by the results!

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…