
Truly creative A.I. is just around the corner. Here’s why that’s a big deal

Joe Kennedy, father of the late President John F. Kennedy, once said that, when shoeshine boys start giving you stock tips, the financial bubble is getting too big for its own good.

By that same logic, when Hollywood actors start tweeting about a once-obscure part of artificial intelligence (A.I.), you know that something big is happening, too. That’s exactly what occurred recently when Zach Braff, the actor-director still best known for his performance as J.D. on the medical comedy series Scrubs, recorded himself reading a Scrubs-style monologue written by an A.I.

“What is a hospital?” Braff reads, adopting the thoughtful tone J.D. used to wrap up each episode in the series. “A hospital is a lot like a high school: the most amazing man is dying, and you’re the only one who wants to steal stuff from his dad. Being in a hospital is a lot like being in a sorority. You have greasers and surgeons. And even though it sucks about Doctor Tapioca, not even that’s sad.”

Yes, it’s nonsense, but it’s charming nonsense. Created by Botnik Studios, which recently used the same statistical predictive tools to write an equally bonkers new Harry Potter story, the A.I. mimics the writing style of the show’s real scripts. It sounds right enough to be recognizable but wrong enough to be obviously the work of a silly machine, like the classic anecdote about early MIT machine translation software that translated the Biblical saying “The spirit is willing, but the flesh is weak” into Russian and back again, ending up with “The whisky is strong, but the meat is rotten.”
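Botnik hasn’t published its tools, but the kind of statistical prediction described here can be sketched as a simple word-level Markov chain: learn which words follow which in a source text, then walk those statistics to generate new text. This is a minimal illustrative stand-in, not Botnik’s actual system; the corpus and function names are made up for the example.

```python
import random
from collections import defaultdict

def build_model(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Walk the chain, picking a random observed successor at each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break  # dead end: the word only appeared at the end of the corpus
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "a hospital is a lot like a high school and a hospital is a lot like a sorority"
model = build_model(corpus)
print(generate(model, "a"))
```

Because the model only knows local word-to-word statistics, the output is locally plausible but globally incoherent, which is exactly the “right enough to be recognizable, wrong enough to be silly” quality of the Scrubs monologue.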

As Braff’s publicizing of the Scrubs-bot shows, the topic of computational creativity is very much in right now. Once the domain of a few lonely researchers, trapped on the fringes of computer science and the liberal arts, the question of whether a machine can be creative is everywhere. Alongside Botnik’s attempts at Harry Potter and Scrubs, we’ve recently written about a recurrent neural network (RNN) that took a stab at writing the sixth novel in George R.R. Martin’s A Song of Ice and Fire series, better known to TV fans as Game of Thrones. The RNN was trained for its task by reading and analyzing the roughly 5,000 pages of existing novels in the series.

Larger companies have gotten in on the act, too. Google’s Deep Dream project purposely magnifies some of the recognition errors in the company’s deep learning neural networks to create wonderfully trippy effects.
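Deep Dream’s core trick is gradient ascent on the input rather than the weights: the image itself is nudged toward whatever a unit in the network responds to. The toy below shows that mechanic with a single linear unit standing in for a trained convolutional filter; the real system does this against layers of a trained network such as Inception, and all the numbers here are illustrative.

```python
# One linear "filter": its activation is a weighted sum over the image pixels.
weights = [0.3, -0.7, 0.5, 0.1, -0.2, 0.8, -0.4, 0.6]  # what the filter "looks for"
image = [0.0] * len(weights)                            # flattened starting pixels

def activation(img):
    """The unit's response to the image."""
    return sum(w * x for w, x in zip(weights, img))

before = activation(image)
for _ in range(100):
    # d(activation)/d(pixel_i) = weights[i], so ascend along the weights.
    image = [x + 0.1 * w for x, w in zip(image, weights)]
after = activation(image)
# 'after' is far larger than 'before': the image now exaggerates the
# patterns the unit detects, which is what yields Deep Dream's visuals.
```

In a deep network the gradient is taken through many nonlinear layers, so the amplified patterns are the swirling eyes and animal faces Deep Dream is known for rather than a straight copy of the filter weights.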

(Video: Pouff, “Grocery Trip”)

Right now, we’re at the “laughter” stage of computational creativity for the most part. That doesn’t have to mean outright mocking A.I.’s attempts to create, but it’s extremely unlikely that, say, an image generated by Google’s Deep Dream will hang in an art gallery any time soon, even if the same image painted by a person might be taken more seriously.

It’s fair to point out that today’s machine creativity typically involves humans making some of the decisions, but the credit isn’t split between human and machine the way it would be for a movie written by two screenwriters. Rightly or wrongly, we give A.I. the same amount of credit in these scenarios that we might give to the typewriter War and Peace was written on. In other words, very little.

But that could change very soon, because computational creativity is doing a whole lot more than generating funny memes and writing parody scripts. NASA, for example, has employed evolutionary algorithms, which mimic natural selection in machine form, to design satellite components. These components work well, although their human “creators” are at a loss to explain exactly how.
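NASA’s evolved-antenna work used far richer physics simulation, but the core loop of an evolutionary algorithm, mutate candidate designs and keep the fittest, fits in a few lines. The sketch below is illustrative only: the “design” is just a vector of parameters and the objective is a stand-in for a real engineering fitness function.

```python
import random

def evolve(fitness, genome_len=8, pop_size=30, generations=200, seed=0):
    """Minimal mutation-only evolutionary loop: rank designs by fitness,
    keep the top half, fill the rest with mutated copies, repeat.
    Real design systems add crossover, constraints, and simulation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = [[g + rng.gauss(0, 0.05) for g in rng.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

# Stand-in objective: a "design" scores best when every parameter is 0.5.
def target_score(genome):
    return -sum((g - 0.5) ** 2 for g in genome)

best = evolve(target_score)
```

Nothing in the loop knows *why* the surviving designs work, which is exactly the situation NASA’s engineers describe: the process reliably finds good components without producing a human-readable rationale.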

Legal firms, meanwhile, are using A.I. to formulate and hone new arguments and interpretations of the law, which could be useful in a courtroom. In medicine, the U.K.’s University of Manchester is using a robot called EVE to formulate hypotheses for future drugs, devise experiments to test these theories, physically carry out these experiments, and then interpret the results.

IBM’s “Chef Watson” utilizes A.I. to generate its own unique cooking recipes, based on a knowledge of 9,000 existing dishes and an awareness of which chemical compounds work well together. The results are things like Turkish-Korean Caesar salads and Cuban lobster bouillabaisse that no human chef would ever come up with, but which taste good nevertheless.
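IBM hasn’t published Chef Watson’s internals, but the “which compounds work well together” idea is commonly sketched as flavor pairing by shared aroma compounds: ingredients that overlap chemically become candidate combinations. The compound lists below are illustrative stand-ins, not real chemistry data.

```python
# Toy flavor-pairing heuristic: ingredients sharing aroma compounds pair well.
# Compound sets are illustrative examples, not a real food-chemistry database.
compounds = {
    "strawberry": {"furaneol", "linalool", "gamma-decalactone"},
    "tomato":     {"furaneol", "hexanal", "linalool"},
    "basil":      {"linalool", "eugenol"},
    "beef":       {"hexanal", "pyrazines"},
}

def pairing_score(a, b):
    """Number of aroma compounds two ingredients share."""
    return len(compounds[a] & compounds[b])

def best_pairings(ingredient):
    """Rank the other ingredients by shared-compound count."""
    others = [i for i in compounds if i != ingredient]
    return sorted(others, key=lambda o: pairing_score(ingredient, o),
                  reverse=True)

print(best_pairings("strawberry"))  # tomato ranks first (2 shared compounds)
```

Because the ranking is driven by chemistry rather than culinary tradition, a system like this can surface pairings no human cookbook contains, which is the point of dishes like the Turkish-Korean Caesar salad.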

In another domain, video game developer Epic Stars recently used a deep learning A.I. to compose the main theme for its new game Pixelfield, which was then performed by a live orchestra.

(Video: Making of “Battle Royale,” the world’s first A.I.-composed score for a video game)

Finally, newspapers like the Washington Post are forgoing sending human reporters to cover events like the Olympics and letting machines do the job instead. To date, the newspaper’s robo-journalist has written close to 1,000 articles.

Which brings us to our big point: Should a machine’s ability to be creative serve as the ultimate benchmark for machine intelligence? Here in 2017, brain-inspired neural networks are getting bigger, better, and more complicated all the time, but we still don’t have an obvious test to discern when a machine is finally considered intelligent.

Though it’s no longer a serious concern of most A.I. researchers, the most famous test of machine intelligence remains the Turing Test, which suggests that if a machine can fool us into thinking it’s intelligent, we must agree that it is intelligent. The result, unfortunately, is that machine intelligence is reduced to the level of an illusionist’s trick: pulling the wool over the audience’s eyes rather than actually demonstrating that a computer can have a mind.

An alternative approach is an idea called the Lovelace Test, named after the pioneering computer programmer Ada Lovelace. Appropriately enough, Lovelace represented the intersection of creativity and computation: she was the daughter of the Romantic poet Lord Byron, and she worked alongside Charles Babbage on his ill-fated Analytical Engine in the 1800s. Lovelace was impressed by the idea of building the Analytical Engine, but argued that it could never be considered capable of true thinking, since it was only able to carry out pre-programmed instructions. “The Analytical Engine has no pretensions whatever to originate anything,” she famously wrote. “It can do [only] whatever we know how to order it to perform.”

The broad idea of the Lovelace Test involves three separate parts: the human creator, the machine component, and the original idea. The test is passed only if the machine component is able to generate an original idea, without the human creator being able to explain exactly how this has been achieved. At that point, it is assumed that a computer has come up with a spontaneous creative thought. Mark Riedl, an associate professor of interactive computing at Georgia Tech, has proposed a modification of the test in which certain constraints are given — such as “create a story in which a boy falls in love with a girl, aliens abduct the boy, and the girl saves the world with the help of a talking cat.”

“Where I think the Lovelace 2.0 test plays a role is verifying that novel creation by a computational system is not accidental,” Riedl told Digital Trends. “The test requires understanding of what is being asked, and understanding of the semantics of the data it is drawing from.”

It’s an intriguing thought experiment. This benchmark may be one that artificial intelligence has not yet cracked, but surely it’s getting closer all the time. When machines can create patentable technologies, dream up useful hypotheses, and potentially one day write movie scripts that will sell tickets to paying audiences, it’s difficult to call their insights accidental.

To borrow a phrase often attributed to Mahatma Gandhi: “First they ignore you, then they laugh at you, then they fight you, then you win.” Computational creativity has been ignored. Right now, either fondly or maliciously, it is being laughed at. Next it will start fighting our preconceptions, such as the kinds of jobs that qualify as creative, the roles we are frequently assured are safe from automation.

And after that? Just maybe it can win.

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…