Truly creative A.I. is just around the corner. Here’s why that’s a big deal

Joe Kennedy, father of the late President John F. Kennedy, once said that when shoeshine boys start giving you stock tips, the financial bubble has grown too big for its own good.

By that same logic, when Hollywood actors start tweeting about a once-obscure corner of artificial intelligence (A.I.), you know that something big is happening, too. That’s exactly what occurred recently when Zach Braff, the actor-director still best known for his performance as J.D. on the medical comedy series Scrubs, recorded himself reading a Scrubs-style monologue written by an A.I.

“What is a hospital?” Braff reads, adopting the thoughtful tone J.D. used to wrap up each episode in the series. “A hospital is a lot like a high school: the most amazing man is dying, and you’re the only one who wants to steal stuff from his dad. Being in a hospital is a lot like being in a sorority. You have greasers and surgeons. And even though it sucks about Doctor Tapioca, not even that’s sad.”

Yes, it’s nonsense — but it’s charming nonsense. Created by Botnik Studios, which recently used the same statistical predictive tools to write an equally bonkers new Harry Potter story, the A.I. mimics the writing style of the show’s real scripts. It sounds right enough to be recognizable but wrong enough to be obviously the work of a silly machine, like the classic anecdote about the early MIT machine translation software that translated the Biblical saying “The spirit is willing, but the flesh is weak” into Russian and back again, ending up with “The whisky is strong, but the meat is rotten.”
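
Under the hood, this kind of predictive-text writing is, at its simplest, statistical: look at which words tend to follow which in the source scripts, then chain those predictions together. Below is a minimal sketch of that idea as a word-level Markov chain in Python. It is an illustration of the general technique, not Botnik’s actual tool (which keeps a human picking from the machine’s suggestions), and “scrubs_scripts.txt” is just a placeholder corpus file.

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each run of `order` words to the words observed following it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, length=40):
    """Walk the chain, sampling each next word from the observed followers."""
    key = random.choice(list(model.keys()))
    output = list(key)
    for _ in range(length):
        followers = model.get(key)
        if not followers:
            break
        output.append(random.choice(followers))
        key = tuple(output[-len(key):])
    return " ".join(output)

# "scrubs_scripts.txt" stands in for whatever text you train on
with open("scrubs_scripts.txt") as f:
    print(generate(build_model(f.read())))
```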

As Braff’s publicizing of the Scrubs-bot shows, the topic of computational creativity is very much in vogue right now. Once the domain of a few lonely researchers trapped on the fringes of computer science and the liberal arts, the question of whether a machine can be creative is now everywhere. Alongside Botnik’s attempts at Harry Potter and Scrubs, we’ve recently written about a recurrent neural network (RNN) that took a stab at writing the sixth novel in the A Song of Ice and Fire series, better known to TV fans as Game of Thrones. The RNN was trained for its task by reading and analyzing the roughly 5,000 pages of existing novels in the series.
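
In practice, “reading and analyzing” the novels means training the network to predict the next character or word given everything that came before, then sampling from those predictions to produce new text. Here is a bare-bones sketch of a character-level version in Keras; the layer sizes, sequence length, and “books.txt” corpus file are arbitrary stand-ins, not the settings the original project used.

```python
import numpy as np
from tensorflow import keras

text = open("books.txt").read().lower()          # placeholder corpus file
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

# Turn the corpus into (40-character context, next character) training pairs
seq_len = 40
X = np.array([[char_to_idx[c] for c in text[i:i + seq_len]]
              for i in range(len(text) - seq_len)])
y = np.array([char_to_idx[text[i + seq_len]] for i in range(len(text) - seq_len)])

model = keras.Sequential([
    keras.layers.Embedding(len(chars), 64),
    keras.layers.LSTM(128),
    keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, batch_size=128, epochs=5)

# Generate new text one character at a time by sampling from the model's predictions
seed = list(X[0])
for _ in range(300):
    probs = model.predict(np.array([seed[-seq_len:]]), verbose=0)[0].astype("float64")
    seed.append(np.random.choice(len(chars), p=probs / probs.sum()))
print("".join(chars[i] for i in seed))
```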

Larger companies have gotten in on the act, too. Google’s Deep Dream project purposely magnifies some of the recognition errors in the company’s deep learning neural networks to create wonderfully trippy effects.
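
The mechanism behind Deep Dream is simple to state: take a network already trained to recognize images, pick a layer, and repeatedly nudge the input image in whatever direction makes that layer’s activations stronger, so faint patterns the network “thinks” it sees get exaggerated. A rough sketch of that gradient-ascent loop with a pretrained Keras model is below; the choice of InceptionV3, the “mixed3” layer, and the step settings are illustrative assumptions, not Google’s configuration.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Pretrained image classifier; "mixed3" is an arbitrary mid-level layer choice
base = keras.applications.InceptionV3(weights="imagenet", include_top=False)
dream_layer = keras.Model(base.input, base.get_layer("mixed3").output)

def deep_dream(image, steps=50, step_size=0.01):
    """Nudge the image so the chosen layer's activations grow stronger."""
    image = tf.Variable(image)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            loss = tf.reduce_mean(dream_layer(image))  # amplify whatever the layer responds to
        grads = tape.gradient(loss, image)
        grads /= tf.math.reduce_std(grads) + 1e-8      # normalize the step
        image.assign_add(grads * step_size)
        image.assign(tf.clip_by_value(image, -1.0, 1.0))
    return image.numpy()

# "photo.jpg" is a placeholder input image
img = keras.utils.img_to_array(keras.utils.load_img("photo.jpg", target_size=(299, 299)))
dreamed = deep_dream(keras.applications.inception_v3.preprocess_input(img[np.newaxis]))
```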

Right now, we’re at the “laughter” stage of computational creativity, for the most part. That doesn’t have to mean outright mocking A.I.’s attempts to create, but it’s extremely unlikely that, say, an image generated by Google’s Deep Dream will hang in an art gallery any time soon — even if the same image, painted by a person, might be taken more seriously.

It’s fair to point out that today’s machine creativity typically involves humans making some of the decisions, but the credit isn’t split between human and machine the way it would be for a movie written by two screenwriters. Rightly or wrongly, we give the A.I. about as much credit in these scenarios as we might give the typewriter that “War and Peace” was written on. In other words, very little.

But that could change very soon. Because computational creativity is doing a whole lot more than generating funny memes and writing parody scripts. NASA, for example, has employed evolutionary algorithms, which mimic natural selection in machine form, to design satellite components. These components work well — although their human “creators” are at a loss to explain exactly how.
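
The basic recipe behind those components is easy to sketch, even if the resulting designs are not easy to explain: keep a population of candidate designs, score each against a fitness function, then breed the best performers with a little random mutation and repeat. A toy version in Python follows; the five-number “design” and its fitness function are stand-ins for a real engineering simulation, which is where the actual difficulty lives.

```python
import random

TARGET = [0.5, 0.1, 0.9, 0.3, 0.7]   # toy stand-in for the desired performance profile

def fitness(design):
    """Higher is better; a real version would call an engineering simulator."""
    return -sum((d - t) ** 2 for d, t in zip(design, TARGET))

def crossover(a, b):
    """Mix two parent designs parameter by parameter."""
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(design, rate=0.05):
    """Randomly perturb each parameter a little."""
    return [d + random.gauss(0, rate) for d in design]

# Evolve a population of 5-parameter designs over 100 generations
population = [[random.random() for _ in range(5)] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                                  # keep the fittest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

print("best design:", population[0], "fitness:", round(fitness(population[0]), 4))
```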

Law firms, meanwhile, are using A.I. to formulate and hone new arguments and interpretations of the law that could prove useful in the courtroom. In medicine, the U.K.’s University of Manchester is using a robot called EVE to formulate hypotheses for future drugs, devise experiments to test these theories, physically carry out those experiments, and then interpret the results.

IBM’s “Chef Watson” utilizes A.I. to generate its own unique cooking recipes, based on a knowledge of 9,000 existing dishes and an awareness of which chemical compounds work well together. The results are things like Turkish-Korean Caesar salads and Cuban lobster bouillabaisse that no human chef would ever come up with, but which taste good nevertheless.
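
As a loose illustration of the second ingredient — and only an illustration, not IBM’s actual method or data — you can score an unusual ingredient combination by counting the flavor compounds its members have in common. The tiny compound lists below are hypothetical stand-ins for a real flavor-chemistry database.

```python
from itertools import combinations

# Tiny, illustrative compound lists; a real system would use a full flavor database
compounds = {
    "strawberry": {"furaneol", "linalool", "methyl cinnamate"},
    "basil":      {"linalool", "estragole", "eugenol"},
    "chocolate":  {"furaneol", "vanillin", "pyrazine"},
}

def pairing_score(ingredients):
    """Count shared flavor compounds across every pair of ingredients."""
    return sum(len(compounds[a] & compounds[b]) for a, b in combinations(ingredients, 2))

print(pairing_score(["strawberry", "basil", "chocolate"]))   # higher = more compounds in common
```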

In another domain, video game developer Epic Stars recently used a deep learning A.I. to compose the main theme for its new game Pixelfield, which was then performed by a live orchestra.

Finally, newspapers like the Washington Post are eschewing sending human reporters to cover events like the Olympics, letting machines do the job instead. To date, the newspaper’s robo-journalist has written close to 1,000 articles.

Which brings us to our big point: Should a machine’s ability to be creative serve as the ultimate benchmark for machine intelligence? Here in 2017, brain-inspired neural networks are getting bigger, better, and more complicated all the time, but we still don’t have an obvious test for deciding when a machine should finally be considered intelligent.

While it’s no longer taken all that seriously by most A.I. researchers, the most famous test of machine intelligence remains the Turing Test, which holds that if a machine can fool us into thinking it is intelligent, we must agree that it is intelligent. The result, unfortunately, is that machine intelligence is reduced to the level of an illusionist’s trick — an attempt to pull the wool over the audience’s eyes rather than a demonstration that a computer can have a mind.

An alternative approach is an idea called the Lovelace Test, named after the pioneering computer programmer Ada Lovelace. Appropriately enough, Lovelace represented the intersection of creativity and computation: she was the daughter of the Romantic poet Lord Byron, and she worked alongside Charles Babbage on his ill-fated Analytical Engine in the 1800s. Lovelace was impressed by the idea of the Analytical Engine, but argued that it could never be considered capable of true thinking, since it was only able to carry out pre-programmed instructions. “The Analytical Engine has no pretensions whatever to originate anything,” she famously wrote. “It can do [only] whatever we know how to order it to perform.”

The broad idea of the Lovelace Test involves three separate parts: the human creator, the machine component, and the original idea. The test is passed only if the machine component is able to generate an original idea, without the human creator being able to explain exactly how this has been achieved. At that point, it is assumed that a computer has come up with a spontaneous creative thought. Mark Riedl, an associate professor of interactive computing at Georgia Tech, has proposed a modification of the test in which certain constraints are given — such as “create a story in which a boy falls in love with a girl, aliens abduct the boy, and the girl saves the world with the help of a talking cat.”

“Where I think the Lovelace 2.0 test plays a role is verifying that novel creation by a computational system is not accidental,” Riedl told Digital Trends. “The test requires understanding of what is being asked, and understanding of the semantics of the data it is drawing from.”

It’s an intriguing thought experiment. Artificial intelligence may not have cracked this benchmark yet, but it is surely getting closer all the time. When machines can create patentable technologies, dream up useful hypotheses, and perhaps one day write movie scripts that sell tickets to paying audiences, it becomes difficult to call their insights accidental.

To borrow a phrase often attributed to Mahatma Gandhi: “First they ignore you, then they laugh at you, then they fight you, then you win.” Computational creativity has been ignored. Right now, either fondly or maliciously, it is being laughed at. Next it will start fighting our preconceptions — such as our assumptions about which jobs qualify as creative, the roles we are frequently assured are safe from automation.

And after that? Just maybe it can win.
