You only need to visit Digital Trends on a day of the week that ends with “day” to see that there’s no shortage of inventors of some stripe doing amazing things with artificial intelligence. But can artificial intelligence itself invent anything? That might sound like an abstract hypothetical, but it’s one that recently received an official court ruling. The official answer: No, it can’t. But not everyone agrees.
The United Kingdom’s High Court recently dismissed an appeal, ruling that machines cannot be credited as inventors under the Patents Act. Stephen Thaler, creator of a “Creativity Machine” called DABUS, had argued that it had invented a patentable emergency warning light and an interlocking food container design. The judge in the case did not agree, holding that an inventor must be a natural person and not a machine. All settled then, right?
Perhaps not so quickly. While this could be the end of the road for this particular claim from Thaler, such complicated — and important — questions are not easily settled with a gavel. Thaler’s argument is that DABUS is fundamentally different from other A.I. systems.
Illogical steps
“Beginning as a swarm of many artificial neural nets, each containing interrelated patterns spanning some conceptual space, [the DABUS] system uses simple, mathematically expressed learning rules to combine them, with no predetermined objective,” Thaler told Digital Trends. “The system generates a progression of new concepts in the form of chained neural nets, containing the bridging logic between nets, as well as the effects of the base concept, likewise encoded as chains of neural nets. Now taking the form of 2D and 3D geometrical objects, these chain-encoded notions may be classified in the same way neural nets have recognized faces and dogs for more than three decades now. Most importantly, if any of these chains incorporates a net containing significant memories — for example, the equivalent of life or death in humans — monitoring nets may trigger the release of simulated neurotransmitters that reinforce or destroy forming chains.”
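To make that slightly more concrete, here is a deliberately toy-scale, purely illustrative sketch of the general loop Thaler describes: candidate concept chains are proposed with no fixed objective, a stand-in monitoring signal scores them, and only the “significant” chains are reinforced. This is not DABUS, and the concept names and scoring function are invented placeholders.

```python
import random

# A purely illustrative toy, NOT DABUS: each "concept net" is reduced to a
# labeled node, chains of concepts are proposed at random with no fixed
# objective, and a stand-in "monitor" reinforces or discards each chain.

CONCEPTS = ["beacon", "fractal surface", "container wall",
            "pulse pattern", "interlocking edge"]

def propose_chain(length=3):
    """Bridge a few concepts together at random."""
    return random.sample(CONCEPTS, length)

def significance(chain):
    """Placeholder for the monitoring nets: here just a random score."""
    return random.random()

def dream(cycles=1000, threshold=0.95):
    reinforced = []
    for _ in range(cycles):
        chain = propose_chain()
        score = significance(chain)
        if score > threshold:      # "simulated neurotransmitters" reinforce it
            reinforced.append((score, chain))
        # chains below the threshold are simply forgotten
    return sorted(reinforced, reverse=True)[:5]

if __name__ == "__main__":
    for score, chain in dream():
        print(f"{score:.2f}  " + " -> ".join(chain))
```

The real system, as Thaler describes it, replaces each of these placeholders with actual neural networks; the toy is only meant to show the shape of the generate-and-filter loop.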
Got that? Good. Thaler has been arguing about robot invention for years, dating back at least as far as 1996 and an A.I.-aided toothbrush project that became the Oral-B CrossAction toothbrush. A quarter-century later, he’s still battling. And, with credit to Thaler, it’s a topic that’s worth scrutinizing.
Patentese is a painful language to read and decipher. The key to whether a patent is granted comes down to the idea of an illogical step. In Europe this is called an “inventive step,” while Americans typically refer to it as “non-obviousness.” The idea, in both instances, is the same: Would an idea have been reached if a creator were to plod along a predictable path and claim the end result as his or her invention? If so, it shouldn’t be awarded a patent, which is reserved for ideas that, in some manner, creatively deviate from what everyone else is doing. This is the essence of creativity, and it’s why, in a larger sense, it matters whether or not a machine can invent something patentable.
“It’s an issue because the law has traditionally held that only a person can invent, but that rule was [made] to protect the rights of human inventors to be acknowledged and to prevent corporate inventors,” Ryan Abbott, professor of law and author of the book The Reasonable Robot: Artificial Intelligence and the Law, told Digital Trends. “It was done without thinking about whether an A.I. could do the sorts of things that human inventors were doing. Now, that has a bearing on whether A.I. output can be protected, when an A.I. is stepping into the shoes of human inventors. That, in turn, influences how A.I. will be developed and used for R&D and whether companies will be able to adopt A.I.-based solutions where they are more efficient means of innovating.”
Humans want to take credit for creations
For the most part, it seems likely that humans are going to want to take credit for machine labor. Owning a valuable patent can be, unsurprisingly, lucrative, and humans spend money better than machines do. Neural networks can be, and already are, used to help Hollywood types tweak scripts and decide which projects to greenlight. They’re used to help predict which songs will be hits and, perhaps accordingly, dictate the direction of the writing process. They’ve helped to generate artworks that have sold for enormous sums at auction. They’ve assisted architects and designers in generating endless variations on a theme to create, say, hundreds of chairs that look similar but are each slightly different. They’re also increasingly used to help hypothesize and test new drug formulations.
However, very few people are rushing to credit A.I.s as serious co-authors, co-designers, co-studio executives, co-pharmacists and the like. A.I. is treated as a tool, in much the same way that paint and gravity are not considered co-artists of a Jackson Pollock drip painting.
As Abbott points out, the rules about who can technically invent something were also created with corporations in mind. A company can be the owner of a patent, but it can never be listed as an inventor. At present, the only people with an interest, beyond a philosophical one, in getting an A.I. credited as a creator are the people who create the machines. To be able to sell a piece of software that can assist human employees in a task is one thing. To be able to sell a piece of software that can, itself, originate ideas innovative enough that a court will agree to patent them is something else.
The judge in the DABUS case thinks A.I. (or, at least, this specific A.I.) is not yet at that point. “This is a public perception battle where big money and folk psychology usually prevail,” Thaler suggested. “No, we are not winning [the battle] yet, largely because the world expects this A.I. invention, DABUS, to come out of a big tech company or an Ivy League university, or to use the same methodology promoted by them and their academic cliques, all backed by tremendous Madison Avenue budgets.”
A changing landscape
Setting aside the specific qualities of DABUS, and whether its inventions are novel enough to receive patents, Thaler seems correct in his belief that, to really gain attention, a claim like this would have to come from a Google DeepMind or similar. But, even then, a patent battle is unlikely to rouse interest in the way that, for instance, a Go-playing robot challenging the world’s best player does. Nonetheless, it is highly significant.
In The Faculty, Robert Rodriguez and Kevin Williamson’s excellent 1990s update of Invasion of the Body Snatchers, one of the characters comments on why aliens would choose to start an invasion of Earth in Ohio of all places. “If you were going to take over the world, would you blow up the White House Independence Day style, or sneak in through the back door?” quips Casey, the geeky character played by Elijah Wood.
In this analogy, patent squabbles over A.I.-invented interlocking food containers are just such a back door. To ask whether a non-human entity can be granted a patent is to ask whether an A.I. can invent something novel, which is to ask whether a machine can create something, which means doing a lot more than just following its programming. Machine learning systems able to rewrite themselves in response to new experiences, or reinforcement learning algorithms able to form unusual new strategies to beat classic Atari games, suggest things aren’t quite as clear-cut as they might appear.
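That last point is worth dwelling on, because of how little explicit strategy a programmer actually writes in a reinforcement learning setup. The minimal sketch below is a generic tabular Q-learning loop on a made-up five-cell corridor, not any particular Atari-playing system: only the reward and the update rule are hand-coded, and whatever behavior emerges is derived rather than dictated.

```python
import random

# A minimal, generic tabular Q-learning loop on a toy five-cell corridor.
# The agent starts in the middle and is rewarded only for reaching the
# right-hand end; no strategy is written down anywhere, just the reward
# and the update rule.

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                      # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration
for episode in range(200):
    state, done = 2, False
    while not done:
        if random.random() < epsilon:   # occasionally explore at random
            action = random.choice(ACTIONS)
        else:                           # otherwise exploit, breaking ties randomly
            action = max(ACTIONS, key=lambda a: (q[(state, a)], random.random()))
        nxt, reward, done = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# Typically the learned policy is "move right" in every cell, behavior the
# programmer never spelled out.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})
```

Scaled up from a five-cell corridor to an Atari game, with the lookup table swapped for a deep network, this is broadly the kind of training loop that has produced strategies human players hadn’t considered.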
Machines aren’t inventing things just yet in the eyes of the law. But the case that they can gets stronger every day.