
IBM’s new A.I. predicts chemical reactions, could revolutionize drug development

“Found in Translation”: Predicting Outcomes of Complex Organic Chemistry Reactions
From building the Deep Blue computer that beat Garry Kasparov at chess to the Watson artificial intelligence (A.I.) that won Jeopardy!, IBM has been responsible for some high-profile public demonstrations of A.I. in action. Its latest showcase is less high concept but potentially far more transformative: applying machine learning to organic chemistry.

As described in a new research paper, the A.I. chemist is able to predict the outcomes of chemical reactions in a way that could prove invaluable for fields like drug discovery. To do this, it draws on a data set of 395,496 reactions compiled from thousands of research papers published over the years.
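To make that more concrete, here is a minimal, purely illustrative sketch of how a single reaction can be stored as a machine-readable “reaction SMILES” record and split into its reactant, reagent, and product parts. The record format, helper function, and example esterification are assumptions for illustration; they are not taken from IBM’s actual data pipeline.

```python
# Illustrative only: a reaction written as a "reaction SMILES" string in the
# common "reactants>reagents>products" layout. The example (acid-catalyzed
# esterification of acetic acid with ethanol) stands in for the hundreds of
# thousands of literature-derived records such a data set would hold.

def split_reaction_smiles(rxn):
    """Split a reaction SMILES string into reactant, reagent, and product parts."""
    reactants, reagents, products = rxn.split(">")
    return reactants, reagents, products

example = "CC(=O)O.CCO>[H+]>CCOC(C)=O.O"  # acetic acid + ethanol -> ethyl acetate + water

reactants, reagents, products = split_reaction_smiles(example)
print("reactants:", reactants)  # CC(=O)O.CCO
print("reagents: ", reagents)   # [H+]
print("products: ", products)   # CCOC(C)=O.O
```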

Teo Laino, one of the researchers on the project at IBM Research in Zurich, told Digital Trends that it is a great example of how A.I. can draw on quantities of knowledge that would be astonishingly difficult for a human to master, particularly when that knowledge is being updated all the time.

“When I was a student, it was still possible to spend one afternoon a week in the library, and to have an overview of all the articles that were being officially published in journals,” he said. “Nowadays that is nearly impossible — even if you use filters to make sure that every article is relevant to me, there is just not enough time. A system that can leverage a big mass of information in organic chemistry is therefore incredibly useful. That was the motivation from an inspirational point of view.”

The IBM project approaches organic chemistry in a slightly unusual way: it models reaction prediction on the algorithms more commonly used for Google Translate-style machine translation. By learning the “syntax” of reactions, the system predicts the correct outcome 80 percent of the time. That is not perfect, but it could still dramatically cut the time required to investigate the millions of chemical reactions that have never been documented.
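To give a rough sense of how that translation analogy plays out in code, the sketch below tokenizes SMILES strings character by character and feeds them through a tiny encoder-decoder network, treating the reactant side as the “source sentence” and the product side as the “target sentence.” Everything here is an assumption for illustration (PyTorch, a toy character vocabulary, a small GRU-based model, a single hand-written reaction); IBM’s actual system is far more sophisticated and draws on the full set of 395,496 reactions described above.

```python
# A minimal sketch of "reaction prediction as machine translation" (not IBM's code).
import torch
import torch.nn as nn

PAD, SOS, EOS = 0, 1, 2
CHARS = sorted(set("()[]=#+-.>0123456789BCFHINOPSclnos"))  # toy SMILES character vocabulary
STOI = {c: i + 3 for i, c in enumerate(CHARS)}

def encode(smiles):
    """Turn a SMILES string into a tensor of token ids with start/end markers."""
    return torch.tensor([SOS] + [STOI[c] for c in smiles] + [EOS])

class ReactionSeq2Seq(nn.Module):
    """Character-level encoder-decoder: reactants/reagents in, products out."""
    def __init__(self, vocab_size, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden, padding_idx=PAD)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt):
        _, state = self.encoder(self.embed(src))       # summarize the "source sentence"
        dec, _ = self.decoder(self.embed(tgt), state)  # teacher-forced decoding
        return self.out(dec)                           # logits over the next character

# One toy "sentence pair": the reactant side translates to the product side.
src = encode("CC(=O)O.CCO").unsqueeze(0)   # acetic acid + ethanol
tgt = encode("CCOC(C)=O.O").unsqueeze(0)   # ethyl acetate + water

model = ReactionSeq2Seq(vocab_size=len(STOI) + 3)
logits = model(src, tgt[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), tgt[:, 1:].reshape(-1), ignore_index=PAD
)
print(round(loss.item(), 3))  # training would drive this loss down across the whole data set
```

In a real system of this kind, the model would be trained on the full corpus of reactions rather than a single hand-written pair, and at prediction time the product SMILES would be decoded token by token from the reactants alone.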

“Whenever you talk about A.I. systems, people have fears about being replaced,” Laino said. “That’s not the case here. The way we envisage this being used, whether it’s an academic or commercial application, is by augmenting the abilities of human beings.”

At present, the tool is not publicly available, although that will change early in the new year. For now, you can register your interest online to receive a notification as soon as the service goes live. “The plan is to make it available in three months’ time, and definitely before the end of the first quarter next year,” said Laino.
