On Tuesday, DeepMind announced a long-term project that will see the company’s machine-learning algorithms parse “millions” of eye scans to tease out early warning signs that human doctors might otherwise miss.
The new project, which is based out of the U.K.’s Moorfields Eye Hospital in east London, is the fruit of DeepMind’s ongoing partnership — dubbed DeepMind Health — with the country’s National Health Service. In February, the firm launched Streams, an iPhone app for the Royal Free Hospital in London that alerts doctors and nurses to kidney patient complications that might interfere with particular sorts of treatment. And in March, it announced plans to launch Hark, a task management start-up it acquired earlier this year, in London hospitals.
But the Moorfields program is the first to leverage DeepMind’s machine-learning smarts, and also the first to focus on “purely medical research.”
The firm’s computers will analyze millions of retinal scans collected from the institute’s patients. DeepMind’s researchers will then use those scans to train algorithms to identify early warning signs for chronic diseases like macular degeneration and diabetic retinopathy, according to DeepMind co-founder Mustafa Suleyman.
The impetus was a request by Pearse Keane, a consultant ophthalmologist at Moorfields who became intrigued by the health implications of artificial intelligence after reading about DeepMind’s early successes with Atari video games. “I’d been reading about deep learning and the success that technology had in image recognition,” Keane told The Guardian. “Within a couple of days I got in touch with [Suleyman], and he replied.”
The “incredibly detailed” records provide invaluable source material for the outfit’s neural networks, Suleyman told The Guardian. “There’s so much at stake, particularly with diabetic retinopathy,” he said. “If you have diabetes you’re 25 times more likely to go blind. If we can detect this, and get in there as early as possible, then 98 percent of the most severe visual loss might be prevented.”
Professor Peng Tee Khaw said the collaboration, which has the support of the Royal National Institute of Blind People (RNIB) and charities such as the Macular Society, is not necessarily intended to remove doctors from the prognostic process. Rather, it is meant to make their day-to-day jobs easier by producing yet another resource upon which they can draw — a body of medical insights that might otherwise take years to compile.
“It takes me my whole life experience to follow one patient’s history. And yet patients rely on my experience to predict their future,” he told The Guardian. “If we could use machine-assisted deep learning, we could be so much better at doing this, because then I could have the experience of 10,000.”
Artificial neural networks like those at the heart of DeepMind’s research have already shown great promise in healthcare. NVIDIA recently announced a partnership with Massachusetts General Hospital to apply artificial intelligence techniques to the detection and treatment of diseases, and one neural network model, the subject of a 2012 study, correctly identified coronary artery disease with 91.2-percent accuracy (others have isolated symptomatic patterns in diseases from cancer to diabetes).
“[Machine learning can] improve patient safety, quality of care, reduce medical costs and save lives,” wrote former UW Medicine Health System researcher Peter Ghavami.
But their application has raised privacy concerns. After it was revealed that DeepMind Health would grant the Google-owned firm unrestricted access to 1.6 million NHS patient records from London’s Royal Free, Chase Farm, and Barnet hospitals — covering the past five years and continuing until 2017 — some privacy advocates protested. The Information Commissioner’s Office, the U.K.’s data protection watchdog, began investigating the arrangement in May.
DeepMind, for its part, said that any sensitive data used in the course of research is transmitted securely. “As Googlers, we have the very best privacy and secure infrastructure for managing the most sensitive data in the world,” Suleyman told The Guardian. It also emphasized that participation in the program is voluntary.
“Patients can opt out of any data-sharing system by emailing the Trust’s data protection officer,” according to a Q&A on Moorfields’ website.