
Google’s A.I. can now detect breast cancer more accurately than doctors can

Google’s artificial intelligence technology can spot signs of breast cancer in women more accurately than doctors, according to a new study.

The study, published on Wednesday, January 1, in the scientific journal Nature, found that using the A.I. system reduced both false positives and false negatives in breast cancer diagnosis.

The A.I. technology was used to analyze mammograms from more than 15,000 women in the United States and more than 76,000 women in the United Kingdom. The program reduced false positives by 5.7% for women in the U.S. and by 1.2% for those in the U.K. False negatives were reduced by 9.4% in the U.S. and by 2.7% in the U.K.

The advanced A.I. system proved more accurate than human experts who had knowledge of a patient’s history, even when doctors performed a second reading of the mammogram results, according to the study.

“The performance of even the best clinicians leaves room for improvement,” the study reads. “A.I. may be uniquely poised to help with this challenge.”

The study was a collaboration between Google Health, Northwestern University, Cancer Research U.K. Imperial Centre, and Royal Surrey County Hospital. 

“Looking forward to future applications, there are some promising signs that the model could potentially increase the accuracy and efficiency of screening programs, as well as reduce wait times and stress for patients,” wrote Shravya Shetty, technical lead, and Daniel Tse, product manager, at Google Health in a blog post announcing the initial findings.

The use of A.I. technology to improve cancer screening could be groundbreaking, considering that one in eight U.S. women will develop breast cancer in her lifetime, according to the American Cancer Society.

Aside from health applications, Google’s A.I. technology is also being used to identify different species of animals in a new wildlife conservation program that aims to better protect and monitor animals throughout the world.

The ultimate goal is to identify different animal species faster. According to the blog post, human experts can look through 300 to 1,000 images per hour, while Google’s A.I. technology can analyze 3.6 million photos an hour and automatically classify the animals in them.

Allison Matyus
Former Digital Trends Contributor