Artificial intelligence can now identify a bird just by looking at a photo

Artificial intelligence has proven useful in many different areas, and now birdwatching has gotten the A.I. treatment. A new A.I. tool can identify 200 different species of birds from a single photo. 

The technology comes from a team at Duke University that trained a machine on more than 11,000 photos of 200 bird species. Shown birds ranging from ducks to hummingbirds, the tool learned to pick out the specific patterns that mark a particular species. 

“Along the way, it spits out a series of heat maps that essentially say: ‘This isn’t just any warbler. It’s a hooded warbler, and here are the features — like its masked head and yellow belly — that give it away,’” wrote Robin Smith, senior science writer in Duke’s communications department, in a blog post about the new technology. 
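The heat maps described above come from comparing patches of the photo against learned "prototype" parts, such as a masked head. A minimal sketch of that idea, assuming a feature extractor has already turned the photo into a grid of patch embeddings (all names and shapes here are illustrative, not Duke's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 7x7 grid of 64-dimensional patch features from a CNN backbone
patch_features = rng.normal(size=(7, 7, 64))

# One learned "prototype" vector per distinguishing part (e.g., a masked head)
prototype = rng.normal(size=(64,))

def similarity_heatmap(features, proto):
    """Cosine similarity between every image patch and one prototype.

    High values mark where the photo most resembles the learned part,
    which is what gets rendered as a heat map over the bird."""
    norms = np.linalg.norm(features, axis=-1) * np.linalg.norm(proto)
    return (features @ proto) / norms

heatmap = similarity_heatmap(patch_features, prototype)
print(heatmap.shape)  # one similarity score per image patch
```

Summing or max-pooling such per-prototype maps over all of a species' prototypes gives a score for that species, which is the "this looks like that" reasoning the quote describes.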

The researchers included Duke computer science Ph.D. student Chaofan Chen, Duke undergraduate Oscar Li, team members from the Prediction Analysis Lab, and Duke professor Cynthia Rudin. The team found that the machine-learning model correctly identified bird species 84% of the time. 

Essentially, the technology is similar to facial-recognition software, which remembers faces on social media sites to suggest tags or to identify people in surveillance videos. Unlike other, more controversial facial-recognition software, however, the Duke technology is designed to be transparent about how the machine learns identifiable features. 

“[Rudin] and her lab are designing deep learning models that explain the reasoning behind their predictions, making it clear exactly why and how they came up with their answers. When such a model makes a mistake, its built-in transparency makes it possible to see why,” the blog post reads. 

The hope is to take this technology to another level so it can be used to classify areas in medical images, such as identifying a lump in a mammogram. 

“It’s case-based reasoning,” Rudin said. “We’re hoping we can better explain to physicians or patients why their image was classified by the network as either malignant or benign.”

Digital Trends reached out to Duke University to ask what other uses the new tool might have, but we haven't heard back. 

Allison Matyus
Former Digital Trends Contributor
Allison Matyus is a general news reporter at Digital Trends. She covers any and all tech news, including issues around social…