One fish, two fish: A.I. labels wildlife photos to boost conservation

The wilderness is vast and varied, home to millions of animal species. For ecologists, identifying and describing those animals is key to successful research. That can prove to be a tall order, but artificial intelligence may be able to help.

In a new report out this week, researchers show how they trained a deep learning algorithm to automatically identify, count, and characterize animals in images. The system used photographs captured by motion-sensing camera traps, which snap pictures of the animals without seriously disturbing them.

“We have shown that we can use computers to automatically extract information from wildlife photos, such as species, number of animals, and what the animals are doing,” Margaret Kosmala, a research associate at Harvard University, told Digital Trends. “What’s novel is that this is the first time it’s been shown that it’s possible to do this as accurately as humans. Artificial intelligence has been getting good at recognizing things in the human domain — human faces, interior spaces, specific objects if well-positioned, streets, and so forth. But nature is messy and in this set of photos, the animals are often only partially in the photo or very close or far away or overlapping. As an ecologist, I find this very exciting because it gives us a new way to use technology to study wildlife over broad areas and long time spans.”
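For a concrete sense of how such a system is typically built, here is a minimal sketch of the general approach: fine-tuning a pretrained convolutional network on labeled camera-trap photos. The folder layout, model choice, and hyperparameters below are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the paper's code): fine-tune a pretrained CNN on
# labeled camera-trap photos so it predicts the species in each frame.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing for the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumed layout: one folder per label, e.g. camera_traps/train/zebra/*.jpg,
# including an "empty" folder for frames with no animal (hypothetical paths).
train_set = datasets.ImageFolder("camera_traps/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Replace the classifier head with one output per label in the dataset.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# One pass over the data; a real run would train for many epochs.
model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

Reaching human-level accuracy, as the study reports, takes far more data and tuning than a sketch like this, but the overall shape of the pipeline is the same: labeled photos in, a species classifier out.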

The researchers used images captured and collected by Snapshot Serengeti, a citizen science project with stealth wildlife cameras spread throughout Tanzania. From elephants to cheetahs, Snapshot Serengeti has gathered millions of wildlife photographs. But the images themselves aren’t as valuable as the data contained within the frame, including details like the number and type of animals.

Automated identification and description has a lot of benefits for ecologists. For years, Snapshot Serengeti crowdsourced the task of describing wildlife images. With the help of some 50,000 volunteers, the group labeled over three million images. It was this treasure trove of labeled imagery that the researchers used to train their algorithm.

Now, rather than turn to citizen scientists, researchers may be able to assign the laborious task to an algorithm, which can quickly process the photographs and label their key details.

“Any scientific research group or conservation group that is trying to understand and protect a species or ecosystem can deploy motion-sensor cameras in that ecosystem,” Jeff Clune, a professor of computer science at the University of Wyoming, said. “For example, if you are studying jaguars in a forest, you can put out a network of motion-sensor cameras along trails. The system will then automatically take pictures of the animals when they move in front of the cameras, and then the A.I. technology will count the number of animals that have been seen, and automatically delete all the images that were taken that do not have animals in them, which turns out to be a lot because motion-sensor cameras are triggered by wind, leaves falling, etcetera.”
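That filtering step can be sketched in a few lines, reusing the trained model, preprocessing, and class list from the snippet above. The “empty” class label, file layout, and the decision to move rather than delete frames are all illustrative assumptions.

```python
# Hedged sketch of the empty-frame filter Clune describes, reusing `model`,
# `preprocess`, and `train_set.classes` from the training snippet above.
from pathlib import Path
import shutil

import torch
from PIL import Image

@torch.no_grad()
def filter_empty_frames(model, preprocess, class_names, photo_dir, discard_dir):
    """Move frames the model predicts as "empty" into discard_dir."""
    model.eval()
    empty_idx = class_names.index("empty")  # hypothetical class label
    discard = Path(discard_dir)
    discard.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(photo_dir).glob("*.jpg")):
        image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        if model(image).argmax(dim=1).item() == empty_idx:
            shutil.move(str(path), str(discard / path.name))

# Example: filter_empty_frames(model, preprocess, train_set.classes,
#                              "camera_traps/incoming", "camera_traps/empty")
```

Moving discarded frames into a separate folder rather than deleting them keeps the step reversible if the classifier misfires; a production tool might delete them outright, as Clune describes.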

A paper detailing the research was published this week in the journal Proceedings of the National Academy of Sciences.
