A.I. translation tool sheds light on the secret language of mice

Breaking the communication code

Ever wanted to know what animals are saying? Neuroscientists at the University of Delaware have taken a big leap forward in decoding the sounds of one particular animal, getting closer than anyone has before. The animal in question? The humble mouse.

To study mouse vocalizations, the team gathered data as groups of four mice — two males and two females — interacted for five hours at a time in a chamber kitted out with eight microphones and a video camera. In total, the researchers recorded encounters between 44 mice. From the enormous amounts of resulting video and audio data, the researchers then used machine learning to develop a system able to connect specific sounds with distinct animal behaviors. In short, it could work out which mouse was squeaking, where, and in what scenario.

“To link mouse vocalizations to specific actions, we needed multiple technological advances,” University of Delaware neuroscientist Joshua Neunuebel told Digital Trends. “First, we needed to be able to assign specific vocalizations to individual mice that were socially interacting. To do this, we developed a sound source localization system that simultaneously recorded mouse ultrasonic vocalizations on eight different microphones, as well as the position of the mice with a video camera.”
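Sound source localization of this kind typically works by comparing when the same sound arrives at different microphones. The team's actual system is far more sophisticated, but as a rough, hypothetical illustration, here is a minimal sketch of estimating the arrival delay between two microphones using cross-correlation (all signals and numbers below are invented):

```python
import numpy as np

def estimate_delay(sig_a, sig_b, sample_rate):
    """Estimate how much later (in seconds) sig_b arrives relative to
    sig_a, from the peak of their cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)
    return lag / sample_rate

# Toy example: the same decaying chirp arriving 5 samples later
# on the second microphone.
rate = 1000  # samples per second (toy value)
t = np.arange(200)
chirp = np.sin(2 * np.pi * 40 * t / rate) * np.exp(-t / 50)
mic_a = np.zeros(400)
mic_a[100:300] = chirp
mic_b = np.zeros(400)
mic_b[105:305] = chirp
print(estimate_delay(mic_a, mic_b, rate))  # → 0.005
```

With delays like this from several microphone pairs, plus the animals' positions from the video, the signal can be attributed to the mouse whose location best matches.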

The combination of microphones and camera allowed the team to estimate where a particular vocal signal was emitted and then assign the signal to a specific mouse. Once they could assign vocalizations to specific animals, the team used an unsupervised learning algorithm, which groups items with similar features, to categorize the calls. Finally, they used a tool called JAABA, the Janelia Automatic Animal Behavior Annotator, to automatically extract specific social behaviors with high fidelity.
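The unsupervised grouping step can be pictured with a toy k-means clustering run. This is not the researchers' algorithm or data; the call features below (duration, peak frequency) are invented purely to show how similar calls fall into the same group:

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means: assign each feature vector to one of k clusters."""
    rng = np.random.default_rng(seed)
    # Start from k randomly chosen data points as cluster centers.
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute centers.
        dists = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = np.argmin(dists, axis=1)
        centers = np.array([points[labels == c].mean(axis=0)
                            for c in range(k)])
    return labels, centers

# Invented call features: (duration in ms, peak frequency in kHz).
calls = np.array([
    [30, 60], [32, 62], [28, 58],     # short, high-pitched calls
    [120, 40], [118, 42], [125, 38],  # long, lower-pitched calls
], dtype=float)
labels, centers = kmeans(calls, k=2)
print(labels)
```

The short, high-pitched calls end up in one cluster and the long, lower-pitched ones in the other; categories like these can then be matched against the behaviors JAABA extracts from the video.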

“It’s not necessarily a translational tool per se, but it’s a tool that helps us interpret mouse social behaviors,” Neunuebel said. “However, this being said, mice are good models for understanding the neural basis of social behavior, which may ultimately shed light upon how the brain circuitry of humans is functioning.”

The work is the subject of two new papers, published in the journals Nature Neuroscience and Scientific Reports. The two papers cover different aspects of the research: how communication shapes social behavior, and the neural networks that encode this information.

As Neunuebel says, they haven’t developed a full-fledged human-to-mouse translation tool. Nonetheless, this work — alongside similar research into communications of animals like dolphins — certainly brings us closer to understanding the subtleties of animal chat.

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…