Researchers combat the uncanny valley in animations by applying a neural network

The term “uncanny valley” refers to that feeling of discomfort that arises when viewing things like robotic faces and animations that are not quite right. The closer something is to being lifelike without quite being perfect, the more uncomfortable the experience.

The technology industry has been working diligently at solving the problem of the uncanny valley for years now, without complete success. One recent effort in the field of animation aims to bring more natural motion via neural network technology, TechCrunch reports.

The research is being conducted jointly by the University of Edinburgh and Method Studios, and it focuses on replacing huge libraries of custom animations with more fluid motion generated by a machine learning algorithm. The idea is that the algorithm can produce motion more smoothly using a phase function, avoiding mistakes like having an animated figure take a step when it's actually in the middle of a jump.
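The "phase function" idea can be sketched roughly as follows: rather than using one fixed set of network weights, the weights themselves become a smooth, cyclic function of where the character is in its motion cycle (0 = start of a gait cycle, 1 = end), so the output blends continuously instead of snapping between clips. This is a minimal illustration under assumed toy sizes, not the researchers' actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

N_CONTROL = 4            # control weight sets spaced around the cycle (illustrative)
IN, HID = 8, 16          # toy layer sizes

# Independent weight matrices acting as spline control points around the cycle.
W_control = rng.standard_normal((N_CONTROL, HID, IN))

def phase_weights(phase):
    """Blend control weights with a cyclic Catmull-Rom spline at `phase` in [0, 1)."""
    p = (phase % 1.0) * N_CONTROL
    k1 = int(p) % N_CONTROL           # segment start
    k0 = (k1 - 1) % N_CONTROL
    k2 = (k1 + 1) % N_CONTROL
    k3 = (k1 + 2) % N_CONTROL
    t = p - int(p)                    # position within the segment
    w0, w1, w2, w3 = W_control[k0], W_control[k1], W_control[k2], W_control[k3]
    # Standard Catmull-Rom cubic basis: smooth and periodic over the cycle.
    return (
        w1
        + 0.5 * t * (w2 - w0)
        + t * t * (w0 - 2.5 * w1 + 2.0 * w2 - 0.5 * w3)
        + t ** 3 * (-0.5 * w0 + 1.5 * w1 - 1.5 * w2 + 0.5 * w3)
    )

x = rng.standard_normal(IN)           # toy input: character/environment state
for phase in (0.0, 0.25, 0.5):
    h = np.maximum(phase_weights(phase) @ x, 0.0)   # one ReLU layer per phase
    print(phase, h.shape)
```

Because the blend is continuous in the phase, nearby phases produce nearby weights, which is what keeps a step from abruptly cutting into a jump.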


According to the researchers, “Since our method is data-driven, the character doesn’t simply play back a jump animation, it adjusts its movements continuously based on the height of the obstacle.” While the algorithm is not yet ready for use in games and other applications, it is a promising approach for creating more intelligent animation processes.

The details of how the system achieves smooth, more natural human motion, and how it might be applied to any character, are quite technical. However, the concept is straightforward: use an intelligent system that can generate any animated motion on demand rather than selecting from pre-generated animations that might not perfectly fit the specific environment and situation.
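The contrast above can be boiled down to a toy comparison: a clip library can only play back the nearest pre-authored animation, while a data-driven generator adapts its output continuously to the situation, such as the obstacle height in the jump example. The numbers and the clearance margin here are made up for illustration:

```python
# Heights (in meters) the hypothetical clip library was authored for.
CLIP_HEIGHTS = [0.3, 0.6, 0.9]

def library_jump(obstacle):
    """Clip-library approach: play back the closest authored jump.

    The error grows for obstacles between the authored heights.
    """
    return min(CLIP_HEIGHTS, key=lambda h: abs(h - obstacle))

def generated_jump(obstacle):
    """Data-driven stand-in: produce exactly the apex the obstacle needs.

    Idealized here as obstacle height plus a small clearance margin.
    """
    return obstacle + 0.05

obstacle = 0.5
print(library_jump(obstacle))    # snaps to a pre-made clip height
print(generated_jump(obstacle))  # tracks the actual obstacle
```

This is what the researchers mean by the character adjusting its movements continuously based on the height of the obstacle, rather than replaying a canned jump.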

You can learn more about the new technology when Jun Saito of Method Studios and his fellow researchers show off their work at the upcoming SIGGRAPH event in Los Angeles, running July 30 through August 3. The presentation's abstract is available online, as is the full paper with all of the technical details.