According to a new project by Barcelona’s Computer Vision Center, the answer is a resounding yes. Scientists at the Computer Vision Center have created a virtual city simulation, called SYNTHIA, which they are using to teach autonomous vehicle AIs to be better drivers.
To build the city simulation, researchers used the same Unity engine that has previously powered a wide range of consumer video games. Employing this engine, the team was able to create a realistic cityscape — which they proceeded to populate with inattentive pedestrians, badly parked buses, and even its own extreme weather system. On top of this, they added the self-driving car AI.
“Our research shows that the gap between virtual and real worlds is getting very small,” German Ros, a research assistant and Deep Learning PhD candidate who worked on the project, tells Digital Trends. “For many perception-related problems, such as semantic segmentation — basic scene understanding — and object recognition, it is now easy to take models trained on a virtual world [and have them] work well in real scenes.”
The advantage of teaching an autonomous car to drive using a virtual world is obvious. While self-driving vehicles are widely considered to be safer drivers than the vast majority of humans, they are still learning how to deal with certain environments. Simulations such as SYNTHIA allow researchers to throw a variety of obstacles at self-driving AIs — ranging from erratic bicyclists to traffic accidents or complex weather patterns — to see how they will cope. These may include statistically rare hazards that are important for a computer to be able to recognize, but which may not come up during even hundreds of hours of training on real roads.
“Virtual environments offer the opportunity of generating the data cases that are most suitable to perform good training,” Ros continues. “The simulator can ‘jump’ from a dark winter scene to a sunny summer scene automatically to maximize the ‘knowledge gain’ of the system. In other words, the system can be set up to give you those images and cases that are [best] suited for your current training state.” In addition, Ros points out that virtual environments are easy to extend by adding new information in the form of objects or situations.
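The selection strategy Ros describes — steering the simulator toward the cases the current model handles worst — resembles what machine-learning practitioners call uncertainty-based or active sampling. Here is a minimal sketch of the idea in Python; the scene names, weather settings, and error log are illustrative assumptions, not details of SYNTHIA itself:

```python
# Hypothetical catalogue of simulator scene configurations
# (names and weather settings are invented for illustration).
SCENES = [
    {"name": "sunny_summer", "weather": "clear"},
    {"name": "dark_winter", "weather": "snow"},
    {"name": "rainy_night", "weather": "rain"},
]

def model_uncertainty(scene, error_log):
    """Stand-in for the model's uncertainty on a scene type: here,
    simply the fraction of past frames of this weather it got wrong."""
    seen = [e for e in error_log if e["weather"] == scene["weather"]]
    if not seen:
        return 1.0  # never trained on it: assume maximal knowledge gain
    return sum(e["wrong"] for e in seen) / len(seen)

def next_scene(scenes, error_log):
    """Pick the scene the current model is weakest on, so each new batch
    of synthetic frames maximizes expected knowledge gain."""
    return max(scenes, key=lambda s: model_uncertainty(s, error_log))

# Example: the model has trained on clear weather and some snow,
# but has never seen rain — so the rainy scene is generated next.
log = [
    {"weather": "clear", "wrong": 0},
    {"weather": "clear", "wrong": 0},
    {"weather": "snow", "wrong": 1},
    {"weather": "snow", "wrong": 0},
]
print(next_scene(SCENES, log)["name"])  # prints "rainy_night"
```

In a real pipeline the uncertainty measure would come from the perception model itself (for example, per-class segmentation error), but the control loop — generate where the model is weakest — is the same.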
The result, the team hopes, should be that when the self-driving cars eventually hit the road, they won't be fazed by whatever scenario comes their way.