
Could unsupervised A.I. enable autonomous cars to learn as they go?

Cortica Automotive

Most autonomous vehicle tech ventures, such as Waymo, GM Cruise, and Nvidia, rack up miles of deep learning experience to build reliably safe systems for self-driving cars. Cortica and Renesas Electronics’ entirely different approach focuses on helping cars learn on their own.


Cortica, an Israeli company with roots in predictive artificial intelligence based on visual perception, is embedding its latest “Autonomous A.I.” software on the Renesas R-Car V3H system-on-chip (SoC) for self-driving cars.

Cortica’s autonomous A.I., which the companies refer to as “unsupervised learning,” enables a vehicle to make predictions based on visual data received from its forward-facing cameras. According to Cortica, the system uses “‘unsupervised learning’ methodology to mimic the way humans experience and incorporate the world around them.”
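
To make the distinction concrete, here is a minimal illustrative sketch (ours, not Cortica’s proprietary method) of what unsupervised visual learning means in general: grouping camera-frame feature vectors into clusters without any human-supplied labels. The feature extractor and data below are placeholders.

    # Illustrative only: generic unsupervised clustering of image features.
    # This is NOT Cortica's method; the data here are random placeholders
    # standing in for features extracted from forward-facing camera frames.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(seed=0)

    # Placeholder: 1,000 frames, each summarized as a 128-dim feature vector.
    frame_features = rng.normal(size=(1000, 128))

    # Unsupervised step: discover visual groupings with no labels at all.
    # A supervised system would instead need humans to tag each frame
    # ("pedestrian," "mattress," ...) before training.
    clusters = KMeans(n_clusters=10, n_init=10).fit_predict(frame_features)

    # Each frame now belongs to a machine-discovered grouping rather than
    # a human-defined rule.
    print(clusters[:20])

The point of the sketch is the absence of labels: the system organizes what it sees on its own, rather than being trained against human-annotated examples.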

The goal is for the car to be able to react to any situation, whether or not deep learning A.I. has previously converted the objects or circumstances into rules. For example, if a mattress flies off the back of a pickup truck ahead of you at highway speed, would you rather be in an autonomous vehicle managed by a system of rules drawn from specific prior experiences, or by a system that observes objects in motion and reacts based on how various object classes are likely to move?
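
As a toy illustration of reacting to motion rather than matching rules (again ours, not Cortica’s algorithm), a system can extrapolate an unfamiliar object’s trajectory from a few camera observations and widen its safety margin according to an assumed motion prior for that class of object. All class names and numbers below are invented.

    # Toy sketch (not Cortica's algorithm): extrapolate an unknown object's
    # motion and pad the prediction with a per-class safety margin.
    import numpy as np

    def predict_position(observed_xy: np.ndarray, dt: float, horizon: float) -> np.ndarray:
        """Constant-velocity extrapolation from positions sampled every dt seconds."""
        velocity = (observed_xy[-1] - observed_xy[0]) / (dt * (len(observed_xy) - 1))
        return observed_xy[-1] + velocity * horizon

    # Hypothetical priors: how erratically each object class tends to move (meters).
    CLASS_MARGIN_M = {"rigid_debris": 0.5, "light_debris": 3.0, "pedestrian": 1.5}

    # A mattress glimpsed in three consecutive frames, 0.1 s apart, drifting
    # leftward as it tumbles off the truck ahead.
    obs = np.array([[30.0, 0.0], [28.5, 0.4], [27.0, 0.9]])
    where = predict_position(obs, dt=0.1, horizon=1.0)
    margin = CLASS_MARGIN_M["light_debris"]
    print(f"Expected position in 1 s: {where}; keep clear within ±{margin} m")

A rules-based system that has never seen a flying mattress has no rule to fire; a motion-based system still gets a usable prediction from the object’s observed trajectory.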

According to Cortica, its autonomous A.I. makes relatively low computing demands compared to deep learning systems, which allows it to achieve greater perception accuracy and performance. Referring to the collaboration with Renesas ahead of a demonstration at CES 2019, Cortica stated in a news release:

For the first time, the collaborative effort will introduce a more robust and scalable open-platform perception solution featuring unmatched accuracy and performance rates, faster reaction time, and overall safety upgrades for ADAS. The solution demo by Cortica at CES will demonstrate a new generation of safer, smarter, and more ‘aware’ auto running directly on the Renesas chip with unparalleled execution times. 

One hundred percent predictability is an illusory goal for autonomous vehicle systems: everyone wants error-free performance, but no system will ever have a perfect record. Still, as with horseshoes and hand grenades, the closer you get, the better the outcome.

Nearly all fatal accidents in the United States involve human error. In the 2016 U.S. Department of Transportation Fatal Traffic Crash Data report, the National Highway Traffic Safety Administration (NHTSA) stated, “NHTSA continues to work closely with its state and local partners, law enforcement agencies, and the more than 350 members of the Road to Zero Coalition to help address the human choices that are linked to 94 percent of serious crashes.”

The NHTSA is an active force in self-driving vehicle development. In September 2016, the agency released its Federal Automated Vehicles Policy.

The DOT fatal crash data report also ties autonomous vehicle systems to its traffic safety goals, stating, “NHTSA also continues to promote vehicle technologies that hold the potential to reduce the number of crashes and save thousands of lives every year, and may eventually help reduce or eliminate human error and the mistakes that drivers make behind the wheel.”

Bruce Brown, Contributing Editor