Google’s autonomous cars continue to putter about the streets of California as the company tests the technology it hopes will one day transform the way we get from A to B.
So far the trials look to be going pretty well, with newly released data revealing a drop in the number of times a human driver has had to take over one of its self-driving cars while being tested on the streets.
A document filed recently with the California Department of Motor Vehicles revealed that in the 14-month period between September 2014 and November 2015, 49 of the Mountain View company’s self-driving cars experienced 341 disengagements while covering more than 424,000 miles. A disengagement occurs when a vehicle’s on-board computer suddenly hands control back to the test driver, or when the driver feels the need to intervene.
For 272 of the disengagements, reasons included unclear sensor data, issues with steering or braking, and problems with the car’s technology. For the other 69, the driver chose to take over after judging that the car was about to perform an unexpected action or maneuver. Eighty-nine percent of all interventions took place on city streets, which makes sense considering the prevalence of pedestrians, other vehicles, and intersections compared to out-of-town roads.
The results cover both Google’s modified Lexus vehicles and its smaller, so-called “koala” cars.
The number of miles driven per disengagement has risen sharply over the last year, Google’s report showed. For example, in the fourth quarter of 2015, its cars covered about 5,200 miles per disengagement, while in the same period a year earlier it was just 750 miles.
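The figures above are easy to sanity-check. A quick sketch, using only the numbers reported in the article (the 424,000-mile total is a “more than” figure, so the overall rate is approximate):

```python
# Disengagement-rate arithmetic from the figures in Google's DMV filing.
total_miles = 424_000      # miles covered, Sept 2014 - Nov 2015 (reported as "more than")
disengagements = 341       # total disengagements over the same period

# Overall rate across the whole reporting period
miles_per_disengagement = total_miles / disengagements
print(round(miles_per_disengagement))   # roughly 1,243 miles per disengagement

# Quarter-on-quarter improvement cited in the report
q4_2015 = 5_200            # miles per disengagement, Q4 2015
q4_2014 = 750              # miles per disengagement, Q4 2014
print(round(q4_2015 / q4_2014, 1))      # roughly a 6.9x improvement in one year
```

The gap between the ~1,243-mile average and the 5,200-mile Q4 2015 figure reflects how heavily the early months of the reporting period weigh on the overall rate.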
In a message posted on Tuesday, Chris Urmson, director of Google’s self-driving car program, said his team was also using “many other metrics and methodologies that will be useful for establishing our safety record over time.”
These, he wrote, include a test track where it runs “tests that are designed to give us extra practice with rare or wacky situations.” Urmson continued, “Our powerful simulator generates thousands of virtual testing scenarios for us; it executes dozens of variations on situations we’ve encountered in the real world by adjusting parameters such as the position and speed of our vehicle and of other road users around us.”
This, he explained, helps his team to understand how its self-driving technology would’ve handled the same situation under slightly different circumstances, describing it as “valuable preparation for a public road environment in which fractions of seconds can be of critical importance.”
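The parameter-sweep idea Urmson describes can be illustrated with a minimal sketch. Everything here is hypothetical (the scenario fields and offset values are invented for illustration; Google has not published its simulator's interface); it only shows the general technique of taking one recorded situation and generating many variants by perturbing position and speed:

```python
import itertools

def generate_variations(base_scenario, position_offsets_m, speed_offsets_mps):
    """Yield copies of a recorded scenario with perturbed position/speed values.

    Hypothetical sketch: 'other_vehicle_position_m' and 'other_vehicle_speed_mps'
    are invented field names, not anything from Google's actual simulator.
    """
    for dp, dv in itertools.product(position_offsets_m, speed_offsets_mps):
        variant = dict(base_scenario)  # shallow copy of the flat scenario dict
        variant["other_vehicle_position_m"] += dp
        variant["other_vehicle_speed_mps"] += dv
        yield variant

# One recorded situation: another road user 30 m away, moving at 12 m/s.
base = {"other_vehicle_position_m": 30.0, "other_vehicle_speed_mps": 12.0}

# Sweep position by +/- 5 m and speed by +/- 2 m/s around the recorded values.
variants = list(generate_variations(base, [-5, 0, 5], [-2, 0, 2]))
print(len(variants))  # 3 x 3 = 9 variations of the single recorded situation
```

Scaled up across thousands of recorded situations and finer-grained offsets, this kind of sweep is how a simulator can turn one real-world encounter into “dozens of variations,” as the post describes.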
Urmson ended his post by acknowledging his team still has much to do, saying, “Although we’re not quite ready to declare that we’re safer than average human drivers on public roads, we’re happy to be making steady progress toward the day we can start inviting members of the public to use our cars.”