Google tests cars that drive themselves

The cars use artificial intelligence, as reported by The New York Times, to make themselves aware of obstacles and respond to road conditions as a normal driver would.

The cars have always had a technician behind the wheel in case the computer malfunctioned, but the tests have produced surprising results. Google drove seven cars 1,000 miles with zero human assistance and 140,000 miles with only occasional human intervention. There has been one accident: one of the test cars was rear-ended at a light.

As The New York Times reports, “Robot drivers react faster than humans, have 360-degree perception and do not get distracted, sleepy or intoxicated, the engineers argue. They speak in terms of lives saved and injuries avoided — more than 37,000 people died in car accidents in the United States in 2008. The engineers say the technology could double the capacity of roads by allowing cars to drive more safely while closer together. Because the robot cars would eventually be less likely to crash, they could be built lighter, reducing fuel consumption. But of course, to be truly safer, the cars must be far more reliable than, say, today’s personal computers, which crash on occasion and are frequently infected.”

The cars can be programmed to mimic our own driving styles, driving more cautiously or more aggressively depending on the settings the driver chooses.

Google’s willingness to investigate future technologies is nothing new for the company. And while it may not have a clear business model for capitalizing on the technology if it reaches the public, it could conceivably sell navigational software to complement an autonomous vehicle.

Does this mean we’ll be seeing autonomously driven cars in the future? Right now it’s too early to say, but Google is committed to making the technology safe and pushing the boundaries of what man and machine can achieve.
