
Google tests cars that drive themselves

As reported by The New York Times, the cars use artificial intelligence to detect obstacles and respond to road conditions as a normal driver would.

Each car always has a technician behind the wheel in case the computer malfunctions, but the tests have produced surprising results: Google’s seven test cars have driven 1,000 miles with zero human assistance and 140,000 miles with only occasional human intervention. There has been just one accident, when one of the test cars was rear-ended at a light.

As The New York Times reports, “Robot drivers react faster than humans, have 360-degree perception and do not get distracted, sleepy or intoxicated, the engineers argue. They speak in terms of lives saved and injuries avoided — more than 37,000 people died in car accidents in the United States in 2008. The engineers say the technology could double the capacity of roads by allowing cars to drive more safely while closer together. Because the robot cars would eventually be less likely to crash, they could be built lighter, reducing fuel consumption. But of course, to be truly safer, the cars must be far more reliable than, say, today’s personal computers, which crash on occasion and are frequently infected.”

The cars can also be programmed to mimic individual driving styles, behaving more cautiously or more aggressively depending on the settings the driver chooses.

Google’s willingness to investigate future technologies is nothing new for the company. And while it may not yet have a clear business model for capitalizing on the technology if it reaches the public, it could plausibly sell navigation software to complement an autonomous vehicle.

Does this mean we’ll be seeing autonomously driven cars in the future? It’s too early to say, but Google is committed to making the technology safe and to pushing the boundaries of what man and machine can achieve.

Laura Khalil, Former Digital Trends Contributor