Google CEO Sundar Pichai warns of dangers of A.I. and calls for more regulation

Citing concerns about the rise of deepfakes and the potential abuses of facial recognition technology, Google CEO Sundar Pichai declared in an op-ed in the Financial Times that artificial intelligence should be more tightly regulated. "We need to be clear-eyed about what could go wrong" with A.I., he wrote.

The Alphabet and Google executive wrote about the positive developments that A.I. can bring, such as recent work by Google finding that A.I. can detect breast cancer more accurately than doctors, or Google’s project to use A.I. to more accurately predict rainfall in local areas. But he also warned that “history is full of examples of how technology’s virtues aren’t guaranteed” and that “[t]he internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread.”

To address these concerns, Pichai recommends developing regulatory proposals for the use of A.I., citing the need for regulations to be as international as possible in a globalized world. “To get there, we need agreement on core values,” he wrote. “Companies such as ours cannot just build promising new technology and let market forces decide how it will be used.”

In practical terms, he pointed to existing regulation such as Europe's General Data Protection Regulation as a starting point for future legislation, despite the problems that particular law has caused for Google in the past. He also emphasized that rules around A.I. must take factors like safety and fairness into account when balancing the potential benefits and harms of technological developments.

Pichai raises valid concerns that many people share as technologies such as internet communications, machine learning, and algorithms play an increasingly prominent role in our lives. However, hearing the CEO of a company that has used A.I. to improve the accuracy of military drones and has targeted homeless people to build facial recognition features talk about ethics in this way raises a few eyebrows.

Pichai also says Google wants to be "a helpful and engaged partner to regulators" as they address this issue, offering "our expertise, experience, and tools as we navigate these issues together." However, with big tech companies like Amazon already trying to draft their own legislation around facial recognition, inviting tech giants to take a significant role in regulating their own industry may not be the best way to curtail their ever-growing power.
