Google execs say we need a plan to stop A.I. algorithms from amplifying racism

Two Google executives said Friday that bias in artificial intelligence is hurting already marginalized communities in America, and that more needs to be done to ensure this does not happen. X. Eyeé, outreach lead for responsible innovation at Google, and Angela Williams, policy manager at Google, spoke at the (Not IRL) Pride Summit, an event organized by Lesbians Who Tech & Allies, the world’s largest technology-focused LGBTQ organization for women and nonbinary and trans people.

In separate talks, they addressed the ways in which machine learning technology can be used to harm the black community and other communities in America — and more widely around the world.

https://twitter.com/TechWithX/status/1276613096300146689

Williams discussed the use of A.I. for sweeping surveillance, its role in over-policing, and its use in biased sentencing. “[It’s] not that the technology is racist, but we can code in our own unconscious bias into the technology,” she said. Williams highlighted the case of Robert Julian-Borchak Williams, an African American man from Detroit who was recently wrongly arrested after a facial recognition system incorrectly matched his photo with security footage of a shoplifter. Previous studies have shown that facial recognition systems can struggle to distinguish between different black people. “This is where A.I. … surveillance can go terribly wrong in the real world,” Williams said.

X. Eyeé also discussed how A.I. can help “scale and reinforce unfair bias.” Beyond the more quasi-dystopian, attention-grabbing uses of A.I., Eyeé focused on the ways bias can creep into seemingly mundane, everyday uses of technology — including Google’s own tools. “At Google, we’re no stranger to these challenges,” Eyeé said. “In recent years … we’ve been in the headlines multiple times for how our algorithms have negatively impacted people.” For instance, Google developed a tool for classifying the toxicity of online comments. While often helpful, it was also problematic: Phrases like “I am a black gay woman” were initially classified as more toxic than “I am a white man.” This was due to gaps in the training data, which contained more conversations about some identities than others.

There are no overarching fixes to these problems, the two Google executives said. Wherever problems are found, Google works to iron out bias. But the scope of potential places where bias can enter systems — from the design of algorithms to their deployment to the societal context under which data is produced — means that there will always be problematic examples. The key is to be aware of this, to allow such tools to be scrutinized, and for diverse communities to be able to make their voices heard about the use of these technologies.

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…