
Google execs say we need a plan to stop A.I. algorithms from amplifying racism

 

Two Google executives said Friday that bias in artificial intelligence is hurting already marginalized communities in America, and that more needs to be done to prevent it. X. Eyeé, outreach lead for responsible innovation at Google, and Angela Williams, policy manager at Google, spoke at the (Not IRL) Pride Summit, an event organized by Lesbians Who Tech & Allies, the world’s largest technology-focused LGBTQ organization for women, non-binary, and trans people.


In separate talks, they addressed the ways in which machine learning technology can be used to harm the black community and other communities in America — and more widely around the world.


https://twitter.com/TechWithX/status/1276613096300146689

Williams discussed the use of A.I. for sweeping surveillance, its role in over-policing, and its implementation for biased sentencing. “[It’s] not that the technology is racist, but we can code in our own unconscious bias into the technology,” she said. Williams highlighted the case of Robert Julian-Borchak Williams, an African American man from Detroit who was recently wrongly arrested after a facial recognition system incorrectly matched his photo with security footage of a shoplifter. Previous studies have shown that facial recognition systems can struggle to distinguish between different black people. “This is where A.I. … surveillance can go terribly wrong in the real world,” Williams said.

X. Eyeé also discussed how A.I. can help “scale and reinforce unfair bias.” In addition to the more quasi-dystopian, attention-grabbing uses of A.I., Eyeé focused on the ways bias can creep into seemingly mundane, everyday uses of technology — including Google’s own tools. “At Google, we’re no stranger to these challenges,” Eyeé said. “In recent years … we’ve been in the headlines multiple times for how our algorithms have negatively impacted people.” For instance, Google has developed a tool for classifying the toxicity of online comments. While this can be very helpful, it has also proved problematic: Phrases like “I am a black gay woman” were initially classified as more toxic than “I am a white man.” The cause was a gap in the training data sets, which contained more conversations about certain identities than others.
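The failure mode Eyeé describes, where benign identity statements are scored as toxic because of skewed training data, can be checked with a simple counterfactual probe. The sketch below is only an illustration of that idea: score_toxicity is a hypothetical stand-in for whichever classifier is being audited (the talk does not describe the interface of Google's actual tool), and the sentences compared differ only in the identity terms they mention.

```python
# A minimal sketch of an identity-term bias probe for a toxicity classifier.
# `score_toxicity` is a hypothetical stand-in for whatever model or API is
# being audited; it is assumed to return a toxicity score in [0.0, 1.0].

from typing import Callable


def probe_identity_bias(score_toxicity: Callable[[str], float]) -> None:
    # Benign self-descriptions that differ only in the identity terms used,
    # including the two phrases quoted in the talk.
    templates = [
        "I am a black gay woman",
        "I am a white man",
        "I am a trans person",
        "I am a straight person",
    ]
    scores = {text: score_toxicity(text) for text in templates}
    for text, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{score:.2f}  {text}")
    # If sentences mentioning marginalized identities consistently score higher
    # than otherwise identical ones, the training data likely contains more
    # (and more toxic) conversations about those identities -- the gap
    # described above.


# Example with a dummy scorer; replace the lambda with a real classifier call.
if __name__ == "__main__":
    probe_identity_bias(lambda text: 0.5)
```

Because the templates are identical apart from the identity terms, any systematic difference in scores points at the model and its training data rather than at the content of the sentences themselves.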

There are no overarching fixes to these problems, the two Google executives said. Wherever problems are found, Google works to iron out bias. But the scope of potential places where bias can enter systems — from the design of algorithms to their deployment to the societal context in which data is produced — means that there will always be problematic examples. The key is to be aware of this, to allow such tools to be scrutinized, and for diverse communities to be able to make their voices heard about the use of these technologies.

Luke Dormehl
Former Digital Trends Contributor
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…