Two Google executives said Friday that bias in artificial intelligence is hurting already marginalized communities in America, and that more needs to be done to ensure this does not happen. X. Eyeé, outreach lead for responsible innovation at Google, and Angela Williams, policy manager at Google, spoke at the (Not IRL) Pride Summit, an event organized by Lesbians Who Tech & Allies, the world’s largest technology-focused LGBTQ organization for women, non-binary, and trans people.
Bias in algorithms IS NOT JUST A DATA PROBLEM. The choice to use AI can be biased, the way the algorithm learns can be biased, and the way users are impacted/interact with/perceive a system can reinforce bias! checkout @timnitGebru’s work to learn more!
— X. Eyeé (@TechWithX) June 26, 2020
Williams discussed the use of A.I. for sweeping surveillance, its role in over-policing, and its implementation for biased sentencing. “[It’s] not that the technology is racist, but we can code in our own unconscious bias into the technology,” she said. Williams highlighted the case of Robert Julian-Borchak Williams, an African American man from Detroit who was recently wrongly arrested after a facial recognition system incorrectly matched his photo with security footage of a shoplifter. Previous studies have shown that facial recognition systems can struggle to distinguish between Black individuals. “This is where A.I. … surveillance can go terribly wrong in the real world,” Williams said.
X. Eyeé also discussed how A.I. can help “scale and reinforce unfair bias.” In addition to the more quasi-dystopian, attention-grabbing uses of A.I., Eyeé focused on the way in which bias could creep into more seemingly mundane, everyday uses of technology — including Google’s own tools. “At Google, we’re no stranger to these challenges,” Eyeé said. “In recent years … we’ve been in the headlines multiple times for how our algorithms have negatively impacted people.” For instance, Google has developed a tool for classifying the toxicity of comments online. While this can be very helpful, it was also problematic: Phrases like “I am a black gay woman” were initially classified as more toxic than “I am a white man.” This was due to a gap in training data sets, with more conversations about certain identities than others.
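To see how a training-data gap can produce that kind of skew, consider a deliberately naive, hypothetical toxicity scorer (this is an illustration only, not Google’s actual model or data). If identity terms appear mostly in toxic comments in the training set, a word-frequency model will rate neutral sentences containing those terms as more toxic:

```python
# Hypothetical sketch: a naive per-word "toxicity" scorer trained on
# a tiny, deliberately imbalanced synthetic dataset. Illustrates the
# data-gap effect described above; not Google's actual system.
from collections import defaultdict

# Synthetic training data (label 1 = toxic). Identity terms happen to
# appear only inside toxic comments, mirroring the coverage gap.
train = [
    ("you are terrible", 1),
    ("gay people are awful", 1),
    ("black users are bad", 1),
    ("i love this video", 0),
    ("great point well made", 0),
    ("nice weather today", 0),
]

def word_toxicity(data):
    """Fraction of a word's occurrences that fall in toxic comments."""
    counts = defaultdict(lambda: [0, 0])  # word -> [toxic count, total count]
    for text, label in data:
        for w in text.split():
            counts[w][0] += label
            counts[w][1] += 1
    return {w: toxic / total for w, (toxic, total) in counts.items()}

def score(text, rates):
    """Average per-word toxicity; unseen words get a neutral 0.5."""
    words = text.split()
    return sum(rates.get(w, 0.5) for w in words) / len(words)

rates = word_toxicity(train)
# The identity terms inherit high "toxicity" purely from the skewed data,
# so the first (perfectly benign) sentence scores higher than the second.
print(score("i am a gay black woman", rates))
print(score("i am a white man", rates))
```

The fix in practice is not a different scoring formula but better data coverage: adding non-toxic conversations that mention the under-represented identities pulls those per-word rates back down.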
There are no overarching fixes to these problems, the two Google executives said. Wherever problems are found, Google works to iron out bias. But the scope of potential places where bias can enter systems — from the design of algorithms to their deployment to the societal context under which data is produced — means that there will always be problematic examples. The key is to be aware of this, to allow such tools to be scrutinized, and for diverse communities to be able to make their voices heard about the use of these technologies.