IBM will no longer develop or research facial recognition tech

IBM CEO Arvind Krishna says the company will no longer develop or offer general-purpose facial recognition or analysis software. In a June 8 letter addressed to Congress and written in support of the Justice in Policing Act of 2020, Krishna advocates for new reforms that support the responsible use of technology and combat systemic racial injustice and police misconduct.

“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” wrote Krishna in the letter.

Krishna, who took over the chief executive role in April, added that it’s time for Congress to begin a national dialogue on the implications of facial recognition technology and how it “should be employed by domestic law enforcement agencies.”

The CEO also voiced concerns about the racial bias often found in today's artificial intelligence systems. Krishna further called for more oversight to audit artificial intelligence tools, especially when they're used in law enforcement, and for national policies that "bring greater transparency and accountability to policing, such as body cameras and modern data analytics techniques."

People familiar with the matter told CNBC that the death of George Floyd, a Black man, while in the custody of Minneapolis police and the attendant focus on police reform and racial inequity convinced IBM to shut down its facial recognition products.

Over the last few years, facial recognition systems have advanced dramatically thanks to developments in fields such as machine learning. However, without official oversight in place, they've largely been allowed to operate unregulated and violate user privacy. Most notably, facial recognition tech was pushed to the forefront of the national conversation by a startup called Clearview AI, which built a database of more than 3 billion images primarily by scraping social media sites. Clearview has since faced a backlash from companies such as Twitter and is currently dealing with a slew of privacy lawsuits.

Clearview AI is also reportedly being used by law enforcement agencies during the ongoing Black Lives Matter protests across the U.S. Experts have argued that these systems can misidentify people, as they're largely trained on datasets dominated by white male faces.

Krishna didn’t say whether the company would reconsider its decision if and when Congress introduces new laws to bring more scrutiny to technology such as facial recognition. We’ve reached out to IBM and will update this story when we hear back.