
IBM will no longer develop or research facial recognition tech


IBM CEO Arvind Krishna says the company will no longer develop or offer general-purpose facial recognition or analysis software. In a June 8 letter addressed to Congress and written in support of the Justice in Policing Act of 2020, Krishna advocates for reforms that support the responsible use of technology and combat systemic racial injustice and police misconduct.


“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” wrote Krishna in the letter.

Krishna, who took over the chief executive role in April, added that it’s time for Congress to begin a national dialogue on the implications of facial recognition technology and how it “should be employed by domestic law enforcement agencies.”

The CEO also voiced his concerns about the racial bias often found in today's artificial intelligence systems. Krishna further called for more oversight to audit artificial intelligence tools, especially when they're used in law enforcement, and for national policies that "bring greater transparency and accountability to policing, such as body cameras and modern data analytics techniques."

People familiar with the matter told CNBC that the death of George Floyd, a Black man, while in the custody of Minneapolis police and the attendant focus on police reform and racial inequity convinced IBM to shut down its facial recognition products.

Over the last few years, facial recognition systems have advanced dramatically thanks to developments in fields such as machine learning. In the absence of official oversight, however, they have largely run unregulated and violated user privacy. Most notably, facial recognition tech was brought to the forefront of the national conversation by a startup called Clearview AI, which built a database of more than 3 billion images primarily by scraping social media sites. Clearview has since faced a backlash from companies such as Twitter and is currently dealing with numerous privacy lawsuits.

Clearview AI is also reportedly being used by law enforcement agencies during the ongoing Black Lives Matter protests across the U.S. Experts have argued that these systems can misidentify people, as they are largely trained on datasets dominated by white male faces.

Krishna didn’t say whether the company would reconsider its decision if and when Congress introduces new laws to bring more scrutiny to technology such as facial recognition. We’ve reached out to IBM and will update this story when we hear back.

Shubham Agarwal
Shubham Agarwal is a freelance technology journalist from Ahmedabad, India. His work has previously appeared in Firstpost…