
IBM will no longer develop or research facial recognition tech

IBM CEO Arvind Krishna says the company will no longer develop or offer general-purpose facial recognition or analysis software. In a June 8 letter addressed to Congress and written in support of the Justice in Policing Act of 2020, Krishna advocates for new reforms that support the responsible use of technology and combat systemic racial injustice and police misconduct.

“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” wrote Krishna in the letter.

Krishna, who took over the chief executive role in April, added that it’s time for Congress to begin a national dialogue on the implications of facial recognition technology and how it “should be employed by domestic law enforcement agencies.”

The CEO also voiced his concerns about the racial bias often found in artificial intelligence systems today. Krishna called for more oversight and auditing of artificial intelligence tools, especially when they're used in law enforcement, and for national policies that "bring greater transparency and accountability to policing, such as body cameras and modern data analytics techniques."

People familiar with the matter told CNBC that the death of George Floyd, a Black man, while in the custody of Minneapolis police and the attendant focus on police reform and racial inequity convinced IBM to shut down its facial recognition products.

Over the last few years, facial recognition systems have advanced dramatically thanks to developments in fields such as machine learning. However, without any official oversight in place, they've been largely allowed to run unregulated and violate user privacy. Most notably, facial recognition tech was brought to the forefront of the national conversation by a startup called Clearview AI, which built a database of more than 3 billion images primarily by scraping social media sites. Clearview has since faced backlash from companies such as Twitter and is currently dealing with numerous privacy lawsuits.

Clearview AI is also reportedly being employed by law enforcement agencies during the ongoing Black Lives Matter protests across the U.S. Experts have argued that these systems can misidentify people, as they're largely trained on datasets dominated by white male faces.

Krishna didn’t say whether the company would reconsider its decision if and when Congress introduces new laws to bring more scrutiny to technology such as facial recognition. We’ve reached out to IBM and will update this story when we hear back.

Shubham Agarwal
Shubham Agarwal is a freelance technology journalist from Ahmedabad, India. His work has previously appeared in Firstpost…