Amazon announced it would no longer allow police to use its Rekognition facial recognition technology, and IBM pledged it would back away from doing any further development of the technology.
The moves came as the tools have been criticized for inaccuracies and potential misuse as a mass surveillance technique.
Amazon wrote in its announcement that its decision was directly inspired by activists’ push to ban police from using facial recognition. In IBM’s statement, the company wrote it “firmly opposes and will not condone uses of any technology, including facial recognition technology … for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency.”
But some cybersecurity experts aren’t sold — and the decisions will likely do little to slow the flood of facial recognition technology in the future.
“This just opens up a whole bunch of questions, like, have they been involved with mass surveillance software before?” asked David Harding, the chief technical officer of the cybersecurity company ImageWare. “Why didn’t it violate their trust and transparency principles prior to this? It seems to be very odd. There’s a lot to unpack here.”
George Brostoff, the CEO of 3D face-scanning technology firm SensibleVision, told Digital Trends that he believes there may have been an ulterior motive behind the decisions.
“As in many things, [the] IBM announcement is likely more complex than it appears on the surface,” he said, suggesting that it was unlikely IBM had much stake in facial recognition in the first place.
IBM did not respond to a request for comment as to how much such a move would affect their bottom line, or how much they had been investing in facial recognition research and development.
Both Brostoff and Harding noted that IBM is not necessarily a noted government contractor in the world of facial surveillance.
“A lot of the companies that are doing [facial recognition technology] at the government scale are companies no one’s heard of,” said Ben Goodman, senior vice president of cybersecurity firm ForgeRock. “These companies don’t have to worry about reputational risk. Amazon and IBM are big names, they have more at risk.”
Goodman questioned what kind of return a company such as Amazon was getting right now on facial recognition technology development.
“They really need to think about whether it’s worthwhile to impede people’s privacy. Are you getting enough of a return on that?” he said. “Look at Clearview A.I. [the FRT company that was revealed to be scraping social media and selling software to law enforcement], which brazenly talked about what they were doing, and they were destroyed. I’m sure they’re not the only ones doing this, but that shows what the public’s mood is.”
And while facial surveillance has been pilloried for its notorious inaccuracy, rumored use as a police tool during protests, and potential violations of privacy rights, the technology is nonetheless likely to start cropping up as a part of everyday life very soon.
Public versus private facial recognition technology
“The government should never have access to [facial recognition technology] and it’s not compatible with a democratic society,” said Saira Hussain, a staff attorney at the Electronic Frontier Foundation. “It infringes on our First and Fourth Amendment rights.”
But the proliferation of facial recognition technology in the private sphere might be inevitable as the technology becomes easier to use. Brostoff predicted that within two years, using Face ID or its equivalent would become standard practice for everyday tasks like checking out at a store or checking in at the airport.
When it comes to the private sphere, Hussain told Digital Trends that she fears corporations will develop this technology without considering the ethical implications. If, for example, the hotel industry starts adopting facial recognition technology, there must be a way for people to opt in, she said, rather than simply making it ubiquitous.
“There should always be a way for hotels to check in someone who doesn’t want to opt into the system,” she said.
Harold Li, vice president of ExpressVPN, wrote in an email to Digital Trends that he envisioned the rollout beginning with workplace applications for employees, “who are less inclined or able to reject this tech, whether it’s for clocking in, or having contractors verify their identities.”
This would be followed by consumer-directed applications — which are already in place in some countries.
“We’re already seeing this in less privacy-conscious parts of the world, such as China, where supermarkets and subway stations alike are enabling people to pay by scanning their face,” he wrote. “In Singapore, some trials for hotel check-in by face have begun as well. While this doesn’t seem to have made its way to the U.S. just yet, surveillance-heavy stores like Amazon Go may begin to normalize the trading off of privacy for convenience.”
“It’s sort of like the unstoppable force meeting the immovable object,” said Goodman. “Obviously there’s a spy factor and the creep factor and the privacy factor, but there’s also the convenience factor. It means I can go touchless in an airport. It means I can board a plane without putting my phone on a surface that ten other people just touched.”
An ethical system
If experts are right and the spread of facial recognition technology is inevitable, how do we make sure we get there without tearing through everyone’s privacy along the way?
“We shouldn’t forget that we are being tempted to give up privacy for perceived benefits,” said Gabrielle Hermier, media officer at Surfshark. “The questions of users’ privacy, consent, and FRT gender and race bias are central to the debate and should be addressed first. FRT vendors like Amazon or Microsoft and its users, including law enforcement agencies and airports, should share a responsibility to ensure that FRT is not biased.”
The question of bias is practically married to the question of FRT. As Tom Chivers of the United Kingdom-based ProPrivacy pointed out, “The potential for abuse is far too high. Studies into facial recognition have shown an 81% failure rate for face-matching,” he wrote in a message to Digital Trends, referring to what researchers in the U.K. found when they tested the Metropolitan Police’s facial recognition technology.
That study found the technology incorrectly identified innocent people at an astronomically high rate, according to Sky News.
Li agreed that consent and transparency are key. “Do not make such technology mandatory. Let users opt-in rather than opt-out,” he wrote, echoing Hussain’s concerns. He also said a recent ExpressVPN poll found that 68% of U.S. adults are concerned about the growing ubiquity of facial recognition technology. Based on that alone, pushback and legal challenges are almost certainly inevitable as well.
“We are certainly not there yet law-wise,” Goodman said. “Sadly, our legal frameworks seem to be a lagging indicator of people’s emotions about this. You’ll probably need some compelling event to occur around FRT before anyone pays attention to it.”
At the end of the day, this is your face we’re talking about — and people will need to be able to control access to their own faces, advocates said.
“If there’s not a meaningful process by which people can opt-in and have the ability to make that decision for themselves, then it’s not ethical,” Hussain said. “When you’re talking about a biometric as private as your face, which you can’t cover up in public the same way you can your hands, that’s an erosion of privacy.”