Ask any celebrity what the biggest downside of being famous is and you will likely hear some variation on how difficult it is to go anywhere without being recognized. For better or worse (and there are plenty of instances of both), having a famous face means continually attracting the attention of those around you.
To some extent, all of us are celebrities in 2019. We’re not all rich and famous, with personal stylists and screaming fans, but we are known in a way that would have been impossible in past decades. Social media encourages us to curate our lives, turning even something as humdrum as eating a meal into an envy-inspiring narrative to be “liked” by our “followers.” Almost all of us are discoverable using Google, nearly always accompanied by photos. And, just as it’s difficult for Robert Downey Jr. or Taylor Swift to walk down the street without being swarmed, increasingly our own faces will mean that we are identified tens, hundreds, or maybe even thousands of times each day.
Welcome to the world of facial recognition, in which our most identifiable and public-facing feature, our face, can be ID’d in a fraction of a second by the growing number of A.I.-equipped cameras around us, from security systems to the ones on our smartphones. The simultaneous promise and threat is profound. As the academic Jenny Edkins observes in her book, Face Politics, it means that none of us is truly anonymous any more. “Our faces,” she writes, “will be decisively pinned to our identities and produced as available for discipline and control.” And a whole lot more, too.
The day that changed everything
On a seemingly ordinary Tuesday morning in Portland, Maine, two young men, one in his early thirties, the other a decade younger, passed through a security checkpoint in the airport. The moment was captured, as were hundreds of thousands of other moments that day, by surveillance video cameras. To the airport security team whose job it was to watch the footage, nothing appeared amiss about either man. No concerns were raised and both 33-year-old Mohamed Atta and 22-year-old Abdulaziz al-Omari proceeded, unimpeded, to catch their connecting flight.
To security experts, the grainy footage of the men remains one of the most terrible images recorded on September 11, 2001. Unlike the most widely viewed photographs taken that day, its horror does not lie in what it shows of perhaps the most significant foreign attack on American soil. Rather, it is terrible because it captures the point at which something could still have been done to prevent the attack. Had the technology existed to identify Atta and al-Omari as possible terror suspects, airport security could have been alerted and thousands of lives might have been saved.
There are few more compelling elevator pitches for new technologies than this. While we can’t ever know whether a well-placed bit of smart facial recognition technology really could have stopped 9/11, it was enough to galvanize an entire industry. New levels of interest, primarily driven by security, combined with technological breakthroughs to trigger a wave of innovative new companies ready to capitalize. Almost two decades on, security remains one of the biggest markets for facial recognition.
“If an investigator has an image that comes from a camera or surveillance footage — the newest addition to this is footage taken on the street by passers-by with their smartphones — [we] can compare it to existing databases,” Elke Oberg, Marketing Manager for Cognitec Systems, a facial recognition company founded in 2002, told Digital Trends. “That could be booking databases, databases of people in prison, whatever people have available in that country [they can use]. They will then receive a list of candidates and look at the pictures and get their experts to decide if this will move their investigation forward.”
In addition to still images, Oberg said that modern facial recognition technology — including Cognitec’s — can now readily identify people in real-time on live video. “It’s no use to you if the search of the database takes a long time,” she continued. “The person will be gone. We recommend a database of around 10,000 people for this use-case to make sense.”
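The workflow Oberg describes is, at its core, a one-to-many search: a probe face is reduced to a numerical "embedding" and compared against every face in a gallery, returning a ranked candidate list for human experts to review. Here is a minimal sketch of that ranking step using toy embeddings and cosine similarity; Cognitec's actual pipeline is proprietary, and all names and values below are invented for illustration.

```python
import numpy as np

def rank_candidates(probe, gallery, top_k=3):
    """Rank gallery identities by cosine similarity to the probe embedding."""
    names = list(gallery)
    mat = np.array([gallery[n] for n in names], dtype=float)
    # Normalize so that dot products become cosine similarities.
    probe = np.asarray(probe, dtype=float)
    probe = probe / np.linalg.norm(probe)
    mat = mat / np.linalg.norm(mat, axis=1, keepdims=True)
    scores = mat @ probe
    # Sort from most to least similar, keep the top_k candidates.
    order = np.argsort(scores)[::-1][:top_k]
    return [(names[i], float(scores[i])) for i in order]

# Toy 4-dimensional "embeddings"; a real system would use a neural
# network producing vectors of 128-512 dimensions per face.
gallery = {
    "candidate_a": [0.9, 0.1, 0.0, 0.2],
    "candidate_b": [0.1, 0.8, 0.3, 0.0],
    "candidate_c": [0.0, 0.2, 0.9, 0.1],
}
probe = [0.85, 0.15, 0.05, 0.25]
for name, score in rank_candidates(probe, gallery):
    print(f"{name}: {score:.3f}")
```

The key property is that the search returns a ranked shortlist rather than a single yes/no answer, which is why, as Oberg notes, human experts still make the final call.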
It is not just airports that are using this kind of technology. Facial recognition is being deployed in all kinds of public settings. Amazon, for instance, has sold its real-time “Rekognition” facial recognition technology to police departments in the U.S. In China, meanwhile, facial recognition has reportedly been used to pick a single suspect out of a packed 50,000-person crowd at a concert, and to identify jaywalkers and send them an automated fine via text message.
Picking out the good folks
But facial recognition isn’t just about picking out the bad guys. If September 11, 2001 was a significant date in the field’s history because of technology that didn’t exist, then September 12, 2017 was important because of what did. This was the date that Apple first showed off its iPhone X, the first iPhone users could unlock with Face ID, Apple’s marketing term for its facial recognition system.
“There’s definitely a shift that started with Apple using face toward commercial applications,” Oberg said. “The problem before the iPhone was that, yes, you were able to do facial recognition for device access, but you could easily spoof it. If you had an image or video [of the person, you could hack their device.] Apple was really able to find technology to make it truly safe.”
Face ID represents the flip side of the facial recognition coin. Along with the automated tagging of individuals on Facebook, it helped showcase consumer applications of facial recognition.
This is just the beginning. Imagine sitting in the driver’s seat of a car shared by multiple drivers and having it immediately recognize you and adjust its settings to your personal profile. Or picture checking into a hotel and immediately being flagged as a frequent customer of the chain, even if you’ve never stayed at that particular branch before.
Or going to the bank and immediately finding that the teller has your name and account details, and has some idea of your most common reasons for visiting. There won’t be much need to imagine for too much longer. All of these are areas that companies are actively investigating.
“It’s kind of like a preferred customer program, but using biometrics instead of some other token like a card,” Oberg said.
She also gave the example of a seamless journey through the airport, no longer requiring people to endlessly line up to show their boarding card to get through security. “It [can all be] done with cameras that are looking at your face, which you have pre-registered, either at home or right as you enter the airport,” she said. “There’s then a small database which is kept, only of the people who are in the airport, and then you can complete your journey through the airport only using your face. When you board the airplane, your biometric data is removed. This development is just booming right now.”
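The airport flow Oberg describes amounts to a short-lived gallery: travelers opt in by pre-registering, are matched at each checkpoint, and have their biometric record deleted once they board. A toy sketch of that lifecycle follows; the class, the method names, and the use of a single number as a stand-in for a face template are all invented for illustration.

```python
class AirportGallery:
    """Holds biometric templates only for travelers currently in the airport."""

    def __init__(self):
        self._templates = {}  # traveler name -> face template

    def enroll(self, name, template):
        # Traveler pre-registers, either at home or on entering the airport.
        self._templates[name] = template

    def match(self, captured):
        # Checkpoint camera: return the enrolled traveler whose template is
        # closest to the captured one (toy metric: absolute difference).
        if not self._templates:
            return None
        return min(self._templates, key=lambda n: abs(self._templates[n] - captured))

    def board(self, name):
        # On boarding, the biometric data is removed, as Oberg describes.
        self._templates.pop(name, None)

gallery = AirportGallery()
gallery.enroll("alice", 0.42)
print(gallery.match(0.40))   # alice clears security using only her face
gallery.board("alice")
print(gallery.match(0.40))   # None: her template no longer exists
```

The design choice worth noting is the deliberately small, temporary database: matching is fast because it only ever searches people currently in the terminal, and deletion at boarding limits how long biometric data exists at all.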
And, of course, there are advertising possibilities. In the 2002 movie Minority Report, video billboards adapt to show personalized ads to whoever is walking by. In real life, a number of firms have investigated similar technology. (Including, as we recently wrote about, a Japanese taxi firm.) While the focus is less on identifying individuals than on recognizing broad strokes like age, gender, and even mood, these advertising boards could show different ads depending on who is viewing them at the time.
Is the tradeoff worth it?
Facial recognition is controversial. There’s no getting around it. Of all the available biometric technologies (and there are plenty of them), none carries the same baggage as automated facial recognition. Perhaps part of it is historical. Long before modern facial recognition allowed us to link faces with actual identities, nineteenth-century researchers like the psychiatrist Hugh Welch Diamond and the eugenicist Francis Galton advanced quasi-scientific theories about facial indicators for everything from insanity to criminality. These biologically determinist views helped to justify plenty of racist and classist theories in the years that followed.
The inexactness of some of today’s facial recognition tools is a very different problem, but for many people that undertone of prejudice is evoked whenever we hear, for instance, that a facial recognition system is more likely to misclassify people of color than white people.
Perhaps the biggest concern is that facial recognition can, in theory, be used — and acted upon — whether we are aware of it or not. Recently, an 18-year-old from New York, Ousmane Bah, filed a $1 billion lawsuit against Apple over what he claims was a false arrest caused by facial recognition technology in Apple Stores. The lawsuit called this “the type of Orwellian surveillance that consumers fear, particularly as it can be assumed that the majority of consumers are not aware that their faces are secretly being analyzed.” Apple responded to the lawsuit by saying that it does not use facial recognition in its stores.
If true, this would appear to damage Bah’s case. But it does not affect the wider point about facial recognition. As Kelly Gates, author of Our Biometric Future: Facial Recognition and the Culture of Surveillance, writes: “The prevalent myth of inevitability surrounding this … performs an important role in their institutionalization, and … encourages public acquiescence.” In short, like Jeremy Bentham’s imagined eighteenth-century prison, the Panopticon, in which prisoners cannot see the guards but assume they are watching, the mere threat of facial recognition regulates our behavior.
This may not remain the case, however. “It might be a generational issue,” Oberg said. “Young people don’t care, so long as it’s convenient and fast. They don’t think about privacy and data protection that much. The older generation are a little more careful and really want to know what happens to this data.”
Will a bevy of positive use cases be enough to offset people’s concerns about mass surveillance technology? Will the possibility of averting another 9/11-style terrorist attack be sufficient for people to agree to being publicly scanned wherever they go? Like the celebrity question of whether a person is happy to give up anonymity for the benefits that accompany being famous, it’s a tradeoff that will likely differ from person to person. “While there are more and more applications that are being considered, I think [there is a growing] awareness that you are giving away data that could possibly link you to other accounts or your social media presence,” Oberg said.
The best we can say for now is to keep watching this space. And in the meantime expect the space to be increasingly watching you back.