There is no question that facial-recognition technology is getting better. But what if a person tries to purposely obscure their identity by sporting a fake beard or giant sunglasses? Up until now, that has been a lot harder for even smart facial-recognition systems to deal with.
This is the problem that new technology developed by researchers in India and the U.K. aims to address. Engineers at India’s National Institute of Technology and Institute of Science and the U.K.’s University of Cambridge have developed a facial-recognition framework that can identify even people who actively obscure their faces.
“This system can be used to identify a person even if they are disguised,” Amarjot Singh, from the University of Cambridge, told Digital Trends. “This can be used to identify criminals trying to disguise their appearance to avoid law enforcement. The problem of Disguise Face Identification (DFI) is an extremely challenging and interesting problem that is of great interest to law enforcement — as they can use this technology to identify criminals.”
The deep learning-based system works by identifying 14 key points on the face: 10 for the eyes, three for the lips, and one for the nose. It can estimate these points even when they are partially obscured, then compares the resulting layout against a gallery of images to find a match. In early tests, the system found the right person 56 percent of the time when the face was covered with a hat or scarf, although accuracy dropped to 43 percent when the subject also wore glasses. Those figures are not going to be considered evidence in a court of law anytime soon, but they could certainly help police narrow down a search.
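The matching step described above can be sketched in miniature. This is a hypothetical illustration, not the authors' actual method: the paper uses a deep network to estimate the 14 keypoints, and the function names, the normalization scheme, and the simple nearest-neighbor metric here are all assumptions made for clarity.

```python
import numpy as np

# The system reduces each face to 14 (x, y) keypoints:
# 10 around the eyes, 3 on the lips, 1 on the nose.
NUM_KEYPOINTS = 14

def normalize(points):
    """Center the keypoints and scale them to unit norm so the
    comparison is invariant to face position and size in the image.
    (Assumed normalization; the paper may differ.)"""
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.mean(axis=0)      # translate centroid to origin
    scale = np.linalg.norm(pts)
    return pts / scale if scale > 0 else pts

def match(probe, gallery):
    """Return the gallery identity whose keypoint layout is closest
    to the probe's, by Euclidean distance after normalization.
    `gallery` maps identity -> list of 14 (x, y) keypoints."""
    p = normalize(probe)
    best_id, best_dist = None, float("inf")
    for identity, points in gallery.items():
        d = np.linalg.norm(p - normalize(points))
        if d < best_dist:
            best_id, best_dist = identity, d
    return best_id, best_dist
```

In practice the estimated keypoints for a disguised face are noisy, which is why the reported accuracy sits around 56 percent rather than near-perfect: the closest gallery layout is not always the right person.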
“This, in my opinion, is the first AI-based work that solved the problem of DFI with a reasonable accuracy,” Singh continued. “The datasets developed by us were essential in solving this task. We hope that more researchers can use the proposed data set to develop strong AI models that can perform better on this task or can expand that dataset to include more disguises. Overall, this work will get the ball rolling.”
Next, he notes that the team is trying to get the technology to function in real time with less computational power. “After that, the next step would be to deploy it on cameras to see how well it performs,” he continued.
From a computer science perspective, it is impressive stuff. In terms of what it means for potentially authoritarian surveillance, we’re not convinced things are quite so clear-cut. In the meantime, a paper describing the work will be presented in October at the Institute of Electrical and Electronics Engineers International Conference on Computer Vision Workshop in Venice, Italy.