The pandemic has shown that facial recognition systems such as Apple’s Face ID were not built to recognize people wearing masks. Now the National Institute of Standards and Technology, the government body responsible for assessing such systems, has backed that up with more conclusive evidence and says it is exploring new models designed to handle masked faces.
In its latest study, NIST found that masks can significantly degrade facial recognition algorithms’ accuracy, raising the error rate to as much as 50% — even for some of the best and most widely used commercial platforms. The report adds that the systems performed worse when the mask was black and worn higher up on the nose.
“With the arrival of the pandemic, we need to understand how face recognition technology deals with masked faces,” NIST computer scientist Mei Ngan said in the report. “We have begun by focusing on how an algorithm developed before the pandemic might be affected by subjects wearing face masks.”
Unlike the controversial facial recognition systems that are being employed, for instance, to identify protesters, this NIST study focuses on one-to-one algorithms, which try to determine whether two photos show the same person. Such systems are generally found at immigration checkpoints for passport verification or on modern phones.
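In broad strokes, a one-to-one verification system converts each face photo into a numerical embedding and declares a match when the two embeddings are close enough. The sketch below illustrates that idea only; the embeddings, threshold, and `verify` function are hypothetical stand-ins, not any vendor's or NIST's actual method.

```python
import numpy as np

def verify(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.6) -> bool:
    """Toy one-to-one check: match if the cosine distance between
    two face embeddings falls below a threshold (values illustrative)."""
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    cosine_distance = 1.0 - float(np.dot(a, b))
    return cosine_distance < threshold

# Toy vectors standing in for a real model's embeddings.
same_person = verify(np.array([0.9, 0.1, 0.2]), np.array([0.88, 0.12, 0.21]))
different_people = verify(np.array([0.9, 0.1, 0.2]), np.array([-0.5, 0.8, 0.1]))
```

A mask, by occluding much of the face, shifts the masked photo's embedding away from the unmasked reference, which is one intuition for why error rates climb.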
Ngan adds that for its next round of studies, the team is examining facial recognition technology engineered with masked faces in mind, as well as one-to-many searches. “Later this summer, we plan to test the accuracy of algorithms that were intentionally developed with masked faces in mind,” she said.
Masked faces have been a topic of concern for federal agencies that depend on facial recognition tech. In a recent internal bulletin (via The Intercept), the U.S. Department of Homeland Security expressed worry over “potential impacts that widespread use of protective masks could have on security operations that incorporate face recognition systems.” Notably, the NIST study was drafted in collaboration with the Department of Homeland Security’s Science and Technology Directorate, and Customs and Border Protection.
However, the rampant use of facial recognition systems across the United States and the growth of controversial startups such as Clearview A.I. have faced pushback from both privacy advocates and tech giants. Over the last few months, companies such as Microsoft and IBM have pledged to step back from facial recognition systems entirely.