With fewer than 100 days before the U.S. presidential election, Microsoft announced it has developed a new way to combat disinformation on the internet, including a new system of detecting deepfakes — synthetic audio or video that mimics a real recording.
Microsoft said Tuesday it is launching the “Microsoft Video Authenticator,” which it says can analyze photos and videos to provide a confidence score indicating whether the media has been manipulated. The authenticator will either alert people when an image is likely fake or assure them when it’s authentic, Microsoft said.
“The fact that they [the deepfakes] are generated by A.I. that can continue to learn makes it inevitable that they will beat conventional detection technology,” the company said in a statement. “However, in the short run, such as the upcoming U.S. election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes.”
Microsoft said the new software was built in partnership with the Defending Democracy Program, which fights disinformation, protects voting, and secures campaigns.
Tech and privacy advocates have been sounding the alarm for several years about the rise of deepfakes and their political implications, as the technology has become noticeably harder to detect. Some companies have even started offering deepfake services, ostensibly for entertainment purposes.