
Is Clearview AI’s facial recognition legal? We need to figure it out soon

No one seems able to figure out whether what Clearview AI is doing is legal, a quandary that has exposed the messy patchwork of laws that allows the exploitation of people’s personal data to proliferate.

As first reported by CNET, Google and YouTube recently sent cease-and-desist letters to Clearview — the controversial company that sells facial recognition technology to law enforcement — over its scraping of their sites for people’s information. Twitter sent one in January, while Facebook is said to be reviewing Clearview’s practices.


Meanwhile, Clearview AI founder Hoan Ton-That claims it is his First Amendment right to collect public photos and use them the way he has. He told CBS News in an interview that, essentially, handing the data of millions of people to the police is entirely legal. Scholars say that is both true and not true. What is clear is that this technology isn’t going away, and these legal questions will need to be sorted out, and soon.

“There have been arguments that there is a First Amendment right to access publicly available information on the internet and scrape it,” said Matthew Kugler, associate professor at Northwestern University. But what happens to that data after you collect it is not necessarily protected.

“We don’t have a binding answer to that,” Kugler told Digital Trends. “It’s within the realm of plausible argument, but Clearview is on shaky ground if they want to say ‘I have this data and I can do whatever I want with it.’”

“It’s not surprising that the CEO of Clearview AI is weaponizing the First Amendment to justify his company’s practice of scraping online photos of people’s faces without their consent,” said Evan Selinger, a professor of philosophy and technology at the Rochester Institute of Technology. “What’s required to properly govern Clearview AI and so much else is a thoroughgoing reexamination of what private and public mean that shifts the key debates in law, ethics, design, and even everyday expectations. In short, it’s long overdue to acknowledge that people can have legitimate privacy interests in publicly shared information.”

A patchwork of laws

As of this writing, there are few, if any, federal statutes governing online privacy. What exists instead is a patchwork of state laws, including those of Virginia, Illinois — under whose law Facebook recently lost a $500 million lawsuit over its photo tagging and facial identification features — and California, which just recently enacted the strongest law so far in the union.

“I have a First Amendment right to expose the private data of millions of people” sounds crazy, but it’s legally tenable.

HiQ used a similar argument in its CFAA case vs. LinkedIn. https://t.co/u3IZMGyeyW

— Tiffany C. Li (@tiffanycli) February 4, 2020

A similar situation was litigated recently when LinkedIn sent a cease-and-desist to the startup hiQ, which sold data on people to their employers based on what hiQ could scrape from LinkedIn. The reasoning was, theoretically, to help businesses keep track of their workforces, Reuters reported at the time.

LinkedIn claimed that this violated its terms of use; hiQ, in turn, said it couldn’t run its business without being able to scrape LinkedIn’s data, setting up a fight between the First Amendment and the 1986 Computer Fraud and Abuse Act, which prohibits unauthorized computer access.

The case ran up against the same privacy versus tech issues now facing Clearview AI: Is it fair to take publicly available data, store it, repackage it, process it, and sell it, or is that a violation of privacy?

Basically, we haven’t yet figured that out. The LinkedIn case ultimately found that scraping is protected and legal, and hiQ is still in business. But, as Albert Gidari put it, this particular case with Clearview raises other tough issues, so it’s unlikely there will be a clear resolution regarding this conduct in the short term.

Gidari, the consulting director of privacy at the Center for Internet and Society at Stanford University, told Digital Trends via email that although Clearview asserts its right to scrape and use photos, “individuals also have a statutory right to their image.” For example: California prohibits unauthorized use of a person’s voice, image, or name for someone else’s benefit (which is clearly what Clearview does). “There is no doubt that these photos fall under the Illinois statute and probably under CCPA [the California law] as biometric use without consent,” he wrote in an email.

However, Clearview also asserts that its use of images is in the public interest, which could be allowed if a judge were to find that argument convincing.

The consequences of eating too much cake


Chris Kennedy, the chief information security officer for cybersecurity firm AttackIQ, told Digital Trends that these are all signs of a reckoning between the information buffet we’ve been enjoying and the privacy ramifications we will soon have to face. “We live in an age of a growing distrust for technology,” he said. “The last 20 years, we’ve had our cake and ate it, too. We had free sharing of information, we enabled e-commerce, and now it’s just become the expectation that you’ll put yourself out there on the internet. We’re paying the price now.”

Basically, Kennedy says, the goodwill built up over the early years of the internet, when people were having their informational cake and eating it too, is starting to erode, partly because there are no clear rules to follow, and therefore no clear expectations about what will happen to your data online. That needs to change, he said.

Kennedy is certain we’re moving in a very pro-Clearview AI direction; that is, the facial recognition genie is out of the bottle, and there’s no way to put it back.

“We can’t slow the pace of technology without significant cultural shifts … and enforceable laws,” he told Digital Trends. “It can’t be this toe-in-the-water stuff like CCPA or GDPR [Europe’s digital privacy law]. It has to be, ‘this is how it is, these are the expectations in the management of your data and information, you must adhere to them or risk the consequences.’ It’s like when a hurricane comes. You leave, or you pay.”

Maya Shwayder