
Justice Department proposes rolling back protections for social media platforms

 

The U.S. Department of Justice (DOJ) has proposed rolling back the legal protections that shield social media platforms and other tech companies, a move that could make them legally responsible for what users post on their platforms.


According to the policy document released Wednesday, the changes are intended to push social media platforms like Facebook and Twitter to be clearer about what content is acceptable on their sites and what should be taken down.

Conservatives have long alleged that major tech companies are biased against their voices; tech giants like Facebook and Google have denied these claims.

The DOJ proposal specifically cites the protections of Section 230 of the Communications Decency Act of 1996, which prevent tech companies from being held civilly liable for content their users post.

“This expansive statutory interpretation, combined with technological developments, has reduced the incentives of online platforms to address illicit activity on their services and, at the same time, left them free to moderate lawful content without transparency or accountability,” the proposal reads. “The time has, therefore, come to realign the scope of Section 230 with the realities of the modern internet so that it continues to foster innovation and free speech but also provides stronger incentives for online platforms to address illicit material on their services.”

The proposal would scale back some of these protections, making platforms more responsible for third-party content and requiring them to be fair and consistent about what kind of content is taken down. Platforms would also have to provide reasonable explanations for their moderation decisions.

The DOJ has also requested that companies no longer be allowed to remove merely “otherwise objectionable” content from their sites. Instead, tech companies would only be able to remove content that is “obscene, lewd, lascivious, filthy, excessively violent, harassing,” that violates federal law, or that promotes violence or terrorism.

The plan would also strictly define Section 230’s “good faith” requirement: platforms would need clear terms of use and would have to abide by them, any removed content would have to fall within the more stringent categories of what can be moderated, and users would have to be given notice explaining why their content was taken down.


The DOJ’s legislative plan still has to go through Congress before it can be adopted. 

Earlier in the day, a group of Republican senators introduced their own limits on Section 230 via the Limiting Section 230 Immunity to Good Samaritans Act. The proposed bill would allow users to sue a platform for $5,000 plus attorneys’ fees if they believe it is not “operating in good faith,” for instance by being inconsistent or unfair about what content is acceptable or taken down.

Both proposals come on the heels of President Donald Trump signing an executive order last month aimed at stripping away Section 230’s protections.

Section 230 says: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Trump’s executive order came after Twitter attached a fact-check label to one of his tweets claiming that a mail-in ballot system would promote voter fraud, and it was seen by some critics as retaliation against tech companies that had moderated his comments.

Twitter told Digital Trends it has nothing to share about the DOJ’s proposal. Digital Trends also reached out to Facebook, Instagram, and YouTube for comment on the proposals. We’ll update this story when we hear back.

Google just gave vision to AI, but it’s still not available for everyone

Google has officially announced the rollout of a powerful Gemini AI feature that gives the assistant the ability to see.

The feature first surfaced in March, when Google began showing off Gemini Live, but it has now become more widely available.

This modular Pebble and Apple Watch underdog just smashed funding goals

Both the Pebble Watch and the Apple Watch are due some fierce competition, as a new modular brand, UNA, gains serious backing and excitement.

The UNA Watch is the creation of a Scottish company that wants to give everyone modular control of smartwatch upgrades and repairs.

Tesla, Warner Bros. dodge some claims in ‘Blade Runner 2049’ lawsuit, copyright battle continues

Tesla and Warner Bros. scored a partial legal victory as a federal judge dismissed several claims in a lawsuit filed by Alcon Entertainment, a production company behind the 2017 sci-fi movie Blade Runner 2049, Reuters reports.
The lawsuit accused the two companies of using imagery from the film to promote Tesla’s autonomous Cybercab vehicle at an event hosted by Tesla CEO Elon Musk at Warner Bros. Discovery (WBD) Studios in Hollywood in October of last year.
U.S. District Judge George Wu indicated he was inclined to dismiss Alcon’s allegations that Tesla and Warner Bros. violated trademark law, according to Reuters. Specifically, the judge said Musk only referenced the original Blade Runner movie at the event, and noted that Tesla and Alcon are not competitors.
"Tesla and Musk are looking to sell cars," Reuters quoted Wu as saying. "Plaintiff is plainly not in that line of business."
Wu also dismissed most of Alcon's claims against Warner Bros., the distributor of the Blade Runner franchise.
However, the judge allowed Alcon to continue its copyright infringement claims against Tesla for its alleged use of AI-generated images mimicking scenes from Blade Runner 2049 without permission.
Alcon says that just hours before the Cybercab event, it had turned down a request from Tesla and WBD to use “an iconic still image” from the movie.
In the lawsuit, Alcon explained its decision by saying that “any prudent brand considering any Tesla partnership has to take Musk’s massively amplified, highly politicized, capricious and arbitrary behavior, which sometimes veers into hate speech, into account.”
Alcon further said it did not want Blade Runner 2049 “to be affiliated with Musk, Tesla, or any Musk company, for all of these reasons.”
But according to Alcon, Tesla went ahead with feeding images from Blade Runner 2049 into an AI image generator to yield a still image that appeared on screen for 10 seconds during the Cybercab event. With the image featured in the background, Musk directly referenced Blade Runner.
Alcon also said that Musk’s reference to Blade Runner 2049 was not a coincidence as the movie features a “strikingly designed, artificially intelligent, fully autonomous car.”
