Justice Department proposes rolling back protections for social media platforms

The U.S. Department of Justice (DOJ) has proposed rolling back the legal protections that shield social media platforms and other tech companies from liability for user posts, a move that could make them legally responsible for what people post on their sites.

According to the policy document released Wednesday, the changes are meant to push social media platforms like Facebook and Twitter to address content on their sites more directly, spelling out what is acceptable and what should be taken down.

Conservatives have long alleged that major tech companies are biased against their voices; tech giants like Facebook and Google have denied these claims.

The DOJ proposal specifically targets Section 230 of the Communications Decency Act of 1996, whose protections prevent tech companies from being held civilly liable for content their users post.

“This expansive statutory interpretation, combined with technological developments, has reduced the incentives of online platforms to address illicit activity on their services and, at the same time, left them free to moderate lawful content without transparency or accountability,” the proposal reads. “The time has, therefore, come to realign the scope of Section 230 with the realities of the modern internet so that it continues to foster innovation and free speech but also provides stronger incentives for online platforms to address illicit material on their services.”

The proposal would scale back some of the protections these platforms have, including making them more responsible for third-party content and requiring them to be fair and consistent about what content they take down. Platforms would have to provide reasonable explanations for their decisions.

The DOJ has also proposed striking the catch-all that lets companies remove "otherwise objectionable" content from their sites. Instead, tech companies would only be able to remove content that is "obscene, lewd, lascivious, filthy, excessively violent, harassing," that violates federal law, or that promotes violence or terrorism.

The plan would strictly define Section 230’s “good faith” requirement, saying that platforms must have clear terms of use and must abide by those terms of use, and that any content that is removed must fit within the more stringent definition of what can be moderated. It also says that platforms must provide notice to the user explaining why their content was moderated.


The DOJ’s legislative plan still has to go through Congress before it can be adopted. 

Earlier today, a group of Republican senators introduced their own limits on Section 230 via the Limiting Section 230 Immunity to Good Samaritans Act. The proposed bill would allow users who believe a platform is not "operating in good faith," because it applies its content rules inconsistently or unfairly, to sue the company for $5,000 plus attorneys' fees.

Both of these proposed limitations and changes come on the heels of President Donald Trump signing an executive order last month aimed at curtailing the protections of Section 230 of the Communications Decency Act of 1996.

Section 230 says: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Trump's executive order came after Twitter attached a fact-check label to a Trump tweet claiming that mail-in voting would promote voter fraud, and was seen by some critics as retaliation against tech companies that had moderated his comments.

Twitter told Digital Trends it has nothing to share regarding the DOJ's proposal. Digital Trends reached out to Facebook, Instagram, and YouTube for comment as well. We'll update this story when we hear back.
