Coronavirus misinformation on Twitter will now have a warning label attached, the social media company announced Monday.
Tweets that contain “potentially harmful, misleading information related to COVID” will now be flagged with either a label directing users to more reliable information, or a warning that covers the original tweet entirely.
One label will encourage users to “get the facts about COVID-19.” Clicking on the label will lead users to a page curated by Twitter with additional information, or an “external trusted source.” Twitter says these labels will be applied to misleading information or disputed claims that could lead to a “moderate” risk of harm.
Disputed tweets that pose a “severe” risk of harm may be flagged with a warning label stating the post conflicts with guidance from public health experts, Twitter said. The warning hides the tweet itself behind a message that says “Some or all of the content shared in the Tweet conflicts with guidance from public health experts regarding COVID-19.”
Users can still see the original tweet but need to click past the warning to do so. Embedded tweets may not display the label or warning, and users who are not signed in may also not see those labels.
Misleading tweets (those that intentionally mislead, as opposed to tweets containing merely disputed information) will be removed if Twitter classifies the possibility of harm as severe.
Twitter says it is working with its existing fact-check partners to determine which tweets are misleading, disputed, or unverified. The company says that tweets that could lead to increased exposure or transmission will be reviewed first.
Twitter doesn’t expect the new labels to be a one-and-done tool as the long-standing fight against fake news meets the current global crisis. “We’ll learn a lot as we use these new labels, and are open to adjusting as we explore labeling different types of misleading information,” Twitter said in a blog post. “This process is ongoing and we’ll work to make sure these and other labels and warnings show up across Twitter.”
The coronavirus pandemic has brought an onslaught of fake news and conspiracy theories, and social media platforms have been scrambling to fight unsafe posts. The World Health Organization, which has called the misinformation an “infodemic,” met with several major tech companies, including Facebook, Twitter, and YouTube, earlier this year. Since then, Facebook has started notifying users who viewed posts later proven false and removed the viral “Plandemic” video, YouTube has launched fact-checking panels, and Twitter has removed tweets connecting the virus to 5G.
The new Twitter labels are rolling out today and will also be applied to existing tweets.