The Home Affairs select committee alleged that, rather than curbing the recruitment activity of terrorists on their respective platforms, the tech giants are "passing the buck."
The panel attributed this deliberate inaction to the social media companies' belief that taking action would "damage their brands." The MPs warn that, as a result of these policies, the sites are becoming "the Wild West" of the internet.
“Huge corporations like Google, Facebook and Twitter, with their billion-dollar incomes, are consciously failing to tackle this threat,” said Labour MP Keith Vaz, head of the committee, “and passing the buck by hiding behind their supranational legal status, despite knowing that their sites are being used by the instigators of terror.”
The report follows a wave of violent attacks in Europe. The MPs also cited the case of notorious hate preacher Anjem Choudary, who was convicted of supporting Islamic State last week. During his trial, it was revealed that U.K. authorities had contacted social media sites on numerous occasions to remove content linked to Choudary, but not all of the requests were acted upon.
The report urges the web giants to boost cooperation with law enforcement agencies and respond to account removal requests immediately — a request that echoes the recent allegations against Facebook by German officials over data requests. The committee also appeals to the companies to publish quarterly statistics showing how many sites they have banned. Additionally, it advocates that a specialist police unit set up to monitor online terrorist activity be extended to work around the clock.
“These companies have teams of only a few hundred employees to monitor networks of billions of accounts and Twitter does not even proactively report extremist content to law enforcement agencies,” states the committee.
Representatives of the three tech firms in question met with the committee earlier this year. Despite those talks, the damning report still concluded that social media had become “the vehicle of choice in spreading propaganda and the recruiting platforms for terrorism.”
For their part, the internet giants claim that they have policies in place that allow authorities (and users) to flag and report extremist content. “We take our role in combating the spread of extremist material very seriously,” YouTube said in a statement.
Earlier this year, Facebook told Digital Trends that “terrorists and acts of terrorism have no place on Facebook.” A spokesperson for the company added: “Whenever terrorist content is reported we remove it as quickly as possible. We treat take-down requests by law enforcement with the highest urgency.”
Most recently, Twitter claimed it had suspended 365,000 accounts linked to promoting terrorism since the start of last year. Yet the online extremism issue continues to plague social media platforms, resulting in lawsuits and repeated accusations.
The Home Affairs committee may not be able to sway the U.K. government into taking action, but its scathing report has dragged the topic back into the political arena, where it could pick up renewed interest.