Facebook, Google, and Twitter are facing a lawsuit from the father of a woman killed in the Paris terror attacks that rocked France last November.
Reynaldo Gonzalez’s suit claims the companies “knowingly permitted” the Islamic State (IS) group to use their tools to attract members, raise funds, and disseminate “extremist propaganda,” the Associated Press (AP) reported.
Soon after the Paris atrocity, which took the lives of 130 people – including Gonzalez’s daughter Nohemi – French officials met with executives from the three companies, as well as Apple and Microsoft, to discuss ways to combat terrorist propaganda online. More recently, leading tech CEOs in the U.S. met with White House officials to debate similar issues.
The meetings were called amid increasing concern over terrorist groups’ use of social media services to spread their ideology and radicalize users in a bid to attract new members.
But for Gonzalez, any moves to tackle the issue have come too late. Filed on Tuesday in the U.S. District Court in the Northern District of California, his suit accuses the three social media giants of breaking the law by providing “material support” to the terrorists.
As noted by the AP, U.S. law appears to state that internet firms cannot be held accountable for content that users post on their various services. However, in this case, the suit is focusing not on posted content but on the behavior the firms allegedly enabled.
In an email to the AP, lawyer Ari Kresch, who is on the Gonzalez legal team, said the suit was about Facebook, Google, and Twitter “allowing IS to use their social media networks for recruitment and operations” and not about the content of the posted messages.
Defending its current methods for dealing with such matters, Facebook said that it immediately contacts law enforcement whenever it comes across information pointing to “a threat of imminent harm or a terror attack.”
As for Google, the Mountain View company insisted it has “clear policies prohibiting terrorist recruitment and content intending to incite violence and quickly remove videos violating these policies when flagged by our users.”
Twitter, meanwhile, said it has teams in place monitoring the service for violating conduct, and also investigates reports of rule violations, adding that it contacts law enforcement “when appropriate.” In 2015, Twitter’s efforts to eradicate terror groups from its service resulted in threats targeting Twitter co-founder and current CEO Jack Dorsey, as well as regular employees at the company.
An indication of the size of the task facing social media companies in dealing with terror-related content came earlier this year when Twitter said that in 2015 it shuttered more than 125,000 accounts “for threatening or promoting terrorist acts, primarily related to IS.”