
Facebook is training an army of malicious bots to research anti-spam methods

Despite Facebook’s many efforts, bad actors somehow always manage to slip through its safeguards and policies. The social network is now experimenting with a new way to shore up its anti-spam defenses and preempt bad behavior before it can breach them: an army of bots.

Facebook says it’s developing a new system of bots that can simulate bad behaviors and stress-test its platform to unearth flaws and loopholes. These automated bots are trained to act like real people, drawing on the trove of behavior models Facebook has built from its more than two billion users.


To ensure this experiment doesn’t interfere with real users, Facebook has also built a sort of parallel version of its social network. Here, the bots are let loose and allowed to run rampant — they can message each other, comment on dummy posts, send friend requests, visit pages, and more. More importantly, these A.I. bots are programmed to simulate extreme scenarios such as selling drugs and guns to test how Facebook’s algorithms would try to prevent them.

Facebook claims this new system can host “thousands or even millions of bots.” Since it runs on the same code users actually experience, it adds that “the bots’ actions are faithful to the effects that would be witnessed by real people using the platform.”
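To make the idea concrete, here is a minimal, hypothetical sketch in Python of what an agent-based simulation along these lines could look like: bot agents act against a sandboxed copy of a platform, a small fraction of them attempt banned behavior, and an enforcement hook tries to catch them. Every name here (SandboxPlatform, Bot, run_simulation) and the keyword check are illustrative assumptions, not Facebook’s actual code, which runs the platform’s real production code paths rather than a toy rule.

    # Hypothetical sketch of an agent-based "shadow platform" simulation.
    # Classes and methods are invented for illustration; they are not part
    # of Facebook's system. The point is only to show bots exercising
    # platform code while a policy check tries to flag the bad ones.
    import random
    from dataclasses import dataclass, field

    BANNED_KEYWORDS = {"sell drugs", "sell guns"}  # stand-in for a real classifier

    @dataclass
    class SandboxPlatform:
        """Isolated copy of the platform: real code paths, no real users."""
        flagged: list = field(default_factory=list)
        messages: list = field(default_factory=list)

        def send_message(self, sender, text):
            # The same enforcement hook production would run.
            if any(kw in text.lower() for kw in BANNED_KEYWORDS):
                self.flagged.append((sender, text))
            else:
                self.messages.append((sender, text))

    @dataclass
    class Bot:
        name: str
        malicious: bool = False

        def act(self, platform):
            if self.malicious and random.random() < 0.5:
                platform.send_message(self.name, "want to sell drugs cheap")
            else:
                platform.send_message(self.name, "hey, how are you?")

    def run_simulation(num_bots=1000, num_steps=10, bad_fraction=0.05):
        platform = SandboxPlatform()
        bots = [Bot(f"bot{i}", malicious=random.random() < bad_fraction)
                for i in range(num_bots)]
        for _ in range(num_steps):
            for bot in bots:
                bot.act(platform)
        caught = len(platform.flagged)
        total = caught + len(platform.messages)
        print(f"{caught} policy-violating messages caught out of {total} total")

    if __name__ == "__main__":
        run_simulation()

In a real deployment the interesting failures would be the violating messages that slip past the check; in this toy version they never do, but the structure shows how scaling the bot population and behavior mix lets researchers probe where enforcement breaks down.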

“While the project is in a research-only stage at the moment, the hope is that one day it will help us improve our services and spot potential reliability or integrity issues before they affect real people using the platform,” wrote the project’s lead, Mark Harman, in a blog post.

It’s unclear at the moment how effective Facebook’s new simulation environment will be. As Harman noted, it’s still in the early stages, and the company hasn’t yet used any of its findings in public-facing updates. Over the last few years, the social network has actively invested in and supported artificial intelligence research to develop new tools for fighting harassment and spam. At its annual developer conference two years ago, Mark Zuckerberg announced that the company was building artificial intelligence tools to tackle posts featuring terrorist content, hate speech, spam, and more.
