Facebook Has Created a Shadowy Bot Universe to Help Uncover Trolls and Scammers

Facebook is developing a bot universe to simulate the interactions of real users, potentially giving the platform a proactive method of combating trolls, scammers and rule-breakers.

This shadow version of the popular social networking platform, not visible on the surface level, can be used to investigate "social properties" of the company's suite of applications by having bots interact with one another and intentionally attempt to break systems or violate guidelines, researchers said this week.

In a new paper, the team outlined its Web-Enabled Simulation (WES), which is known internally as "WW" and is being built to take advantage of real website infrastructure.

Researchers said the bots are trained to behave like bad actors and set loose on hundreds of millions of lines of code; the results are then analyzed to highlight security gaps.

"Facebook's WW simulation is a WES that uses bots that try to break the community standards in a safe isolated environment in order to test and harden the infrastructure that prevents real bad actors from contravening community standards," the academic team wrote in its paper.

The researchers said the WW will be "safely isolated" and used to probe potential privacy issues on the main platforms. "Because the WW bots are isolated from affecting real users, they can be trained to perform potentially privacy-violating actions on each other," the paper noted.

"Facebook's new WES is a computer simulation that models the behavior of communities of users on its platform," Hari Jackson, the chief technology officer (CTO) at techspert.io, who has a PhD in Computing from the University of Cambridge, told Newsweek today via email.

"It consists of a series of autonomous bots, performing actions in the Facebook system in the same way as human users. These bots 'see' the same platform that real users see, and are able to read (but not change) real-world data on the platform. However, they are not able to affect real users in any way; their actions are completely isolated from the real world," the technology expert added.

In examples, Facebook researchers said the WW can be used to simulate the actions of people attempting to share illegal content on the platform, giving developers more insight into what to look for. The team said bots can also be deployed to search for clusters of users sharing policy-violating content.

The bots, they added, could be trained with the intention of breaking the rules, for example by trying to access another bot's "private photos," surfacing any privacy bugs or areas of weakness. A rough sketch of that kind of adversarial probe appears below.
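To make the idea concrete, here is a hedged sketch of what such a probe might look like: bots systematically attempt to view each other's private photos, and any success is flagged as a potential access-control bug. The function names and the access-check callable are assumptions for illustration, not anything from the paper.

```python
import itertools

def probe_private_photos(bots, can_view):
    """Have every bot try to open every other bot's private photos;
    any successful access is reported as a potential privacy bug."""
    bugs = []
    for attacker, victim in itertools.permutations(bots, 2):
        resource = f"{victim}/private_photos"
        # can_view stands in for the platform permission logic under test.
        if can_view(viewer=attacker, resource=resource):
            bugs.append((attacker, resource))
    return bugs

# A deliberately broken access check for demonstration: it never
# inspects the viewer, so every probe "succeeds" and gets flagged.
broken_acl = lambda viewer, resource: True
print(probe_private_photos(["bot_a", "bot_b"], broken_acl))
# -> [('bot_a', 'bot_b/private_photos'), ('bot_b', 'bot_a/private_photos')]
```

In a real deployment the access check would be the platform's own permission code running on production infrastructure, which is exactly what the researchers say makes WES-style testing valuable.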

Facebook has been contacted for comment on the project.

As reported by The Verge, users are very unlikely to encounter one of these bots while browsing profiles of friends or family, as Facebook is taking steps to isolate them from real users. Yet the paper suggests that in some cases "read only bots" may indeed interact with some user information.

"Bots must be suitably isolated from real users to ensure that the simulation, although executed on real platform code, does not lead to unexpected interactions between bots and real users," it said.

"Despite this isolation, in some applications bots will need to exhibit high end-user realism, which poses challenges for the machine learning approaches used to train them. In other applications where read only bots are applicable, isolation need not necessarily prevent the bots from reading user information and reacting to it, but these read only bots cannot take actions," the paper added.

"The point of all of this is to test the Facebook platform in various ways, in an autonomous manner and at a large scale," Jackson told Newsweek. "By simulating user behavior, Facebook hopes to automatically spot policy-violating content and the users who have generated it.

He added: "The simulation is also designed to model the behavior of bad actors on the platform, and to spot, and fix, vulnerabilities in the system that allow these bad actors to flourish."

Facebook logo. This "shadow" version of the popular social network, not visible on the surface level, will be used to investigate "social properties" of the company's suite of applications. Marc Piasecki/Getty


About the writer

Jason Murdock is a staff reporter for Newsweek. Based in London, Murdock previously covered cybersecurity for the International Business Times UK.
