By Fahad H

Study Shows How Socialbots Can Infiltrate Twitter, Gain Followers & Influence


More and more, we are counting on social media to be a reflection of reality. In many ways it has become the world’s focus group, a massive crowd-sourced data mine.

News organizations use it to take the pulse of public opinion; Nielsen sells a social TV ratings report; Major League Baseball has used Twitter to help pick its all-star teams.

And reality television shows vote contestants off proverbial islands based on ballots cast via Facebook and Twitter, for Simon Cowell’s sake.

But what if we can’t trust the results? What if someone systematically tries to game the system?

That was the motivation behind fascinating recent research looking into how difficult it is to create automated social accounts — socialbots — that successfully infiltrate a social network.

And it turns out it’s not that hard, at least for the short term. The researchers, from a Brazilian university, targeted Twitter, creating 120 socialbots and turning them loose on the Twitterverse for a month.

Only 31% Suspended By Twitter

By the end of the experiment, 38 of the 120 socialbots had been caught and suspended by Twitter’s defenses against spammers and other bad actors, meaning 69% escaped detection. The researchers also found that the large majority of the suspensions came at the end of the account-creation process, and theorized that Twitter’s defense mechanisms were triggered by multiple accounts being created from the same block of IP addresses.
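Twitter hasn’t published its detection logic, but as a rough sketch of the kind of heuristic the researchers suspect was at work, something like the following minimal Python example would do; the /24 grouping and the threshold are assumptions for illustration only:

    from collections import Counter

    def flag_suspicious_blocks(signup_ips, threshold=5):
        # Count signups per /24 prefix and flag blocks that
        # create more than `threshold` accounts.
        blocks = Counter(ip.rsplit(".", 1)[0] for ip in signup_ips)
        return {block for block, n in blocks.items() if n > threshold}

    # Ten signups from 203.0.113.x get flagged; the lone outlier doesn't.
    signups = ["203.0.113.%d" % i for i in range(10)] + ["198.51.100.7"]
    print(flag_suspicious_blocks(signups))  # {'203.0.113'}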

The surviving fake accounts, however, were able to fool a significant number of Twitter users, receiving 4,999 follows from 1,952 different accounts. They also generated 2,218 message-based interactions with 1,187 users.

More than 20% of the bots gained over 100 followers, which, according to a separate study, is more followers than 60% of all Twitter users have.

The robots also managed to start gaining clout on Klout, with 20% reaching a Klout score above 35 (on the social influence rater’s 0-to-100 scale). The bot with the highest Klout rating hit 42, which the authors note is higher than the scores of several well-known “academicians and social network researchers.”

Those academics likely aren’t very active on Twitter, but even so it’s worrisome that fake accounts were able to quickly acquire the trappings of social legitimacy.

How Did These Bots Do It?

The researchers’ socialbots were quite sophisticated. They were built with an open-source project called Realboy and set up to interact — by tweeting, retweeting or following — at random intervals ranging from one minute to two hours. Sixty were given female profile characteristics and 60 male. To simulate the expected downtime of human users, the bots were inactive between 10 p.m. and 9 a.m. Pacific time.
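As a rough illustration of that scheduling behavior (not the actual Realboy-based code, which the paper doesn’t reproduce), a minimal Python sketch might look like this; the act callback and the function names are invented for the example:

    import random
    import time
    from datetime import datetime
    from zoneinfo import ZoneInfo

    PACIFIC = ZoneInfo("America/Los_Angeles")

    def awake(now):
        # The study's bots slept between 10 p.m. and 9 a.m. Pacific.
        return 9 <= now.hour < 22

    def run_bot(act):
        # `act` is a callback that performs one action per wake-up.
        while True:
            time.sleep(random.randint(60, 2 * 60 * 60))  # 1 min to 2 hrs
            if awake(datetime.now(PACIFIC)):
                act(random.choice(["tweet", "retweet", "follow"]))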

Tweets were generated either by copying and reposting messages from other Twitter users, drawn randomly from the Twitter stream, or by using a language algorithm called a Markov generator, which produces results somewhere between semi-coherent and gibberish. Some sample tweets the Markov generator created for the bots: “I don’t have an error in it :)”, “What isn’t go in the morning! night y’all” and “end aids now, the marilyn chambers memorial film festival I’d fix health care continues to outpace much of nation’s issues move to the”.
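For readers curious how such a generator works, here is a minimal word-level Markov chain in Python. It is a sketch in the spirit of the bots’ generator, not the researchers’ implementation, and the training corpus shown is invented:

    import random
    from collections import defaultdict

    def train(corpus):
        # Map each word to the list of words observed to follow it.
        model = defaultdict(list)
        for text in corpus:
            words = text.split()
            for a, b in zip(words, words[1:]):
                model[a].append(b)
        return model

    def generate(model, max_words=15):
        # Start anywhere and follow the chain until it dead-ends,
        # which is why output ranges from semi-coherent to gibberish.
        word = random.choice(list(model))
        out = [word]
        while len(out) < max_words and model.get(word):
            word = random.choice(model[word])
            out.append(word)
        return " ".join(out)

    sample = ["i don't have an error", "have to go in the morning",
              "go in peace y'all"]
    print(generate(train(sample)))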

So what worked? Not surprisingly, the most active bots drew the most followers, interaction and Klout score gains, although the authors noted that too much activity risks discovery by Twitter’s spam-fighting defenses. More surprising to the researchers: bots using the Markov generator, rather than only reposting random tweets, showed better engagement, indicating that people have trouble distinguishing between accounts posting human-written tweets and accounts posting tweets created by a statistical model.

“This is possibly because a large fraction of tweets in Twitter are written in an informal, grammatically incoherent style, so that even simple statistical models can produce tweets with quality similar to those posted by humans in Twitter,” the authors wrote.

Another factor in socialbot success was target audience: the study found that bots that went after a group of users who tweeted about a common interest (in this case, software development) gained more followers, generated more interaction and earned higher Klout scores than bots targeting a random set of Twitter users.

However, the bots made the fewest inroads with a group of socially connected users who also posted about software development. That group — identified through social connections to jQuery creator John Resig (@jeresig, 150,000 followers) — was even more resistant than the random set, an indication that even robots have a hard time being accepted by the cool kids. The charts below — Group 1 is the random users; Group 2, the topic tweeters; Group 3, the socially connected — illustrate the results:

[Charts: followers gained, interactions and Klout scores for bots targeting each of the three groups]
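For illustration, assuming the follow graph around a seed account has already been crawled into an adjacency list (the real study worked against Twitter’s API, which is omitted here), a socially connected target group could be assembled with a simple breadth-first search; all names below are hypothetical:

    from collections import deque

    def connected_group(follow_graph, seed, max_hops=2):
        # Collect every account within `max_hops` follows of the seed.
        seen = {seed}
        queue = deque([(seed, 0)])
        while queue:
            user, hops = queue.popleft()
            if hops == max_hops:
                continue
            for other in follow_graph.get(user, []):
                if other not in seen:
                    seen.add(other)
                    queue.append((other, hops + 1))
        seen.discard(seed)
        return seen

    graph = {"jeresig": ["alice", "bob"], "alice": ["carol"]}
    print(connected_group(graph, "jeresig"))  # {'alice', 'bob', 'carol'}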

So What Does It All Mean?

Certainly Twitter’s spam fighters are already aware of the issue.

Last October, Twitter estimated in its pre-IPO SEC filing that fewer than 5% of its active accounts were false or spam accounts, while noting that the number could be higher. If that estimate still holds, it works out to about 13 million of Twitter’s 255 million active users. Perhaps this study will give the company more insight into how to fight the battle.

And other researchers are joining the fight, too. Last month, Indiana University at Bloomington announced the creation of BotOrNot, a tool that estimates whether a Twitter account is run by a human or a bot. The tool was developed as part of a $2 million study (funded by the U.S. Defense Department) of tech-based misinformation and deception campaigns, so clearly this is more than just an academic exercise.
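BotOrNot’s internals aren’t spelled out in the announcement, but the general approach, extracting account features and training a classifier to score bot-likeness, can be sketched in a few lines of Python; the features, training data and example account below are invented for illustration:

    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical features per account: [tweets per day,
    # followers/friends ratio, fraction of posts that are retweets].
    X = [[150.0, 0.01, 0.95],   # labeled bot
         [120.0, 0.05, 0.90],   # labeled bot
         [3.0,   1.20, 0.10],   # labeled human
         [5.0,   0.80, 0.20]]   # labeled human
    y = [1, 1, 0, 0]            # 1 = bot, 0 = human

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    # Probability that an unseen account is a bot:
    print(clf.predict_proba([[90.0, 0.02, 0.85]])[0][1])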

“Part of the motivation of our research is that we don’t really know how bad the problem is in quantitative terms,” IUB informatics and computer science professor Fil Menczer said in a release. “Are there thousands of social bots? Millions? We know there are lots of bots out there, and many are totally benign. But we also found examples of nasty bots used to mislead, exploit and manipulate discourse with rumors, spam, malware, misinformation, political astroturf and slander.”
