There’s no doubt that at one point Twitter had a spam problem. From hashtag jacking to auto-following, spammers had their way with Twitter until the spam-killing system BotMaker was born. Upon the launch of Twitter’s internal tool, overall spam metrics dropped 40% from the pre-spam-terminator days.
Late yesterday, Twitter released a look into its spam-crushing system, covering the architecture, goals, challenges, rules and lessons learned from the experiment. The results have been fascinating. BotMaker has three main goals:
Prevent spam content from ever being created
Reduce the amount of time spam appears on Twitter
Reduce the reaction time to new spam attacks
To achieve these goals, BotMaker runs on a set of rules and takes action according to the results. Each rule has two main parts: a condition that decides whether an action should be taken, and the action to take if so. If a URL has been flagged as spammy, for example, a rule can deny that URL from being displayed.
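To make the condition/action split concrete, here’s a minimal sketch of how such a rule could be modeled. This is not Twitter’s actual BotMaker code or rule language; the names (SpamRule, is_spammy_url, deny, SPAMMY_URLS) are purely illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Event:
    """A piece of content being evaluated, e.g. a new tweet."""
    user_id: str
    text: str
    urls: List[str]


@dataclass
class SpamRule:
    """A BotMaker-style rule: a condition paired with an action."""
    condition: Callable[[Event], bool]   # should an action be taken?
    action: Callable[[Event], None]      # what to do when it should

# Hypothetical denylist of known-spammy URLs.
SPAMMY_URLS = {"http://spam.example.com"}


def is_spammy_url(event: Event) -> bool:
    return any(url in SPAMMY_URLS for url in event.urls)


def deny(event: Event) -> None:
    print(f"Denying content from {event.user_id}: contains a known-spammy URL")


deny_spammy_urls = SpamRule(condition=is_spammy_url, action=deny)


def evaluate(rule: SpamRule, event: Event) -> None:
    if rule.condition(event):
        rule.action(event)


# Example: a tweet containing a denylisted URL gets denied.
evaluate(deny_spammy_urls,
         Event(user_id="u123", text="check this out", urls=["http://spam.example.com"]))
```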
BotMaker also runs in a handful of stages: real-time (codename: Scarecrow), near real-time (codename: Sniper) and periodic jobs that aren’t time-critical. Each stage monitors data at a different latency and applies actions based on its own set of rules.
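One way to think about those stages is as latency budgets that rules get routed into. The sketch below is an assumption for illustration only; the stage names follow the post, but the routing logic and the millisecond thresholds are invented.

```python
from enum import Enum


class Stage(Enum):
    SCARECROW = "real-time"        # runs in the write path, before content is created
    SNIPER = "near real-time"      # runs asynchronously, shortly after creation
    PERIODIC = "periodic"          # batch jobs with no tight timing requirement


def route(rule_latency_ms: float) -> Stage:
    # Cheap rules can run synchronously without slowing tweet creation;
    # expensive ones move to asynchronous or periodic stages.
    # Thresholds here are hypothetical.
    if rule_latency_ms <= 10:
        return Stage.SCARECROW
    if rule_latency_ms <= 1000:
        return Stage.SNIPER
    return Stage.PERIODIC
```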
An example of how a spam-detection process within BotMaker would work: if a user’s posts that mention other people attract a high number of blocks from those users, the poster is recorded as a spammer. So, if the number of those blocked mentions is more than 1, the spammer’s ID is recorded and actions can be taken.
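A rough sketch of that mention-block rule is below. The counters and the record_spammer action are illustrative names under my own assumptions, not Twitter’s implementation; the threshold of more than 1 block comes from the example above.

```python
from collections import defaultdict

BLOCK_THRESHOLD = 1                      # "more than 1" block, per the example above
blocks_on_mentions = defaultdict(int)    # user_id -> blocks received on their mentions
known_spammers = set()


def record_spammer(user_id: str) -> None:
    # Downstream rules can now take action on this user's content.
    known_spammers.add(user_id)


def on_block(mentioning_user_id: str) -> None:
    """Called when someone blocks a user who mentioned them."""
    blocks_on_mentions[mentioning_user_id] += 1
    if blocks_on_mentions[mentioning_user_id] > BLOCK_THRESHOLD:
        record_spammer(mentioning_user_id)
```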
Since launching the system, Twitter has seen a 55% drop in spam content being written and a 40% reduction in overall spam metrics (pictured above, with time on the x-axis and spam volume on the y-axis).
For the full deep dive on the spam-killing system, head over to the official Twitter post.