In its latest attempt to improve the health of its platform, Twitter says it will be modifying the way conversations happen based on the behavior and conduct of users versus the actual content of their tweets.
The site will now use thousands of behavior signals when filtering search, replies and algorithmic recommendations, pushing tweets from offending users farther down the timeline, reports BuzzFeed.
Behavior signals that could result in content getting demoted include: users who tweet at a large number of accounts they don’t follow; how often a user has been blocked by people they interact with; whether a user has created many accounts from a single IP address; and whether an account is closely related to accounts that have violated Twitter’s terms of service.
According to BuzzFeed, the news was announced during a briefing at Twitter’s San Francisco headquarters earlier this week. CEO Jack Dorsey said many of the company’s past actions to monitor abuse have been content-based, but the company has been shifting more and more toward conduct and behaviors on the system.
BuzzFeed reports the changes will roll out this week and that the new behavior filters will be optional but turned on by default. There will be a “Show Everything” toggle in search where users can turn the filter on and off.
Twitter shared early test results around its new way of modifying conversations, claiming that the changes have led to an 8 percent drop in abuse reports on conversations and a 4 percent drop in abuse reports in search. Twitter said fewer than 1 percent of total accounts are responsible for the abuse reports it receives. During the briefing, Twitter Trust and Safety VP Del Harvey said that identifying these abusers — and decreasing their reach — could deliver big results.
Dorsey said the latest modifications to conversations represent the biggest-impact change to the platform so far, and that it’s a first step toward something he sees “… going quite far.”
We’ve reached out to Twitter for comment, but have not received a response.
The initiative is part of Twitter’s plan to improve the overall health of the platform. In February, the site released new policy restrictions to limit spam and bot activity, no longer permitting simultaneous posts with identical content across multiple accounts. During a livestream Q&A with users in March, Dorsey said the main question his team is trying to answer right now is, “How can we measure the health of the platform in a way that is public and accountable?”
Postscript: Twitter has published a blog post about the changes, outlining the steps it is taking to address what it defined as “troll” behavior. In addition to what BuzzFeed reported, Twitter said it was using policies, human review processes, and machine learning to manage how tweets are organized in conversation and search. It also said one of the behavioral signals it would be paying attention to is accounts that do not have confirmed email addresses.