In recent years, Twitter has maintained an ongoing campaign to curb the spread of bullying across its platform; its first major move, years ago, was setting up a dedicated anti-harassment team to police abuse across the popular social network. Dubbed the Twitter Trust and Safety Council, the body was formed after then-CEO Dick Costolo admitted that “we (Twitter) suck at dealing with abuse and trolls”.
Now, the social network is ready to proceed with an even more ambitious experiment. Going forward, a limited number of iOS users will see interruptive warnings when the app detects that they’re about to fire off an offensive reply to an existing tweet. While Twitter hasn’t spelled out what constitutes an ‘offensive’ tweet in the first place, it is likely experimenting with profanity filters among other signals marked as potentially abusive.
As the company explained: “When things get heated, you may say things you don’t mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.”
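To make the mechanism concrete, here is a minimal sketch of the kind of pre-publish check such a prompt could rely on. The word list, function name, and matching logic are purely illustrative assumptions; Twitter has not disclosed how its detection actually works.

```python
# Hypothetical sketch of a pre-publish "harmful language" check.
# The word list and matching rules below are illustrative stand-ins,
# not Twitter's actual implementation.

OFFENSIVE_TERMS = {"idiot", "stupid", "moron"}  # stand-in word list

def should_prompt_revision(reply_text: str) -> bool:
    """Return True if the reply contains language flagged as potentially harmful."""
    # Normalize: lowercase each word and strip common trailing punctuation.
    words = {word.strip(".,!?").lower() for word in reply_text.split()}
    return not words.isdisjoint(OFFENSIVE_TERMS)

# Usage: the app would show the revision prompt only when this returns True.
print(should_prompt_revision("You absolute idiot!"))   # True  -> show warning prompt
print(should_prompt_revision("Great point, thanks!"))  # False -> publish normally
```

A real system would almost certainly go beyond a static word list, for example by using a trained classifier, but the flow is the same: score the reply before it is published and interrupt only when the score crosses a threshold.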
Notably, Twitter has promised that it won’t remove a tweet simply because it is offensive. However, the company noted that “people are allowed to post content, including potentially inflammatory content, as long as they’re not violating the Twitter Rules”.
Twitter hasn’t yet confirmed which regions the test will be limited to (if any), or if it has plans to expand the feature going forward.
While the concept is new to Twitter as a microblogging and messaging platform, the approach itself isn’t novel. Instagram, owned by Facebook, has deployed its own warning system, which alerts a user if their intended reply is “similar to others that have been reported.”
What are your thoughts? Let us know in the comments below.