Now, Twitter warns against replying with mean, harmful tweets

May 06, 2020
02:51 pm

What's the story

If you often lose your temper during Twitter arguments and end up saying mean things to others, be prepared to be nudged by the microblogging giant. The Jack Dorsey-led service has announced that it is testing a new feature that prompts users to reconsider offensive replies before they share them publicly. Here's all you need to know about it.

Feature

Prompt giving option to revise your reply

As Twitter conversations can get ugly, the service has started a 'limited experiment', where users who are about to tweet slurs, swear words, or demeaning/harmful/inflammatory language might be prompted to hold back and rethink what they are doing. Basically, the company says, it gives "you the option to revise your reply before it's published if it uses language that could be harmful."

Clarification

No specific clarification on 'harmful' language

While the feature looks like a smart way of curbing hate on the microblogging network, it is not exactly clear what Twitter will consider 'harmful' enough to trigger the prompt. Slurs and abusive terms would be an obvious inclusion, but the criteria might also draw on the company's hate speech policies and broader rules, which prohibit violent threats, dehumanizing speech, and more.

Caveat

Also, the prompt will not change your reply

It must also be noted that the prompt only encourages the user - who is about to reply with something harmful - to rethink their words; it works as a light nudge. If the person is determined to share the reply anyway, they can simply ignore the warning and proceed. Twitter will not block the tweet at that stage.

Action

However, it can take action after the tweet is shared

Notably, Twitter already has the power to ban, suspend, or remove a user whose post crosses the line and violates its rules and hate speech policies. However, if a person shares something that is merely offensive but does not violate those policies, the company cannot take down the post or the account behind it.

Information

Trial with iOS users

As of now, Twitter is running the limited experiment with select iOS users to gauge their feedback. If the public response is positive and the feature proves effective at curbing hateful replies, the company might roll it out to more users.