
16 Jun 2017

New York Times to allow comments, moderated via Google-backed AI

New York Times gives back "comments" to readers

The Internet, in many cases, is more incessant babbling than coherent speech, which is why several news outlets disable comments on their articles, fearing the section will attract more trolls than proper debate.

However, thanks to Google's AI technology, The New York Times is opening up the channel for comments and discussion again.

Here's all about it.



"Trolls" have no place in serious journalism

Take a quick look at the comments section of almost any online news article, and chances are you will find get-rich-quick spam and a plethora of disparaging "troll" comments that have nothing to do with the article at hand.

To stop this tomfoolery, news outlets have shut down comments entirely, locking out even genuine readers.


How will the new technology help?

The New York Times has recently reopened its comments section, now equipped with a moderation mechanism that uses machine learning developed by Alphabet's subsidiary Jigsaw to keep online trolls in check.

Until now, the NYT had allowed comments on only 10% of its articles, since the roughly 12,000 comments it gathered every day had to be reviewed by just 14 human moderators.



More online discussion, sharing of ideas

The New York Times will now allow a free flow of comments on 25% of its articles, which moderators will screen at far greater speed using the new artificial intelligence tools made available to them.

The NYT said that, in the near future, it plans to allow comments on almost 80% of its articles, while still ensuring healthy online discussion.


We work for the people, not for the trolls

Jigsaw's Perspective tool is trained on examples of toxic comments; once it learns to detect them, it filters them out, making way for proper online discussion among readers who are truly invested in the stories.
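Perspective exposes its scoring through Google's public Comment Analyzer API. As a rough illustration of how a moderation pipeline might use it, here is a minimal Python sketch: the endpoint and request shape follow the public API documentation, but the threshold value, helper names, and sample scores are illustrative assumptions, not the NYT's actual implementation.

```python
# Hypothetical sketch of a comment-moderation step built on Jigsaw's
# Perspective (Comment Analyzer) API. The URL and payload shape match
# the public API; the rest is an illustrative assumption.

PERSPECTIVE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
)

def build_request(comment_text):
    """Build the JSON payload asking Perspective to score TOXICITY."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def filter_comments(comments, scores, threshold=0.8):
    """Keep comments whose toxicity score stays below the threshold.

    In a real pipeline, `scores` would come from the API response at
    attributeScores.TOXICITY.summaryScore.value (a 0-1 probability).
    """
    return [c for c, s in zip(comments, scores) if s < threshold]

if __name__ == "__main__":
    comments = ["Great reporting!", "You are an idiot."]
    scores = [0.05, 0.95]  # example scores a moderator tool might receive
    print(filter_comments(comments, scores))
```

A human moderator would still review borderline scores; the tool's value is in clearing the obvious cases quickly, which is what lets the paper scale from 10% to 25% of articles.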

This restores proper discourse and the sharing of ideas, something journalists strive for through their articles every day.
