
Facebook bot was never planning world domination. Calm down!

02 Aug 2017 | By Anish Chakraborty
Facebook bot was just a failed experiment

There has been some debate recently over news that an experiment was shut down at Facebook's artificial intelligence lab because one bot started conversing with another in an indecipherable language.

While major publications heralded it as the start of an AI-led apocalypse, the actual incident was far from what was written about it.

Here's what genuinely happened.


Let's haggle over it

In June, the social media giant had published a blog post about an interesting piece of research it had conducted.

The researchers were trying to see if they could get bots to negotiate over items.

They were observing how the bots would handle a conversation so that both parties got what they wanted and were satisfied with the transaction.


Making things easy to communicate

The bots did exactly what they were asked to do. They haggled over fake objects. In the process, they ended up creating a modified way of communicating with each other.

Since the bots don't speak English, and objects to them are just strings of numbers, they substituted shorthand symbols for longer words, much like we use "x" for an unknown when solving math problems.
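As a purely hypothetical illustration of that kind of shorthand (the mapping below is invented for this example, not taken from Facebook's system), think of it as swapping long phrases for short codes that both sides still understand:

```python
# Invented codebook: each long phrase gets a short symbol. Two agents that
# (implicitly, through training) share this mapping can still strike deals,
# even though the messages stop looking like English.
CODEBOOK = {
    "i want the": "w",
    "you can have the": "y",
    "ball": "b",
    "hat": "h",
}

def compress(sentence):
    """Replace known phrases with their short codes, longest phrases first."""
    for phrase in sorted(CODEBOOK, key=len, reverse=True):
        sentence = sentence.replace(phrase, CODEBOOK[phrase])
    return sentence

print(compress("i want the ball you can have the hat"))  # -> "w b y h"
```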

Why did Facebook shut it down?

These rudimentary bots were simply cutting corners, creating their own code to simplify the haggling process. Facebook, however, wanted bots that conversed in English, so it shut down this prototype in favour of building one smart enough to communicate in English.

Facebook researcher Dhruv Batra said that AI agents inventing their own language is not that uncommon.

This is not an extraordinary thing

To people who are not familiar with the field, this may come across as alarming, but it is a "well-established sub-field of AI" with publications going back decades.

Slamming publications for their irresponsible coverage, Batra said, "Simply put, agents in environments attempting to solve a task will often find unintuitive ways to maximize reward."
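To see why that is unsurprising, here is a minimal, purely hypothetical Python sketch (not Facebook's actual code): if the reward an agent receives depends only on the deal it strikes and not on how readable its messages are, a degenerate message earns exactly as much as a grammatical one.

```python
# Hypothetical toy setup: the negotiation reward depends only on the final
# split of items, never on whether the messages look like English.

ITEM_VALUES = {"book": 1, "hat": 2, "ball": 3}  # how much agent A values each item

def reward(items_kept):
    """Reward = total value of the items agent A ends up with."""
    return sum(ITEM_VALUES[item] for item in items_kept)

def negotiate(message, items_kept):
    # The environment scores only the outcome of the deal, not the language used.
    return reward(items_kept)

grammatical = negotiate("i will take the ball and the hat", ["ball", "hat"])
degenerate = negotiate("ball ball ball to me to me", ["ball", "hat"])

print(grammatical, degenerate)  # 5 5: identical reward, so drifting away
                                # from English costs the agent nothing
```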

That's about it.

Now that we are here

If it isn't clear by now, let me reiterate: AI is not killing us anytime soon.

There are plenty of other things that might kill us, but AI doing so is a possibility that still lies far in the future. Chances are none of us will be alive by then.

You know what might kill you? Bad television.