Back in February, OpenAI, the AI lab co-founded by Elon Musk, announced it had built an improved language-processing algorithm that could generate full texts, complete with nuance and context. However, the company declined to release the full model, saying the system was so good that it could easily be leveraged to spread fake news. Now, thanks to some creative engineers, we have a way to try it. Here's how.

OpenAI, also known for its famous Dota 2 bot, developed the algorithm, called GPT-2, to predict text. The system was trained on some 8 million web pages, which it uses to predict the next word and write full paragraphs of content. It is said to be far more coherent than other predictive algorithms developed to date. OpenAI withheld the full model to prevent it from being misused to generate fake news and propaganda. However, machine learning engineer Adam D King used the publicly available medium-sized version of the model to offer a watered-down version of GPT-2 that anyone can try. It is available at 'TalkToTransformer.com'.

While the site King created can answer questions prefixed with 'Q:' and write full-blown essays, to-do lists, and screenplays, it is far from perfect. We tried the system and found that the results are often incoherent. Although it can recognize a huge variety of inputs, including characters like Harry Potter, the output remains inconsistent, with characters disappearing and the conversation going off-track. Since the model behind the website is a smaller version of the one OpenAI claims to be extraordinary, such imperfections are to be expected. We don't know if the full version will ever come out, but the existence of TalkToTransformer.com certainly makes it easier for the public to test the system and assess its potential advantages and risks.
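At its core, GPT-2 does one simple thing in a loop: given the words so far, it predicts which word is most likely to come next, appends it, and repeats. The toy sketch below (a hypothetical bigram model written for illustration, not OpenAI's code) shows that generate-one-word-at-a-time loop on a tiny corpus:

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, which words follow it in the corpus."""
    words = text.split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def generate(followers, start, length=8):
    """Repeatedly predict the most likely next word and append it."""
    out = [start]
    for _ in range(length):
        candidates = followers.get(out[-1])
        if not candidates:
            break  # no known continuation for this word
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

corpus = "the model predicts the next word and the next word after that"
model = train_bigram_model(corpus)
print(generate(model, "the", length=4))  # → the next word and the
```

Where this toy looks up word counts, GPT-2 consults a neural network with up to 1.5 billion parameters trained on those 8 million web pages, which is why its continuations stay coherent over whole paragraphs rather than a few words.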