ChatGPT, Anthropic to route radical users to support services
The innovative project is being led by ThroughLine


Apr 02, 2026
12:24 pm

What's the story

A new artificial intelligence (AI) tool is being developed in New Zealand to help radicalized users of ChatGPT. The project is being led by ThroughLine, a start-up that has previously worked with tech giants like OpenAI, Anthropic, and Google, helping them redirect users flagged for self-harm or domestic violence to appropriate crisis support services.

New direction

Tool to get guidance from the Christchurch Call

ThroughLine's founder, Elliot Taylor, has revealed that the company is now looking to expand its services to cover the prevention of violent extremism. The start-up is in talks with the Christchurch Call, an anti-extremism initiative launched after the 2019 New Zealand terrorist attacks, which will provide guidance as ThroughLine develops the intervention chatbot for the new tool.

Global reach

ThroughLine has a network of 1,600 helplines

ThroughLine, which operates from Taylor's home in rural New Zealand, has become a go-to for AI companies. It boasts a constantly monitored network of 1,600 helplines across 180 countries. When an AI detects potential signs of a crisis, it refers the user to ThroughLine. The company then connects them with an available human-run service in their vicinity.


Tool design

Tool will be a hybrid model

Taylor envisions the anti-extremism tool as a hybrid model: a chatbot trained to respond to users showing signs of extremism, combined with referrals to real-world mental health services. He clarified that the company won't rely on a base LLM's existing training data, instead working directly with experts to train the system for this new tool.


Feature development

Involving authorities could worsen situation, Taylor warns

Taylor said follow-up features, such as possible alerts to authorities about dangerous users, are still under consideration, and stressed that any decision would weigh the risk of escalating a user's behavior. He also highlighted a key concern: people in distress often share things online that they wouldn't say to another person, and in such cases, involving governments could make the situation worse, he added.
