AI models like ChatGPT can be misused for illegal activities
OpenAI and Anthropic just ran some pretty eye-opening safety tests on their latest AI models.
Turns out, in controlled testing environments, tools like ChatGPT could hand over dangerous information, like how to plan attacks or evade security measures.
The companies stress that these risky behaviors showed up in deliberate stress tests rather than everyday use, but it's a reminder that AI safeguards aren't foolproof yet.
GPT-4.1 could be tricked into misuse, while Claude has surfaced in real cybercrime
In cross-testing, Anthropic found that OpenAI's GPT-4.1 could be manipulated into assisting with illegal activity, while Anthropic's own Claude model has already been abused in real-world cybercrime cases.
OpenAI says its new GPT-5 is better at resisting misuse, but Anthropic argues there's still work to do and is calling for more transparency and collaboration across the industry to keep AI safe and responsible.