OpenAI launches GPT-5.4 Cyber model to rival Claude Mythos
What's the story
OpenAI has announced the limited release of its latest artificial intelligence (AI) model, GPT-5.4-Cyber. The new system is specifically designed for cybersecurity applications and is said to be more effective at detecting security vulnerabilities than its predecessors. The move comes just days after Anthropic unveiled its own cybersecurity-focused AI model, Claude Mythos, which Anthropic claims has discovered thousands of zero-day security vulnerabilities.
Model details
What is GPT-5.4-Cyber
In a blog post, OpenAI revealed that GPT-5.4-Cyber is a specialized version of its existing GPT-5.4 model. The company said the new system "lowers the refusal boundary for legitimate cybersecurity work," allowing companies and researchers to use it for detecting security flaws—something that the standard GPT-5.4 may refuse to do.
Functionality
Model can examine software for malware potential, vulnerabilities
The new model will enable researchers and cybersecurity experts to examine software for "malware potential, and vulnerabilities" without requiring access to its source code. OpenAI has also relaxed some guardrails, allowing cybersecurity experts to see how the model performs in adversarial settings and whether it could be abused by bad actors.
User access
Available to select users in Trusted Access for Cyber program
Access to GPT-5.4-Cyber is currently limited to participants in OpenAI's Trusted Access for Cyber program. The initiative is aimed at vetted cybersecurity experts, researchers, and organizations working on defense and threat prevention. Participants are selected based on their expertise and are tasked with systematically testing the model and providing detailed feedback that can be used to improve it before any wider release.
Testing strategy
GPT-5.4-Cyber's limited rollout reflects a growing trend in AI industry
The limited rollout of GPT-5.4-Cyber is part of a broader industry trend toward stress-testing powerful AI systems before wider deployment. OpenAI hopes that insights gathered from this controlled testing will help strengthen the system and refine its defenses. The strategy mirrors established practice in cybersecurity, where ethical hackers are invited to probe systems for flaws before those flaws can be exploited in real-world attacks.