OpenAI's Codex can autonomously handle your organization's cybersecurity
OpenAI just dropped GPT-5.3-Codex, a new AI model built to handle cybersecurity on its own.
It's already impressing experts by spotting vulnerabilities, running attack simulations, and writing fixes; OpenAI also claims it cuts false alarms by 40% compared with older tools.
Access to GPT-5.3-Codex's advanced cybersecurity capabilities is currently managed through a Trusted Access pilot: identity verification for individuals, enterprise enrollment via OpenAI reps, or invite-only researcher participation.
The model can spot hacking attempts in real time
GPT-5.3-Codex was trained on millions of adversarial prompts, so it knows when to refuse unsafe tasks and can spot sneaky hacking attempts in real time.
You can set how much freedom the model has, from read-only mode up to full access, and choose whether it gets internet access.
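As a rough illustration, a permissions setup along these lines could look like the sketch below. The file name and keys mirror existing Codex CLI configuration conventions, but treat them as assumptions for illustration, not confirmed settings for GPT-5.3-Codex.

```toml
# ~/.codex/config.toml -- illustrative sketch, not official GPT-5.3-Codex settings

# How much freedom the agent has on disk:
#   "read-only"          -> it can look but not change anything
#   "workspace-write"    -> it can edit files inside the project folder
#   "danger-full-access" -> no sandbox at all
sandbox_mode = "workspace-write"

# Whether the sandboxed agent is allowed to reach the internet
[sandbox_workspace_write]
network_access = false
```

The idea is that you dial up autonomy only as far as the task requires: a code-review job can stay read-only and offline, while an attack simulation might need write access and network reach.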
Access to Codex is currently being managed through a pilot
If you're a paid ChatGPT user, you can try out Codex for development projects.
For high-risk cybersecurity use, access to the model's advanced capabilities is being gated through a Trusted Access process; OpenAI has also pledged $10 million in API credits as part of a Cybersecurity Grant Program to support qualifying teams.
This one's definitely aimed at the big leagues!