OpenAI offers $25,000 bio bug bounty for bypassing GPT-5.5 filters
OpenAI has kicked off a bio bug bounty challenge, offering $25,000 to vetted security researchers who can find a way around the safety filters in its latest AI model, GPT-5.5.
The catch? You have to come up with a prompt that gets the model to answer five biosafety questions without tripping any moderation alarms, all from a fresh chat session.
The program runs from April 23 to June 22, with extra testing until July 27.
Select biosecurity and AI security invitees
Right now, only select biosecurity and AI security researchers using Codex Desktop are invited to take part.
Only a full bypass of all five questions earns the top prize, though partial successes may earn smaller rewards.
All discoveries stay confidential under a nondisclosure agreement.
This move follows a broader industry trend of red-teaming AI, that is, stress-testing models for weaknesses, to make future systems safer and more reliable for everyone.