OpenAI's new Codex model is all about speed
OpenAI just rolled out GPT-5.3-Codex-Spark, its fastest Codex model yet, running on Cerebras hardware.
It's the first big result of OpenAI's partnership with Cerebras, and the focus is squarely on making AI tools respond faster and feel smoother to use.
Spark is in research preview for ChatGPT Pro users
Spark runs on Cerebras's new Wafer Scale Engine 3 chip (yes, that's 4 trillion transistors) and is built for real-time collaboration and quick prototyping: less lag, more getting things done.
Right now, it's in research preview for ChatGPT Pro users.
Meanwhile, Cerebras is making waves of its own, landing $1 billion in fresh funding as it gears up for an IPO.