NVIDIA's Groq-powered LPX platform set to challenge Google TPUs
NVIDIA is set to roll out its LPX inference platform at GTC 2026, powered by Groq's Language Processing Units (LPUs), the result of a $20 billion licensing deal.
These chips are designed for lightning-fast AI responses, with standout specs like hundreds of megabytes of on-chip SRAM and 80 TB/s of memory bandwidth.
The upgraded version shown at GTC packs 256 LPUs, liquid cooling, and advanced Q-glass PCBs, making it a powerhouse for AI inference workloads.
Groq has said the LPX platform can run popular models significantly faster than GPUs, with vendor materials citing up to 10x speed or efficiency gains.
NVIDIA isn't just launching new hardware: it's also investing $30 billion in OpenAI and bringing Groq founder Jonathan Ross on board as chief software architect.
With competition heating up from Google TPUs, AWS Trainium, and others, NVIDIA aims to pair its new LPX chips with Rubin GPUs to stay ahead in the AI race.