To paraphrase some lyrics from my misspent youth pining for a Hot Topic, ‘this ain’t a scene, it’s a goddamn AI arms race’. The preview of DeepSeek V4 is now live, and given that the AI giant was reportedly not so keen to grant Nvidia and AMD early access to the new model, the American AI industry has been equally keen to outpace China-based businesses any way it can. Case in point, Nvidia is really pushing OpenAI’s GPT-5.5.
DeepSeek wouldn’t give Nvidia the sneak peek, so 10,000 of its employees got an early look at OpenAI’s latest frontier model instead. Specifically, the company went for a widespread rollout of Codex, OpenAI’s agentic coding application powered by GPT-5.5. This has apparently resulted in big efficiency wins. “Debugging cycles that once stretched across days are closing in hours,” says Nvidia.
As with anything agentic, security was top of mind for the rollout. So, to “ensure maximum security and auditability” (and presumably to stop Codex from getting into anything it shouldn’t), all participating Nvidia employees were given a cloud-based virtual machine to run the AI agent from. From there, Codex can read company data but can’t directly edit it (or indeed, delete it). To oversimplify, they locked the bot in a virtual plexiglass box.
The blog post also surfaces similarly gushing praise from employees, with choice quotes calling Codex’s results “mind-blowing” and “life-changing”. Codex was rolled out across plenty of company departments besides engineering and development, with folks working in “product, legal, marketing, finance, sales, HR, and operations” also getting to grips with the application (personally, I would love to hear the honest Codex thoughts of a grumpy office admin).
“We tried a new thing with NVIDIA to roll out Codex across a whole company and it was awesome to see it work. Let us know if you’d like to do it at your company!” (April 23, 2026)
Such enthusiasm from Nvidia is no surprise, though. For one thing, the two companies have an ongoing relationship to the tune of billions (though Nvidia has more recently scaled down what was originally a $100 billion investment in OpenAI to a slightly less eye-watering $30 billion).
Second, GPT-5.5 runs on Nvidia GB200 NVL72 rack-scale systems. This set-up is apparently “capable of delivering 35x lower cost per million tokens and 50x higher token output per second per megawatt compared with prior-generation systems,” making it an economically appealing enterprise model. Company head Jensen Huang quipped to OpenAI CEO Sam Altman over email, “Fire up those Blackwells. We need more tokens!”