NVIDIA GTC Showcases Virtual Worlds Powering the Physical AI Era



Editor’s note: This post is part of Into the Omniverse, a series focused on how developers, 3D practitioners, and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.

NVIDIA GTC last week showcased a turning point in physical AI: Robots, vehicles and factories are scaling from single use cases and isolated deployments to sophisticated enterprise workloads across industries. 

At the center of this shift are new frontier models for physical AI, including NVIDIA Cosmos 3, NVIDIA Isaac GR00T N1.7 and NVIDIA Alpamayo 1.5. 

NVIDIA also released the NVIDIA Physical AI Data Factory Blueprint, designed to push the state of the art in world modeling, humanoid skills and autonomous driving, as well as the NVIDIA Omniverse DSX Blueprint for AI factory digital twin simulation.

Open source agentic frameworks such as OpenClaw extend the AI stack all the way to operations — enabling long‑running “claws” that use tools, memory and messaging interfaces to orchestrate workflows, manage data pipelines and execute tasks autonomously on dedicated machines. 

“With NVIDIA and the broader ecosystem, we’re building the claws and guardrails that let anyone create powerful, secure AI assistants,” said Peter Steinberger, creator of OpenClaw, in an NVIDIA press release from GTC. 

OpenUSD is a driving force behind the scalability of physical AI — providing a common, scene‑description language that lets teams bring computer-aided design (CAD) data, simulation assets and real‑world telemetry into a shared, physically accurate view of the world. 

Simulating the AI Factory Before It’s Built

Modern AI factories are complex — spanning thermals, power grids, network load and mechanical systems. Simulating these interdependent systems before construction makes it much easier to deliver a factory on time and on budget.

To tackle this, NVIDIA introduced the Omniverse DSX Blueprint at GTC, a reference architecture that unifies simulation across every layer of an AI factory through a single digital twin. This enables operators to optimize performance and efficiency before a rack is installed in the real world.

Compute Is Data: Real-World Data Is No Longer the Moat

Real-world data used to function as a moat for physical AI — but it doesn’t scale. The real world is messy, unpredictable and full of edge cases, and the pipelines to process, simulate and evaluate data are fragmented. The bottleneck isn’t just data — it’s the entire data factory.

To help address this, NVIDIA introduced at GTC its Physical AI Data Factory Blueprint, an open reference architecture that transforms compute into large-scale, high-quality training data. Built on NVIDIA Cosmos open world foundation models and the NVIDIA OSMO operator, it unifies data curation, augmentation and evaluation into a single pipeline, enabling developers to generate diverse, long-tail datasets from limited real-world inputs.

Leading physical AI developers including FieldAI, Hexagon Robotics, Linker Vision, Milestone Systems, Skild AI and Teradyne Robotics are already tapping the blueprint to speed up robotics projects, vision AI agents and autonomous vehicle programs.

Microsoft Azure and Nebius are the first cloud platforms to offer the blueprint, turning world-scale compute into turnkey data production engines.

“Together with cloud leaders, we’re providing a new kind of agentic engine that transforms compute into the high-quality data required to bring the next generation of autonomous systems and robots to life,” said Rev Lebaredian, vice president of Omniverse and simulation technologies at NVIDIA, in a press release. “In this new era, compute is data.”

From OpenUSD to Reality: Seamless Design to Deployment

Converting CAD files to OpenUSD is a critical step in the physical AI pipeline — transforming engineering data into simulation-ready assets that developers can use to build, test and validate robots in physically accurate virtual environments. 

Using tools like the NVIDIA Omniverse Kit software development kit and NVIDIA Isaac Sim, teams can optimize and enrich 3D data for real-time rendering, simulation and collaborative workflows.  
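To make the target format concrete, the sketch below writes a minimal OpenUSD text layer (.usda) by hand. The prim names, units and metadata are invented for illustration; real pipelines author and enrich these layers with tools like Omniverse Kit or the pxr Python API rather than writing them manually.

```python
# Sketch: author a tiny OpenUSD (.usda) text layer to show the
# scene-description format CAD data is converted into. All names and
# values here are hypothetical; production conversion is done with
# Omniverse Kit or the pxr API (Usd.Stage), not by hand.
from pathlib import Path

usda = """#usda 1.0
(
    defaultPrim = "GripperBase"
    metersPerUnit = 0.01
    upAxis = "Z"
)

def Xform "GripperBase" (
    kind = "component"
)
{
    def Mesh "Housing" (
        doc = "Geometry converted from CAD; physics and materials are layered on for simulation."
    )
    {
    }
}
"""

path = Path("gripper_base.usda")
path.write_text(usda)
print(path.read_text().splitlines()[0])  # the usda header line
```

The layer metadata (`defaultPrim`, `upAxis`, `metersPerUnit`) is what lets downstream tools like Isaac Sim interpret the asset consistently across teams.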

Companies including FANUC and Fauna Robotics are using this seamless CAD-to-OpenUSD workflow to speed up robotic system design and validation.

Transforming Manufacturing and Logistics Through Industrial Digital Twins

“Factories themselves are now robotic systems,” Lebaredian said during his special address on digital twins and simulation at GTC. 

All factories are born in simulation. The NVIDIA Mega Omniverse Blueprint provides enterprises with a reference architecture to design, test and optimize robot fleets and AI agents in a physically accurate facility digital twin before a single robot is deployed on the floor. 

KION, working with Accenture and Siemens, is using this blueprint to build large-scale warehouse digital twins that train and test fleets of NVIDIA Jetson-based autonomous forklifts for GXO, the world’s largest pure-play contract logistics provider. 

Physical AI Steps From Simulation to the Real World

NVIDIA is partnering with the global robotics ecosystem — including leading robot brain developers, industrial robot giants and humanoid pioneers — to enhance production-level physical AI. 

ABB Robotics, FANUC, KUKA and Yaskawa, which have a combined global installed base of over 2 million robots, are using NVIDIA Omniverse libraries and NVIDIA Isaac simulation frameworks to validate complex robot applications and production lines through physically accurate digital twins. These companies have also integrated NVIDIA Jetson modules into their controllers to enable real-time AI inference. 

Robot development starts with the robot brains, which is why leading developers including FieldAI and Skild AI are building theirs using NVIDIA Cosmos world models for data generation and Isaac simulation frameworks to validate policies in simulation. 

Meanwhile, Generalist AI is using NVIDIA Cosmos to explore generating synthetic data. This combination allows robots to become proficient in a wide range of tasks — from supply chain monitoring to food delivery — at an exceptional pace. 

Read all of NVIDIA’s announcements from GTC on this online press kit and watch the keynote replay. Catch up on all Physical AI Days sessions from GTC and watch the developer livestream replay.

NVIDIA GTC 2026: Live Updates on What’s Next in AI



Wednesday, March 18, 5:30 p.m. PT 🔗

20 Years of CUDA: Honoring the Architects of the Accelerated Age 

What began in 2006 as a bold parallel computing bet has evolved into the foundational heartbeat of modern science and AI. 

At GTC, NVIDIA is marking two decades of CUDA — representing the efforts of over 6 million developers innovating across every layer of the computing stack. Today, it serves as a generational bridge between the pioneers who wrote the first kernels and the next wave of builders deploying trillion-parameter AI models.

Led by NVIDIA CUDA Architect Stephen Jones, a panel at GTC Wednesday featured a group of researchers and engineers from Jump Trading, Meta Superintelligence Labs and NVIDIA who highlighted the decades of innovation behind CUDA, how it helps developers solve some of the world’s most complex problems — and how systems like the NVIDIA DGX Spark desktop AI supercomputer will enable the next generation of CUDA developers.

The group shared memories of the early days of CUDA — when “nobody wanted GPUs,” said Paulius Micikevicius, a software engineer at Meta Superintelligence Labs. “We had to go and beg them to consider using GPUs.”

During that time, Wen-Mei Hwu, senior distinguished research scientist and senior research director at NVIDIA, then a professor at the University of Illinois Urbana-Champaign, decided to build a 200-GPU system in two months with a group of grad students.

“A couple of weeks later, 200 GPU boards arrived, and power supply and everything — but there’s no chassis. So we ended up building wood frames for each of these boards … and we ran the Green500 [benchmark] and we got No. 3,” Hwu said. “That was the moment I realized that the energy efficiency of GPUs has incredible potential.” 

As the scale of accelerated computing has shifted to rack-scale systems and AI factories, the panelists see desktop AI systems like DGX Spark as a new way forward for prototyping and early development. 

“As long as you have that capability to do that initial exploration and something that fits on your desk or your lap, that’s the critical thing,” said Kate Clark, distinguished devtech engineer at NVIDIA. “I don’t see that going anywhere anytime soon. We’ll always have CUDA everywhere.”

Monday, March 16, 1:30 p.m. PT 🔗

NVIDIA cuDF and cuVS Adopted by World’s Leading Data Platforms, Fueling Modern Enterprise Data Processing

Enterprises are generating hundreds of zettabytes each year, and organizations are racing to turn that information into insights. NVIDIA cuDF and cuVS — accelerated data libraries built on NVIDIA CUDA‑X — are being adopted by data platforms across industries to deliver up to 5x faster performance while reducing costs for structured and unstructured data processing. 

Integrated with the world’s most widely used open source data engines — downloaded over 200 million times monthly by developers — these libraries are harnessed across enterprise data platforms, databases and data lakes. This helps organizations accelerate innovation, develop more accurate models and process more data while managing costs.

For structured data, NVIDIA cuDF accelerates open source data processing engines such as Apache Spark, Presto, DuckDB, Polars and Velox, delivering up to 5x faster processing compared with CPU-only deployments. 
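To make the workload concrete, the sketch below runs the kind of groupby aggregation these engines execute at scale — in pure Python, so the computation is visible. The column names and figures are invented; with cuDF the same reduction is a drop-in, pandas-style one-liner.

```python
# Pure-Python sketch of a groupby-sum, the kind of structured-data
# reduction cuDF accelerates on GPUs. With cuDF the equivalent is a
# drop-in pandas-style call, e.g.:
#   cudf.DataFrame(rows).groupby("region")["sales"].sum()
# Data below is made up for illustration.
from collections import defaultdict

rows = [
    {"region": "east", "sales": 120.0},
    {"region": "west", "sales": 80.0},
    {"region": "east", "sales": 30.0},
]

totals = defaultdict(float)
for row in rows:
    totals[row["region"]] += row["sales"]  # accumulate per group

print(dict(totals))  # {'east': 150.0, 'west': 80.0}
```

The speedups come from running millions of such per-group accumulations in parallel across GPU threads instead of row by row on a CPU.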

For unstructured data — which represents 80% of today’s enterprise data and is growing rapidly — NVIDIA cuVS accelerates leading engines including FAISS, Amazon OpenSearch Service and Milvus. This helps agents and applications extract context, facts and recommendations from vast stores of text, images and video in a fraction of the time.
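Conceptually, the retrieval step cuVS accelerates is nearest-neighbor search over embedding vectors. The toy brute-force version below uses cosine similarity over made-up three-dimensional vectors; engines like FAISS and Milvus instead build GPU-accelerated approximate indexes over millions of high-dimensional embeddings.

```python
# Toy brute-force vector search illustrating what cuVS-accelerated
# engines do: find the stored embedding closest to a query embedding.
# Vectors and document names are invented for illustration.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

corpus = {
    "doc_cats": [0.9, 0.1, 0.0],
    "doc_dogs": [0.4, 0.5, 0.3],
    "doc_cars": [0.0, 0.1, 0.9],
}
query = [0.85, 0.15, 0.05]

# Exhaustive scan; production systems use approximate indexes instead.
best = max(corpus, key=lambda name: cosine(query, corpus[name]))
print(best)  # doc_cats
```

Exhaustive scans like this grow linearly with corpus size, which is why production engines trade a little recall for approximate indexes that cuVS accelerates on GPUs.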

Powering Enterprise Data Processing Platforms

Google Cloud integrates NVIDIA cuDF to accelerate Apache Spark within Dataproc, and cuDF can be easily used within Google Kubernetes Engine (GKE) to cut processing times for massive ETL jobs from hours to seconds while lowering compute costs. 

At Snap, which serves more than 946 million active users, NVIDIA cuDF on GKE cut daily data processing costs by 76%. This enables 10 petabytes of data to be analyzed within a three-hour window — saving millions of dollars.

“Our collaboration with NVIDIA and Google Cloud helps us innovate faster for more than a billion Snapchatters worldwide,” said Saral Jain, chief information officer of Snap. “By lowering data processing costs and scaling experiments across petabytes of data, we’re delivering AI-powered experiences more quickly and efficiently.”

IBM watsonx.data is a hybrid, open data platform that includes open source analytics engines such as Apache Spark and Presto for structured data, and a vector engine based on OpenSearch. In early experiments with Nestlé’s Order-to-Cash mart, watsonx.data workloads accelerated with NVIDIA cuDF ran five times faster at 83% lower cost. 

“For a company that serves billions, data underpins decision making across our global operations,” said Chris Wright, chief information and digital officer of Nestlé. “Working with IBM and NVIDIA, a targeted proof of concept has demonstrated the ability to refresh global operations data in a few minutes and at reduced cost. Our focus now is on turning this capability into tangible business impact — further improving decision speed in areas such as manufacturing and warehousing, and scaling these capabilities across our enterprise.”

The Dell AI Data Platform with NVIDIA includes accelerated data engines that enable enterprises to quickly and securely activate their Dell AI Factory with AI-ready data. It features an Apache Spark-based processing engine accelerated with NVIDIA cuDF, delivering up to 3x faster performance, and an enterprise-grade vector database accelerated with NVIDIA cuVS, delivering up to 12x higher throughput for vector indexing compared with CPUs.

“Purpose-built for agentic AI, the Dell AI Data Platform with NVIDIA uses accelerated data processing engines to make multimodal data AI-ready in hours instead of days,” said Michael Dell, chairman and CEO of Dell Technologies.

Oracle announced that Oracle Private AI Services Container can greatly accelerate vector index creation in Oracle AI Database using NVIDIA cuVS, helping organizations speed up AI-enabled decisions with the latest information.

“Enterprise AI is moving from experimentation to production,” said Clay Magouyrk, CEO of Oracle. “Oracle AI Database with NVIDIA technology delivers AI-ready data within minutes, enabling applications that were previously impossible.” 

NVIDIA cuDF and cuVS are supported by leading enterprise data platforms including EDB Postgres AI, NetApp, Snowflake, Starburst and VAST Data — setting the foundation for the AI‑powered future of data processing.


Monday, March 16, 1:30 p.m. PT 🔗

NVIDIA Launches cuEST for Accelerated Quantum Chemistry in Semiconductor Design

NVIDIA this week launched NVIDIA cuEST, a new NVIDIA CUDA-X library that shifts electronic-structure calculations onto GPUs. Applied Materials, Samsung, Synopsys and TSMC are among the initial adopters.

A leading-edge chip now contains over 50 billion transistors. Engineering them requires answering fundamental physics questions at the atomic scale: how electrons bond, how they migrate and how they interact across films just a few atoms thick.

“As semiconductor scaling reaches the physical limits of materials, the industry requires a massive increase in computing performance to simulate the quantum mechanics of next-generation chip designs,” said Tim Costa, general manager for industrial and computational engineering at NVIDIA. “With NVIDIA cuEST, industry leaders can move past the quantum bottleneck and take high-fidelity chemical modeling directly into production to accelerate semiconductor innovation.”

Industry Impact

  • Applied Materials: Applied Materials uses cuEST-accelerated density functional theory (DFT) to model challenging structures, predict material properties and study reaction pathways.
  • Samsung: Samsung integrated cuEST into its internal pipeline, which was already GPU-accelerated, delivering an additional up to 5x end-to-end speedup for key quantum-chemistry workloads.
  • Synopsys: Synopsys expanded its QuantumATK software with cuEST-powered Gaussian-basis DFT, accelerating simulations up to 30x for semiconductor workflows.
  • TSMC: TSMC uses cuEST’s accelerated quantum chemistry to advance processes for next-generation silicon design.

From the Lab to the Fab

The most common method for atomistic modeling is density functional theory. DFT offers a strong balance between accuracy and scalability, but its computational cost has limited its widespread use in industry, keeping most applications confined to research. With cuEST, NVIDIA makes high-accuracy quantum chemistry feasible at industrial scale and in real production workflows.

Historically, the industry has relied on CPU clusters to run these simulations, evaluating candidate materials, including gate dielectrics and interconnect metals, one batch at a time over hours or days.

cuEST provides optimized routines so GPUs can accelerate the core matrices of a Gaussian-basis DFT calculation, including overlap, kinetic energy, nuclear attraction, Coulomb and exchange-correlation. It also supports functional approximations ranging from standard generalized gradient approximation to hybrid functionals, allowing engineers to balance computational cost with accuracy.
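In Gaussian-basis DFT, those terms assemble the matrices of the Kohn-Sham problem solved at each self-consistent-field step. Schematically, in standard textbook notation (not cuEST-specific):

```latex
S_{\mu\nu} = \int \phi_\mu(\mathbf{r})\,\phi_\nu(\mathbf{r})\,d\mathbf{r},
\qquad
F_{\mu\nu} = T_{\mu\nu} + V^{\mathrm{ne}}_{\mu\nu} + J_{\mu\nu} + V^{\mathrm{xc}}_{\mu\nu},
\qquad
F\,C = S\,C\,\varepsilon
```

Here $S$ is the overlap matrix over basis functions $\phi_\mu$, and the Kohn-Sham matrix $F$ collects the kinetic ($T$), nuclear-attraction ($V^{\mathrm{ne}}$), Coulomb ($J$) and exchange-correlation ($V^{\mathrm{xc}}$) terms — the same matrices listed above — before the generalized eigenvalue problem $FC = SC\varepsilon$ is solved for the orbitals.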

NVIDIA’s goal for cuEST: moving high-fidelity material modeling from the lab to the fab.

Learn more about cuEST by visiting the NVIDIA demo booth and Synopsys’ booth at GTC, and dive deeper in the GTC session, “Next-Generation Discovery: Agentic AI for Science, AI-Driven Simulation and GPU-Accelerated Chemistry.”