NVIDIA GTC Showcases Virtual Worlds Powering the Physical AI Era



Editor’s note: This post is part of Into the Omniverse, a series focused on how developers, 3D practitioners, and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.

NVIDIA GTC last week showcased a turning point in physical AI: Robots, vehicles and factories are scaling from single use cases and isolated deployments to sophisticated enterprise workloads across industries. 

At the center of this shift are new frontier models for physical AI, including NVIDIA Cosmos 3, NVIDIA Isaac GR00T N1.7 and NVIDIA Alpamayo 1.5. 

NVIDIA also released the NVIDIA Physical AI Data Factory Blueprint, designed to push the state of the art in world modeling, humanoid skills and autonomous driving, as well as the NVIDIA Omniverse DSX Blueprint for AI factory digital twin simulation.

Open source agentic frameworks such as OpenClaw extend the AI stack all the way to operations — enabling long‑running “claws” that use tools, memory and messaging interfaces to orchestrate workflows, manage data pipelines and execute tasks autonomously on dedicated machines. 

“With NVIDIA and the broader ecosystem, we’re building the claws and guardrails that let anyone create powerful, secure AI assistants,” said Peter Steinberger, creator of OpenClaw, in an NVIDIA press release from GTC. 

OpenUSD is a driving force behind the scalability of physical AI — providing a common, scene‑description language that lets teams bring computer-aided design (CAD) data, simulation assets and real‑world telemetry into a shared, physically accurate view of the world. 

Simulating the AI Factory Before It’s Built

Modern AI factories are complex — spanning thermals, power grids, network load and mechanical systems. Building them on time and on budget becomes much easier when using simulation technology. 

To tackle this, NVIDIA introduced the Omniverse DSX Blueprint at GTC, a reference architecture that unifies simulation across every layer of an AI factory through a single digital twin. This enables operators to optimize performance and efficiency before a rack is installed in the real world.

Compute Is Data: Real-World Data Is No Longer the Moat

Real-world data used to function as a moat for physical AI — but it doesn’t scale. The real world is messy, unpredictable and full of edge cases, and the pipelines to process, simulate and evaluate data are fragmented. The bottleneck isn’t just data — it’s the entire data factory.

To help address this, NVIDIA introduced at GTC its Physical AI Data Factory Blueprint, an open reference architecture that transforms compute into large-scale, high-quality training data. Built on NVIDIA Cosmos open world foundation models and the NVIDIA OSMO operator, it unifies data curation, augmentation and evaluation into a single pipeline, enabling developers to generate diverse, long-tail datasets from limited real-world inputs.

Leading physical AI developers including FieldAI, Hexagon Robotics, Linker Vision, Milestone Systems, Skild AI and Teradyne Robotics are already tapping the blueprint to speed up robotics projects, vision AI agents and autonomous vehicle programs.

Microsoft Azure and Nebius are the first cloud platforms to offer the blueprint, turning world-scale compute into turnkey data production engines.

“Together with cloud leaders, we’re providing a new kind of agentic engine that transforms compute into the high-quality data required to bring the next generation of autonomous systems and robots to life,” said Rev Lebaredian, vice president of Omniverse and simulation technologies at NVIDIA, in a press release. “In this new era, compute is data.”

From OpenUSD to Reality: Seamless Design to Deployment

Converting CAD files to OpenUSD is a critical step in the physical AI pipeline — transforming engineering data into simulation-ready assets that developers can use to build, test and validate robots in physically accurate virtual environments. 

Using tools like the NVIDIA Omniverse Kit software development kit and NVIDIA Isaac Sim, teams can optimize and enrich 3D data for real-time rendering, simulation and collaborative workflows.  

Companies including FANUC and Fauna Robotics are using this seamless CAD-to-OpenUSD workflow to speed up robotic system design and validation.

Transforming Manufacturing and Logistics Through Industrial Digital Twins

“Factories themselves are now robotic systems,” Lebaredian said during his special address on digital twins and simulation at GTC. 

All factories are born in simulation. The NVIDIA Mega Omniverse Blueprint provides enterprises with a reference architecture to design, test and optimize robot fleets and AI agents in a physically accurate facility digital twin before a single robot is deployed on the floor. 

KION, working with Accenture and Siemens, is using this blueprint to build large-scale warehouse digital twins that train and test fleets of NVIDIA Jetson-based autonomous forklifts for GXO, the world’s largest pure-play contract logistics provider. 

Physical AI Steps From Simulation to the Real World

NVIDIA is partnering with the global robotics ecosystem — including leading robot brain developers, industrial robot giants and humanoid pioneers — to enhance production-level physical AI. 

ABB Robotics, FANUC, KUKA and Yaskawa, which have a combined global install base of over 2 million robots, are using NVIDIA Omniverse libraries and NVIDIA Isaac simulation frameworks to validate complex robot applications and production lines through physically accurate digital twins. These companies have also integrated NVIDIA Jetson modules into their controllers to enable real-time AI inference. 

Robot development starts with the robot brains, which is why leading developers including FieldAI and Skild AI are building theirs using NVIDIA Cosmos world models for data generation and Isaac simulation frameworks to validate policies in simulation. 

Meanwhile, Generalist AI is using NVIDIA Cosmos to explore generating synthetic data. This combination allows robots to become proficient in any task — from supply chain monitoring to food delivery — at an exceptional pace. 

Read all of NVIDIA’s announcements from GTC on this online press kit and watch the keynote replay. Catch up on all Physical AI Days sessions from GTC and watch the developer livestream replay.

Listen to this! The Nothing Ear (a) earbuds have dropped to only $59 during Amazon’s Big Spring Sale



You can almost always find a pair of cheap earbuds at some retailer or another, but finding ones that don’t sacrifice audio quality or overall design can be tougher. Right now, Amazon has cut the price of the Nothing Ear (a) earbuds by 46% as part of its Big Spring Sale, bringing the buds down to just $59 from their normal price of $109.

While these wireless earbuds are known for their low price tag, they still boast pretty solid audio quality, a comfortable fit with a stemmed, AirPods-like design, and a charging case that yields enough battery life that you’ll rarely have to go out of your way to charge them. In particular, we absolutely loved the Yellow configuration when we reviewed these, though the discount is also available for the earbuds in both Black and White.

You might skip this deal if you want the best wireless earbuds out there, especially when it comes to top-tier ANC and battery life; if you prefer earbuds that don’t come with a set of silicone ear tips; or if you’d rather go with wireless over-ear headphones than earbuds.

The Nothing Ear (a) earbuds offer a more affordable take on their bigger sibling, the Nothing Ear earbuds. Still, they come with a pretty good audio profile overall, active noise cancellation, and long-lasting battery life that would surprise most people given how low these go for.

When using the charging case between listens, these earbuds can offer up to 42.5 hours of playtime. On a single charge, the Ear (a) earbuds deliver up to 10 hours of playback without ANC, and a quick 10-minute top-up in the case replenishes around two hours of battery life.

What Do You Mean Glen Powell Is Fox McCloud in the ‘Super Mario Galaxy’ Movie



Just one day after the surprise reveal that the Super Mario Galaxy movie would be bringing Fox McCloud—of Star Fox fame—to the big screen for reasons unknown, we’ve now learned who’s playing him, and it’s somehow even weirder than the “Donald Glover is Yoshi” moment.

This morning, actor Glen Powell confirmed that he is the one bringing Fox McCloud to the big screen for the Super Mario Galaxy Movie, via a very cute video in which he is actually cosplaying as Fox McCloud.

More specifically, imagine if the success of the ’90s Mario movie had somehow led to a Star Fox adaptation—just in time for Star Fox 64—but, because we were still in that era of video game adaptation where people couldn’t quite trust the source material, Fox and all his friends were humans instead of anthropomorphic animals, still made to look vaguely like their gaming counterparts. The hair, the cosplay being surprisingly solid but also kind of oddly retro: it all fits that kind of vision. But it is 2026! And here we are, with Glen Powell’s Fox McCloud in a Mario movie.

Suffice it to say, this all makes it seem like Fox is going to have a bigger role than simply being a cameo. You don’t give a blink-and-you’ll-miss-it easter egg character a standalone poster, and you probably don’t cast one of Hollywood’s most popular leading men of the past couple of years as one, either. Is the Super Mario Galaxy movie really setting up some kind of Nintendo cinematic universe that’s going to climax in, well… some kind of melee? Or some kind of brawl? Or some kind of… well, it doesn’t quite work with Smash Bros. Ultimate. But you get what we mean.

Glen Powell is Fox McCloud. Anything is possible.

Want more io9 news? Check out when to expect the latest Marvel, Star Wars, and Star Trek releases, what’s next for the DC Universe on film and TV, and everything you need to know about the future of Doctor Who.

Slay the Spire 2 Mini-Review: Familiar Foundations, Fresh Co-Op Brilliance


Blog | Review

At first glance, Slay the Spire 2 in Early Access feels like a slightly expanded version of the original. You start with Ironclad again, and most of the launch roster features familiar faces, with only two of the five characters being new. Those new additions bring dramatically different playstyles, which helps a lot, but the overall experience can still feel a little too familiar early on. It can even seem somewhat basic when you compare it to the many games that have built upon the formula since the first game was released.

Slay the Spire 2 in Early Access on PC

That feeling starts to shift as you spend more time with it. You gradually unlock features that add more depth and variety to each run. New paths open up, relics introduce fresh ideas, and builds become far more interesting. The game also mixes in different level types alongside the standard progression, which helps keep things feeling varied. These additions make runs feel much fresher overall, though there are still features many players would like to see added. It’s still Early Access, though, so there’s plenty of room for that to happen.

The co-op mode is what really makes the game shine. It changes how you approach every run and adds a new layer of strategy. You and your teammates need to think carefully about how your characters and builds interact, which leads to some fantastic moments. You’re not just playing side by side – you’re actively supporting each other throughout. At rest sites, you can choose to heal a teammate instead of yourself, and some builds focus heavily on supporting others.

You can also share your block with another player, letting them go all-in on offence instead. This adds an entirely new dimension to the game, and each additional player deepens that complexity even further. It opens the door to creative strategies and leads to some memorable wins and losses along the way. If this is the game at its earliest stage, then Slay the Spire 2 has the potential to become something truly remarkable by the time it’s finished.


Jason Coles

Jason likes to focus on roguelikes and co-op games; in a dream world he’d make a living writing about Dark Souls. As well as being a writer he also does personal training and accounting and can occasionally be seen on other people’s streams. Being a big fan of fluffy things means he has two cats, both of whom refuse to let him sleep, but at least they are cute.

‘Realtime-with-pause is not dead,’ says lead designer of promising turn-based game Star Wars Zero Company


When it was announced that Baldur’s Gate 3 would have turn-based combat unlike its predecessors, I reacted like the little girl with the frog from The Cabin in the Woods: “The evil has been defeated!” Our long national nightmare was over. And by “nightmare” I mean games expecting us to control an entire party of characters in realtime, with the ability to pause and issue commands tacked on like a clumsy panacea.

Not everyone sees it the same way. Even the people making games with turn-based tactical combat like Star Wars Zero Company can’t be bothered disliking RTWP combat as vehemently as me. “Realtime-with-pause is not dead,” lead designer James Brawley told PC Gamer’s Ted Litchfield during his recent hands-on preview. “It will have its day. Someone will make something wonderful in that space, and it’ll take the world by storm again.”

The Friday Roundup – Bad Color Results and A.I. Storytelling


Man shocked at bad color grade due to monitor calibration

Your Color Grading Looks Wrong… Here’s Why

A while back now my wife started doing the final editing of some of her videos rather than having to wait around for me to get to it.

Anyway, that was fine ’cos, hey, it’s not like she was paying me!

Anyway one of the first hurdles we had to get over was that the videos she was uploading to YouTube had absolutely terrible color.

The workflow we had was that I would run the original footage through DaVinci Resolve making sure that the color and the audio were within correct parameters.

I would then pass it on to her for the editing part but to her, the color looked bad so she would then “correct” it.

That correction was the actual source of the problem because her monitor was displaying color incorrectly.

To fix that problem I had to get her to stop applying color correction to the video she received just because it looked wrong on her monitor.

The real problem here is that all monitors and phone screens display color differently, so I do two things.

First, I calibrate my monitor as best I can; second, I correct my colors using tools like those on the Resolve Color Page, so I don’t rely on how the footage looks to me.

I use those scopes and other tools to make sure the colors are correct independent of how they look on my monitor, or anyone else’s for that matter.
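Those scopes are, at heart, just statistics computed from the frame’s actual pixel values, which is why they don’t care what any particular monitor shows. As a rough illustration of that idea (not Resolve’s actual implementation; the tiny frame and the `channel_stats` helper below are made up for this sketch), here is a minimal Python example that judges channel levels numerically:

```python
# Illustrative sketch: judge color/levels numerically, not by eye.
# A video frame is just an array of pixel values; "scopes" summarize them.

def channel_stats(frame):
    """Per-channel min/mean/max for a frame given as a list of
    (r, g, b) tuples with 0-255 values."""
    stats = {}
    for i, name in enumerate(("r", "g", "b")):
        values = [px[i] for px in frame]
        stats[name] = {
            "min": min(values),
            "mean": sum(values) / len(values),
            "max": max(values),
        }
    return stats

# A tiny hypothetical "frame": mostly mid-gray with one hot red pixel.
frame = [(128, 128, 128)] * 3 + [(255, 0, 0)]
stats = channel_stats(frame)

# The red channel hits 255 (clipping) while green and blue stay lower --
# a numeric warning sign no matter how the image looks on any display.
print(stats["r"]["max"])   # 255
print(stats["g"]["max"])   # 128
```

Real scopes (waveform, RGB parade, vectorscope) are per-column or per-hue visualizations of the same underlying numbers, but the principle is identical: trust the measured values, not the display.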


Overview of AI Storytelling in PowerDirector 365

This is a demonstration of the new(ish) AI Storytelling module in PowerDirector 365.

First up, this is only available in the subscription version of PowerDirector, and using it costs credits, like many of the A.I. features in software these days, so there’s that.

It is an interesting deployment of A.I., and although I don’t see it as a total solution to creating a story, I think it could be useful as a starting point to at least get some ideas onto the timeline.


How to Record Video on PC (Screen Recording Tutorial)

These days, just about all consumer-level video editing programs come with some type of free screen recording function.

Most are accessible from within the program itself but in reality operate as standalone modules.

Back in the day this wasn’t the case; in fact, a good-quality screen recorder would set you back at least $100!

So here’s the kind of interesting part of the new breed of screen recorders that are packaged with editing software.

I test a lot of software to work out which ones I would recommend and of course check out the screen recorders while I am at it.

After having done that many, many times I have realized they all look almost exactly the same!

Sure, there are slight differences, but those differences are purely cosmetic, and what I suspect is that they are all the same licensed software from whoever built one of the originals.

Anyway, conspiracy theories aside, here’s a tutorial from the Movavi guys showing how to correctly set up your screen recorder to get the best results.


What’s New in Filmora 15.3 (Part 2): HDR Color Wheel, AI Effects, Nano Banana 2 & More!

This is Part 2 in the series covering the latest updates to Filmora 15.

Although they have been hitting the old A.I. button pretty hard, that’s certainly not the whole story!

There have been new features as well as some great upgrades to existing ones.

The HDR Color Wheel is a pro-level addition to what have become, over time, some excellent color correction and grading tools.


How to Use the New Relight Feature in Filmora 15

A week or two ago I posted a video highlighting some of the recent updates to Filmora 15.

One of the new features mentioned in that video was the Relight module, but there was no real demonstration of what it actually did.

So to fill that gap here’s a more complete explanation of the Relight feature covering what it does, how to access it and the available settings.


How to Make an Appeal Video for YouTube Inauthentic Content Demonetization

When A.I. content first started appearing on YouTube, the platform announced it would be treating it pretty much the same as Google was treating all A.I. content.

That policy was basically that they did not really care how the content was made, as long as that content served the purpose for which it was intended and was of reasonable quality.

That was all very well at the time for YouTube until the amount of A.I. generated content started going through the roof!

Their first step in reining the situation in was requiring creators to identify, on upload, exactly to what degree A.I. had been used in making the video.

That seemed fair enough at the time; however, the sheer volume of A.I. content of questionable quality has forced them into a more recent development.

Most likely through their own algorithm as well as A.I., they began a bit of a purge of channels creating such content, demonetizing them for “Inauthentic Content.”

As is usually the case with these automated actions a whole bunch of innocent parties got caught up in the mess and were incorrectly demonetized.

Initially, such channel owners had very little success appealing these decisions; recently, however, there seems to have been an easing of the situation.

Here’s a video from Jacky Nguyen showing how he managed to get through the appeal process successfully.

Fair Warning!

Very soon after Jacky posted the video above on dealing with an “Inauthentic Content” demonetization by YouTube, he posted the one you will find below.

Essentially it is a warning to slow down, wait and be patient if you want to deal with the situation because rushing in may result in the situation getting worse!


Shot Size for Beginners

One of the “hidden” factors in shooting video that is strangely ever-present is shot size.

By that we are referring to the amount of frame real estate the subject we are shooting occupies.

Very often we refer to shots as being wide, medium or closeup but the effect any of these shots has on the audience is the more important factor.

As newbies to shooting video we often tend to shoot based on what’s available, what is presented to us in a given circumstance.

The result of this is that often the shot itself doesn’t really convey the idea we had in the first place.

So here’s a rundown of the basic shot sizes with an explanation of what each one does with regards to audience perception or how it serves the video you are making.


Shot Angles for Beginners

Carrying on from the video above on shooting basics, in this one they cover “shot angle.”

This is another addition to the language of video that we all really need to understand.

As I have said before, I don’t expect amateurs to be religiously applying this information in their “How I grow Tomatoes” videos!

The reason I add it is so that we can all understand and be aware that certain shot angles and sizes will create an effect we may or may not want!


Why Sound Is 51% of This Scene (One Battle After Another)

This is just a very interesting scene-by-scene breakdown of the audio structure being created at a fully professional level.

Other than reiterating the need to address audio as an important part of video… not much to learn here really!




Discover more from The DIY Video Editor

Subscribe to get the latest posts sent to your email.

Judge Blocks Pentagon’s Attempt to Blacklist Anthropic


A federal judge on Thursday temporarily blocked the Trump administration from labeling Anthropic a “supply chain risk” and cutting off the artificial intelligence firm’s access to federal contracts.

US District Judge Rita Lin granted Anthropic’s request for a preliminary injunction, finding that the Trump administration’s “broad punitive measures” against the company “were likely unlawful” and could “cripple Anthropic.”

“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government,” Lin wrote in her ruling.

(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

The dispute centers on the Pentagon’s demand to use Anthropic’s Claude AI for “all lawful purposes,” while Anthropic wanted to prohibit the military from using it for mass domestic surveillance or for fully autonomous weapons systems. After Anthropic refused to meet the government’s demands, President Donald Trump and Secretary of Defense Pete Hegseth said they would declare the company a “supply chain risk,” prohibiting the use of its products in defense contract work. 

Anthropic responded with a lawsuit filed earlier this month in federal court challenging the designation, calling it an “unprecedented and unlawful” attack on the company’s right to free speech. 

Lin wrote that the administration’s measures don’t appear to reflect the government’s national security interests but rather seem punitive in nature.

“If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude. Instead, these measures appear designed to punish Anthropic,” Lin wrote.

Lin also delayed her order for one week to allow the Pentagon to seek a stay of the order.

Anthropic said in a statement that it was “grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”

The White House and Pentagon didn’t immediately respond to a request for comment.



Former Diablo devs left in limbo after $65,000 in fake Kickstarter pledges overwhelm new ARPG Darkhaven: “We are taking a few more days to consider the best course of action”



Former Diablo and Diablo 2 devs have been left panicked after a crush of fake Kickstarter pledges came for their eerie fantasy ARPG Darkhaven.

In a dev update on the crowdfunding website (h/t Phrasemaker), Diablo 2 artist and CEO of studio Moon Beast Productions Phil Shenk informs supporters, “We’ve verified that at least $65k in pledges were almost certainly insincere and won’t be collected. We’ve been in touch with Kickstarter, and they’ve given us the option to either keep the funds raised or cancel this campaign and refund all pledges.”

Google begins rolling out Search Live globally


Following a false start last week, Google has begun rolling out Search Live globally. The tool allows you to point your phone’s camera at an object or scene and ask questions about what you see in front of you. With today’s expansion, Google is making Search Live available in every location and language where it offers its AI Mode chatbot. With that, people in more than 200 countries and territories can use Search Live to get answers to their questions.

Behind the expansion is Google’s Gemini 3.1 Flash Live model. According to the company, the new AI system was designed to be natively multilingual, and capable of more natural conversations. It should also be more reliable and faster.

Separately from Search Live, Google is bringing Live Translate to iOS. Live Translate, if you need a reminder, allows you to put on a pair of headphones and get a real-time translation of what another person is saying. With today’s announcement, Google is also bringing the feature to more countries, including Germany, Italy, Spain, Japan and the UK, across both Android and iOS. All told, Live Translate can now understand more than 70 languages and work with any set of headphones. Neat.

Yakuza Kiwami 3 & Dark Ties Free Download (Deluxe Edition)


Yakuza Kiwami 3 & Dark Ties Direct Download

YAKUZA KIWAMI 3 is an extreme remake of the action-adventure beat ’em up Yakuza 3 featuring ex-yakuza Kazuma Kiryu’s fight to protect those he loves. DARK TIES is a brand-new game featuring Yoshitaka Mine from Yakuza 3 included as a separate action-packed experience.

A LEGEND IS REBORN AND A NEW LEGACY BEGINS.
YAKUZA KIWAMI 3

Continue the story of ex-yakuza Kazuma Kiryu as he fights to protect those he loves in an extreme remake of Yakuza 3 that evolves every aspect of the beloved game.

The bustling streets of Okinawa and Tokyo come to life in stunning detail with reimagined combat taking brutal brawling action to the next level. Added scenes deliver more depth and emotion to the beloved story with new and enhanced side experiences that immerse you in the world like never before, and more.

DARK TIES

Experience the brand-new tale of Yoshitaka Mine from Yakuza 3 in an included separate game. Having once led a successful startup company, he plunged himself into the dark world of the yakuza by choice after losing everything. Left with an empty heart, the pursuit to find true bonds drives him forward once again in a dramatic journey colored by exhilarating boxing-based combat, and a variety of engaging side experiences.

Two men will walk different paths that converge to shake the very foundations of fate.

Features and System Requirements:

  • Modern remake with upgraded combat, visuals, and reworked story moments.
  • Explores Okinawa and Kamurocho with dense side quests and classic mini-games.
  • Improved brawler mechanics plus new styles and smoother Heat actions.
  • Dark Ties adds a darker side story focused on intense, aggressive combat.
  • Bundle delivers both Kiryu’s drama and a grittier parallel narrative.


System Requirements

Minimum
OS: Windows 11
Processor: Intel Core i3-8100 @ 3.6GHz or AMD Ryzen 3 2300X @ 3.5GHz
Memory: 8 GB RAM
Graphics: NVIDIA GeForce GTX 1650 (4 GB), AMD Radeon RX 6400 (4 GB), or Intel Arc A380 (6 GB)
DirectX: Version 12
Storage: 58 GB available space
Support the game developers by purchasing the game on Steam

Installation Guide

Turn Off Your Antivirus Before Installing Any Game

1 :: Download Game
2 :: Extract Game
3 :: Launch The Game
4 :: Have Fun 🙂

This version of the game does not include a traditional crack. It can be played using a hypervisor-based method. Please make sure to read and follow the instructions provided in the included Crack File carefully.