NVIDIA and Partners Show That Software-Defined AI-RAN Is the Next Wireless Generation



AI-RAN is moving from lab to field, showing that a software-defined approach is the only viable way to build future AI-native wireless networks.

Ahead of Mobile World Congress (MWC), running March 2-5 in Barcelona, NVIDIA and Nokia announced new AI-RAN collaborations with top telecom operators across Europe, Asia and North America, powered by NVIDIA AI-RAN platforms. Industry pioneers T-Mobile U.S., SoftBank and Indosat Ooredoo Hutchison (IOH) passed implementation milestones, taking NVIDIA-powered AI-RAN outdoors and over the air.

New benchmarking results from partners like SynaXG showed that AI-RAN running on NVIDIA platforms delivers high-speed, carrier-grade performance — meaning extreme reliability — across multiple 5G spectrum bands. And over 20 AI-RAN Alliance demos built on NVIDIA platforms will be showcased at MWC, highlighting how AI is boosting 5G performance and efficiency, and unlocking new edge AI applications.

All of this represents momentum and convergence toward a common, software-defined foundation that will set the stage for secure, open and AI-native 6G systems.

AI-RAN Goes From Lab to Live

Top telecom operators and partners are using NVIDIA platforms to bring AI-RAN to commercial deployment. 

T-Mobile U.S. demonstrated concurrent AI and RAN processing on the NVIDIA AI-RAN platform using Nokia’s CUDA-accelerated RAN software. In T-Mobile’s over-the-air field environment, Nokia’s AirScale massive multiple-input and multiple-output (MIMO) radio in the 3.7GHz band supported commercial devices running applications like video streaming, generative AI and AI-powered video captioning, alongside 5G. 

SoftBank’s AITRAS live field trial achieved industry-first 16-layer massive MIMO using fully software-defined 5G running on NVIDIA’s AI-RAN platform, marking an important technical milestone toward AI-RAN commercialization. 

IOH has implemented software-defined 5G with Nokia’s vRAN software on NVIDIA AI-RAN platforms, moving from proof of concept to pre-commercial field validation. This milestone was showcased at MWC through Southeast Asia’s first AI-powered 5G call, where AI and network intelligence operated seamlessly to enable secure, real-time cross-border connectivity, including responsive remote control of a robotic dog over the live 5G network. This achievement demonstrates IOH’s readiness to scale AI-native network capabilities and bring intelligent connectivity to communities across Indonesia.

SynaXG demonstrated fully software-defined AI-RAN using NVIDIA AI Aerial — a suite of accelerated computing platforms, software libraries and tools to build, train, simulate and deploy AI-native wireless networks — running 4G and 5G in both sub-6GHz (FR1) and millimeter wave (FR2) spectrum bands, alongside agentic AI workloads, on a single NVIDIA GH200 server. This marks the world’s first implementation of AI-RAN on FR2 bands.

SynaXG’s setup activated 20 component carriers with both a centralized unit (CU) and distributed unit (DU) on one platform, achieving 36 Gbps of throughput with latency under 10 milliseconds. These breakthrough results highlight AI-RAN-based 5G performance as well as seamless orchestration between AI and RAN workloads.

Tripled Pace of AI-RAN Innovation

This year’s MWC will see triple the number of AI-RAN innovations over last year, with 26 out of 33 AI-RAN Alliance demos built using NVIDIA AI Aerial and a software-defined architecture.

Some of these demos include:

  • DeepSig is reinventing how devices “speak” to networks by letting AI learn a smarter signal format at both ends of the link — the communications channel that connects two devices. An AI‑native air interface jointly learns how to best encode and decode signals using neural techniques at the device and base station, removing pilot overheads and adapting to site‑specific channels. Early results on NVIDIA platforms show up to about 2x higher throughput and better spectral and energy efficiency from the same spectrum.
  • SUTD, NVIDIA and partners will show how robots and autonomous vehicles can distribute their “thinking” across the device, edge and cloud — bringing split-inferencing from concept to implementation. By deciding in real time where each AI task runs, the demos prove how AI-RAN can meet tight latency, privacy and coverage service-level agreements to scale physical AI and vision language models through the network edge.
  • zTouch Networks and partners built an AI-RAN orchestration blueprint showing how operators can safely share GPUs across AI and RAN workloads. By using NVIDIA Multi-Instance GPU technology, the blueprint steers resources in real time, maximizing GPU utilization and improving energy management while ensuring RAN quality of service. This is a key step for making multi-tenant AI-RAN solutions ready for commercial use, so operators can turn GPU capacity into revenue.
  • Northeastern University and SoftBank will demonstrate an AI switching solution for NVIDIA AI Aerial that flips seamlessly, without data loss, between AI and classical algorithms for channel estimation. By selecting the best processing approach in real time depending on conditions, it improves stability and throughput while proving AI can coexist with classical methods.
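The split-inferencing idea in the SUTD demo, deciding in real time where each AI task runs so latency and privacy service-level agreements are met, can be sketched as a simple placement policy. This is an illustrative sketch only: the tier latencies, task fields and function names are assumptions, not part of any NVIDIA or SUTD implementation.

```python
# Illustrative sketch: pick the compute tier for an inference task based on
# its latency budget and privacy constraint. All numbers are assumptions.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    max_latency_ms: float    # latency budget from the SLA
    privacy_sensitive: bool  # must the data stay on the device?

# Assumed round-trip latencies per tier; a real system would measure these.
TIER_LATENCY_MS = {"device": 5.0, "edge": 20.0, "cloud": 80.0}

def place(task: Task) -> str:
    """Pick the deepest tier (most compute) that still meets the constraints."""
    if task.privacy_sensitive:
        return "device"
    for tier in ("cloud", "edge", "device"):
        if TIER_LATENCY_MS[tier] <= task.max_latency_ms:
            return tier
    return "device"  # fall back to local processing

print(place(Task("obstacle-detection", max_latency_ms=10, privacy_sensitive=False)))  # device
print(place(Task("scene-captioning", max_latency_ms=100, privacy_sensitive=False)))   # cloud
```

In practice the decision would weigh live network conditions and GPU availability rather than static latency numbers, but the shape of the policy is the same: constraints first, then the richest tier that satisfies them.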

“AI-RAN is emerging as a unifying architecture for future radio networks,” said Alex Choi, chair of the AI-RAN Alliance. “By aligning operators, vendors and researchers around software-defined, GPU-accelerated architectures, we are boosting innovation, validating new concepts quickly and building the foundation for AI-native 6G, now.”

As intelligence moves into the physical world, autonomous systems such as robots and cars depend on AI-RAN networks to see, sense, reason and act.

Capgemini is working within Project ULTIMO, a Horizon Europe-funded initiative, to show how AI-RAN can support large-scale autonomous mobility services across European cities. Autonomous shuttles equipped with the NVIDIA Jetson Orin module process sensor data locally, while select video and telemetry streams are sent over 5G to agentic AI applications on NVIDIA AI-RAN servers. These workloads handle scene understanding, incident and safety detection, and accessibility insights at scale, while mission-critical 5G gets priority access to GPU resources.

A Growing Ecosystem

A growing ecosystem of partners is forming around NVIDIA-powered AI-RAN platforms, enabling operators to choose from a range of deployment solutions. NVIDIA Aerial RAN Computer (ARC) platforms harness the NVIDIA Grace CPU and a variety of GPUs, providing a high-performance, energy-efficient compute foundation for AI-native RAN infrastructure.

  • Quanta Cloud Technology (QCT) is announcing commercial off-the-shelf AI-RAN products that support NVIDIA ARC platforms and Nokia software, giving operators standardized building blocks for AI-RAN.
  • Supermicro is extending support across the full NVIDIA AI-RAN portfolio, including NVIDIA ARC-Pro and NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, as well as ARC-Compact systems with Nokia software.
  • WNC has introduced a new AI-optimized indoor and outdoor open radio unit, integrated with NVIDIA AI Aerial Testbed and NVIDIA ARC platforms, that supports 5GA and 6G use cases.
  • Eridan has launched a 4T4R O-RU along with its 2T2R O-RU, which was integrated with NVIDIA AI Aerial, and a DU running on the NVIDIA DGX Spark desktop supercomputer, combining spectrally efficient radios with GPU-based baseband processing to create a powerful and portable outdoor base station.
  • LITEON has completed integration of its sub-6 GHz and millimeter wave radio units with NVIDIA AI Aerial, and has expanded its collaboration with ecosystem partners like Supermicro and SynaXG to accelerate AI-RAN commercialization.

Laying the Foundation for Open, Secure, AI-Native 6G

NVIDIA’s latest State of AI in Telecom report showed that the industry is stepping up AI-native RAN and 6G investments — signaling a major intercept ahead of the traditional 6G deployment cycle, with 77% of respondents anticipating a much faster time to deployment of this new AI-native wireless network architecture.

This latest progress on software-defined AI-RAN is setting the stage for secure, open and AI-native 6G systems.

NVIDIA has already open sourced NVIDIA Aerial CUDA-accelerated RAN libraries, fueling the pace of AI-RAN innovation. NVIDIA has also now joined the OCUDU (Open CU DU) Ecosystem Foundation, hosted by the Linux Foundation, contributing to open source RAN software development to accelerate research and commercialization for next-generation wireless networks.

Learn more by meeting NVIDIA and partners at Mobile World Congress. Explore key insights from the State of AI in Telecom survey.

NVIDIA Advances Autonomous Networks With Agentic AI Blueprints and Telco Reasoning Models



Autonomous networks — intelligent, self-managing telecommunications operations — are moving from a future vision to a current priority for telecom operators. In the latest NVIDIA State of AI in Telecommunications report, network automation emerged as the top AI use case for investment and return on investment.

Automation is different from autonomy. Beyond executing predefined workflows, autonomous networks must understand operator intent, reason over tradeoffs and decide what actions to take. Reasoning models and AI agents fine-tuned on telecom data are key to enabling this shift.

For networks to become autonomous, there’s a need for an end-to-end agentic system that includes key components like telco network models and AI agents that talk to each other and use network simulation tools to validate actions.

Ahead of Mobile World Congress Barcelona, NVIDIA unveiled an open NVIDIA Nemotron-based large telco model (LTM), a comprehensive guide for building reasoning agents for network operations, and new NVIDIA Blueprints for energy saving and network configuration with multi-agent orchestration to help operators advance toward autonomy.

And as part of GSMA’s new Open Telco AI initiative — launching tomorrow — NVIDIA is releasing the new open source LTM, implementation guide and agentic AI blueprints as open resources through GSMA, an organization for the mobile communications industry.

Open Nemotron 3 Large Telco Model Brings Reasoning to Telecom 

For telcos to successfully operationalize generative and agentic AI across their operations, AI models must have the ability to understand the language of telecom and reason through complex workflows. NVIDIA has collaborated with AdaptKey AI to release a new open source, 30-billion-parameter NVIDIA Nemotron LTM that operators around the world can use to build autonomous networks.

Built on the NVIDIA Nemotron 3 family of foundation models and fine-tuned by AdaptKey AI using open telecom datasets including industry standards and synthetic logs, the LTM is optimized to understand telecom industry terminology and reason through workflows such as fault isolation, remediation planning and change validation.

As an open model, the Nemotron LTM gives telcos full transparency into how it was trained and what data was used, enabling secure and fast on‑premises deployment within their networks, where they can build and run agents directly. It also lets telcos safely adapt and extend telecom‑tuned reasoning with their own network and operational data, so they can move toward autonomous operations without sacrificing control over data or security.

Teaching AI Agents to Reason Like Network Engineers

NVIDIA and Tech Mahindra have published an open source guide that shows telecom operators how to fine-tune domain-specific reasoning models and build agents that can safely execute network operations center (NOC) workflows.

The guide outlines a framework for teaching models to reason like NOC engineers: focus on high‑impact, high‑frequency incident categories, translate expert resolutions into step‑by‑step procedures and turn those into structured reasoning traces that capture each action, tool call, outcome and decision. These traces become the “thinking examples” the model learns from, so it understands not just what to do, but why a particular sequence of checks and fixes is safe and effective.
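A structured reasoning trace of the kind the guide describes might look like the following. This is a hypothetical example: the field names and the incident itself are assumptions for illustration, not the guide's actual schema.

```python
# Illustrative sketch: one structured reasoning trace capturing each action,
# tool call, outcome and decision, so a fine-tuned model learns not just what
# to do but why the sequence is safe. Field names are assumptions.

trace = {
    "incident": "High packet loss on cell site ATL-042",
    "steps": [
        {"action": "check_alarms", "tool": "fault_manager",
         "outcome": "transport-link degradation alarm active",
         "decision": "rule out radio hardware; inspect backhaul"},
        {"action": "run_link_test", "tool": "transport_probe",
         "outcome": "errors on fiber port 3",
         "decision": "fail over to redundant link before field dispatch"},
        {"action": "apply_failover", "tool": "config_manager",
         "outcome": "packet loss back within SLA",
         "decision": "close incident; schedule fiber inspection"},
    ],
}

# Traces like this become the "thinking examples" the model learns from.
for step in trace["steps"]:
    print(f'{step["action"]}: {step["decision"]}')
```

Collections of such traces, derived from expert resolutions of high-frequency incident categories, form the fine-tuning corpus described in the next step.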

Using the NVIDIA NeMo-Skills pipeline, operators can fine-tune a reasoning model on these traces, laying the foundation for telco-specialized AI agents that can reason and solve problems like a network engineer.

Maximizing Energy Efficiency With New Intent-Driven Energy Saving Blueprint

Autonomous networks rely on closed‑loop operation: models that understand the network, agents that act on intent and simulation that feeds results back into the system to validate and refine decisions. The new NVIDIA Blueprint for intent-driven RAN energy efficiency brings these pieces together, helping operators systematically reduce power consumption in 5G radio access networks (RAN) while maintaining quality of service.

The blueprint integrates network test and measurement leader VIAVI’s TeraVM AI RAN Scenario Generator (AI RSG) platform to generate synthetic network data — including cell utilization, user throughput and other traffic patterns — and convert it into a simple, queryable format.

An energy planning agent then reasons over the synthetic data to generate energy-saving policies that can be simulated in AI RSG, allowing operators to safely validate energy-saving policies in a closed loop to meet their intent without changing live configurations or impacting subscribers.
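The closed loop described above can be sketched in a few lines. This is an illustrative sketch, not the blueprint's API: the field names, the thresholds and the stand-in simulator are all assumptions.

```python
# Illustrative sketch of the propose/simulate/validate loop: an energy policy
# is proposed from per-cell utilization, checked against a stand-in simulator,
# and kept only if simulated quality of service still meets the intent.

def propose_policy(cells):
    """Put lightly loaded cells to sleep; an agent would reason over more data."""
    return {c["id"]: "sleep" if c["utilization"] < 0.10 else "active" for c in cells}

def simulate(cells, policy):
    """Stand-in for a scenario generator: sleeping a cell shifts its users to
    neighbors, trading energy savings for a small throughput penalty."""
    asleep = [c for c in cells if policy[c["id"]] == "sleep"]
    energy_saved = 0.2 * len(asleep) / len(cells)            # fraction of site power
    qos = 1.0 - 0.5 * sum(c["utilization"] for c in asleep)  # crude QoS proxy
    return energy_saved, qos

cells = [{"id": "c1", "utilization": 0.05},
         {"id": "c2", "utilization": 0.02},
         {"id": "c3", "utilization": 0.60}]

policy = propose_policy(cells)
saved, qos = simulate(cells, policy)
if qos >= 0.95:  # the operator's intent threshold
    print(f"accepted: save ~{saved:.0%} energy, QoS {qos:.2f}")
else:
    print("rejected: retry with a less aggressive policy")
```

The key property is that the policy never touches live configuration: every candidate is validated in simulation first, and only accepted results are promoted.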

Telcos Put the NVIDIA Blueprint for Network Configuration to Work

The NVIDIA Blueprint for telco network configuration is being adopted by operators around the world.

Cassava Technologies is using the blueprint to build Cassava Autonomous Network, an agentic platform designed to optimize Africa’s diverse, multi-vendor mobile network environment. The platform implements three agents: one to monitor the network and recommend configuration changes, one to apply changes with documentation and governance, and one to assess the impact of changes made and safely roll them back if they have unintended effects.
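The three-agent pattern described above (monitor and recommend, apply with governance, assess and roll back) can be sketched minimally. All names, KPIs and thresholds here are hypothetical, chosen only to show the shape of the workflow.

```python
# Illustrative sketch: a configuration change is recommended, applied with a
# documented history, then assessed and rolled back if it had no benefit.

class Network:
    def __init__(self):
        self.config = {"tx_power_dbm": 40}
        self.history = []  # governance: every change is documented

    def apply(self, change, reason):
        previous = dict(self.config)
        self.config.update(change)
        self.history.append({"change": change, "reason": reason, "previous": previous})

    def rollback(self):
        self.config = self.history.pop()["previous"]

def monitor_agent(kpis):
    """Recommend a change when a KPI crosses a threshold."""
    if kpis["interference"] > 0.3:
        return {"tx_power_dbm": 37}, "reduce interference on neighbor cells"
    return None, None

def impact_agent(net, kpis_after):
    """Roll back if the change did not improve the KPI it targeted."""
    if kpis_after["interference"] >= 0.3:
        net.rollback()

net = Network()
change, reason = monitor_agent({"interference": 0.45})
if change:
    net.apply(change, reason)
impact_agent(net, kpis_after={"interference": 0.12})  # improved, so change is kept
print(net.config)  # {'tx_power_dbm': 37}
```

Separating recommendation, application and impact assessment into distinct agents keeps each step auditable, which matters in a multi-vendor environment like the one Cassava describes.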

NTT DATA is implementing the blueprint to bring intelligence to traffic regulation, helping the network manage surges when users reconnect after an outage, and is deploying it with a tier 1 operator in Japan.

An AI agent looks at real-time demand across the network and then decides when and how to admit new users on specific cells. As conditions stabilize, the agent adapts its decisions, turning what used to be manual configurations into a data-driven optimization cycle for more resilient mobile networks.
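A minimal sketch of that admission decision follows. The thresholds and rates are assumptions for illustration, not NTT DATA's actual policy; a deployed agent would reason over much richer real-time demand data.

```python
# Illustrative sketch: throttle how many reconnecting users a cell admits per
# second after an outage, relaxing the cap as load stabilizes.

def admission_rate(cell_load, max_rate=100):
    """Users admitted per second on a cell, given its current load (0..1)."""
    if cell_load > 0.9:
        return max_rate // 10  # heavy surge: trickle users back in
    if cell_load > 0.7:
        return max_rate // 2   # elevated load: throttle
    return max_rate            # stable: admit freely

# As the surge subsides, the agent adapts and the cap relaxes.
for load in (0.95, 0.8, 0.5):
    print(load, admission_rate(load))
```

The same loop, run continuously per cell, turns manual post-outage reconfiguration into the data-driven optimization cycle the article describes.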

Evolving Network Configuration With Multi-Agent Orchestration

To help telcos design, observe and optimize complex agentic workflows across the RAN, NVIDIA and BubbleRAN are enhancing the NVIDIA Blueprint for telco network configuration with NVIDIA NeMo Agent Toolkit (NAT) and BubbleRAN Agentic Toolkit (BAT), complementary frameworks for multi-agent orchestration.

BubbleRAN is integrating NAT and BAT into its Opti-Sphere platform to manage network monitoring, configuration and validation agents more flexibly across containers and workloads, and connect them to tools that report network metrics and traffic status so they can continuously propose and validate configuration changes.

Telenor Group will be the first telco to adopt the blueprint with BubbleRAN to enhance its 5G network for Telenor Maritime, the group’s global connectivity provider at sea.

Learn more about the latest advancements in agentic AI for telecommunications at Mobile World Congress, taking place in Barcelona from March 2-5. 

See notice regarding software product information.

The 2024 Game Awards’ biggest Game of the Year snubs and surprises


The Game Awards aren’t known for much in the way of shock and surprise, and so it proved with the 2024 nominees — a fairly well-rounded list in which most of the year’s best-reviewed and best-loved games got some love.

The Game Awards’ voting jury snubbed BioWare’s latest game, Dragon Age: The Veilguard, in a series of key categories where it would have been expected to compete. It secured just one nomination, for Innovation in Accessibility, which is decided by a specialist jury.

Granted, the reception to The Veilguard has been mixed — and with its Metacritic rating settling at 82, a nomination for Game of the Year seemed beyond its reach (even though that is one point higher than Black Myth: Wukong, which did make the cut).

More tellingly, though, The Veilguard did not score nominations for Best Narrative or Best Performance, two areas where BioWare games tend to excel, and which are less review-dependent. It also missed out on Best Role-Playing Game. This was an exceptionally strong category this year: Three of the five nominees (Metaphor: ReFantazio, Final Fantasy 7 Rebirth, and Elden Ring: Shadow of the Erdtree) also secured nominations for Game of the Year, and the other two (Dragon’s Dogma 2 and Like a Dragon: Infinite Wealth) are unconventionally excellent. Even so, failing to join this company is surely not the result that BioWare or publisher EA wanted after a decade of development.

Were there any other snubs? Perhaps a few minor ones. It was a surprise not to see the much-loved EA Sports College Football 25 score a nomination in the Sports/Racing Game category, although this might be down to the broad international makeup of the jury. The Sim/Strategy Game category is missing two games with passionate fan bases and high review scores — Satisfactory and Tactical Breach Wizards — either of which might have taken the slot of the Age of Mythology remake, for example. But this was a strong category this year. Personally, I would have loved to see The Legend of Zelda: Echoes of Wisdom nominated for its fabulous music.

As ever, the intensely competitive indie categories cannot please everyone. But with 15 games nominated across Independent Game, Debut Indie Game, and Games for Impact, you have to dig down to some pretty deep cuts like Arco or 1000xResist before you find something to get upset about.

Other surprises? I don’t think anybody saw four nominations coming for Senua’s Saga: Hellblade 2, a game that has all but disappeared from the discourse since its release in May. Four nominations — including Game of the Year — for DLC, in the form of Shadow of the Erdtree, is without precedent. Black Myth: Wukong breaking through in Game of the Year despite its comparatively weak critical reputation is definitely noteworthy, as are the five nominations for one-man-band card game Balatro.

In the end, though, there’s not much in this set of nominees to ruffle any feathers — outside of BioWare’s offices, that is.

Life Is Strange: Double Exposure is more of a puzzle game than I expected


Life Is Strange: Double Exposure simultaneously serves as a welcoming return and an exciting leap forward, as fan-favorite protagonist Max Caulfield steps back into the spotlight with new friends, a fresh mystery, and reality-bending abilities. I took the game for a spin during Gamescom and the demo revealed, to my surprise, that Double Exposure may be the series’ most mechanically intriguing entry yet.

With the game set a decade after the events of the original Life Is Strange, the now-adult Max has left Arcadia Bay and works as an artist-in-residence at Caledon University in upstate Vermont. She’s formed a new friend circle in Moses, a science enthusiast, and Safi, daughter of the university’s president. Since the cataclysmic events at Arcadia Bay (both of the original game’s endings funnel into this narrative), Max has sworn never to use her time-rewind power again. However, her newfound peace is shattered when Safi is mysteriously murdered, prompting Max to attempt to save her by winding back the clock for the first time in years. For reasons unknown, the lengthy period of inactivity has caused Max’s power to evolve, and she manages to tear through the fabric of time and space to access an alternate timeline where Safi still lives but remains in mortal danger. Thus, Double Exposure becomes a double murder mystery, with players using Max’s newfound Shift power to jump between timelines to discover the identity of the killer in one reality while preventing Safi’s murder in the other.

The Gamescom demo takes place shortly after Safi’s murder. I won’t spoil the narrative details, but Max must retrieve Safi’s camera from a classroom while avoiding detection by a snooping detective. While the room is locked in her current timeline, the same may not be true in the alternate reality. Keeping track of which timeline you occupy is easy thanks to an icon in the upper-left corner labeling the reality as “Living” or “Dead,” referencing Safi’s fate in that world. Using Max’s Pulse ability, another new trick that lets her detect and reveal ghostly elements from the other timeline without doing a full swap, I find a glowing weak point between realities where switching timelines becomes possible. Making the jump sees Max pull apart the current reality like she’s opening a pair of curtains to instantaneously cross over to the other side. The snappiness of this transition makes for a cool visual.

Getting my hands on Safi’s camera becomes an involved exercise in exploring the two-story room, finding clues and hitting dead ends that can only be circumvented by switching to the other timeline. Elements such as the room’s layout, the characters’ current activities and moods, and the location of important items differ in each timeline, and the crux of puzzle-solving involves figuring out how gathering information in one world answers a question in the opposite one.

What begins as a simple search for a safe spirals into using an astronomy chart to find a vital constellation referenced by Moses, then activating a projector to overlay a star chart on a classroom mural in such a manner that the orientation of the constellation reveals the hidden location of the safe’s item. Solving this single puzzle requires several timeline shifts to unravel smaller riddles that logically build toward the solution.

Upon solving this puzzle, the detective forces his way into the classroom, triggering a stealth sequence where I need to escape the room undetected. Simply sneaking past him isn’t enough; I need a loud object to create a distraction, and it can only be found in the Living reality. Since the patrolling investigator blocks certain routes in the cluttered, box-ridden room, getting past him requires a few strategic uses of Shift, as he’s not present in the Living timeline.

While Double Exposure seems to test your noodle more than previous entries, it still heavily emphasizes managing character relationships and steering the story through dialogue choices. However, timeline hopping adds some spice to this formula. While a character may be hesitant to reveal a crucial personal secret in one timeline, their counterpart may be more forthcoming, offering information that can give Max the upper hand. Resorting to using knowledge Max technically shouldn’t possess may not go over well, though, adding a thoughtful wrinkle to conversations.

The Double Exposure Gamescom demo sold me on Shift as a fun mechanic, and I’m excited to see how the game further leverages it to tell its tale. Tack on the return of Max and I’m itching to see how this multiversal murder mystery unravels.

Marvel shows footage from Thunderbolts*, the MCU Suicide Squad, at SDCC


When Marvel Studios first announced Thunderbolts* at 2022’s San Diego Comic-Con as part of its ambitious lineup for Phase 5 and 6 of the Marvel Cinematic Universe franchise, the movie didn’t yet have that odd asterisk in the title. It didn’t come with many details, either, apart from a July 26, 2024 release date that shifted along with many other MCU projects in the wake of the 2023 WGA strike.

In the wake of the Thunderbolts* segment of 2024’s San Diego Comic-Con, we don’t know much more! The asterisk is still a mystery: Marvel Studios President Kevin Feige said at a CinemaCon appearance, “we won’t talk more about that until after the movie comes out,” and confirmed it again at Comic-Con.

But as the core cast of Thunderbolts* took the stage, the Hall H audience was treated to a teaser in which all their characters came under fire from a mysterious foe who, according to Florence Pugh’s Yelena Belova, wants them all dead.

Traditionally in Marvel Comics, the Thunderbolts are a team-up of second-string villains or anti-heroes, though their membership and motives vary significantly depending which iteration you’re talking about. The MCU team is built of not-exactly-always-good characters introduced in previous films in the franchise: Ghost (Hannah John-Kamen, of Ant-Man and the Wasp), Red Guardian (David Harbour, Black Widow), the Winter Soldier (Sebastian Stan, the Captain America movies), U.S. Agent, aka John Walker (Wyatt Russell, Falcon and the Winter Soldier), and Taskmaster (Olga Kurylenko, Black Widow). Pugh’s Yelena, from Black Widow and Hawkeye, leads the team, with slimy mastermind Valentina Allegra de Fontaine (Julia Louis-Dreyfus, Falcon and the Winter Soldier) behind the scenes.

Who might want all those folks dead? What might those folks do to stay alive? And what the heck is that asterisk about after all? We’ll have to wait for the theatrical debut of Thunderbolts* on May 2, 2025, as the final movie in the MCU’s Phase 5.

You can find all Polygon’s coverage of SDCC 2024 news, trailers, and more here.