Visual Studio Community 17.14.15 Angular import suggested by Copilot is wrong


A few versions ago, Visual Studio Community offered excellent auto-completion for Angular imports, but it now uses Copilot, which gives incorrect suggestions.

The suggestions I have for CommonModule are:

[screenshot of the incorrect suggestions]

When the correct answer is:

import { CommonModule } from '@angular/common'
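For context, here is a minimal sketch of where that import goes in a standalone component (the component name and template below are illustrative, not from my project):

```typescript
// Hypothetical standalone component; the key line is the '@angular/common' path.
import { Component } from '@angular/core';
import { CommonModule } from '@angular/common';

@Component({
  selector: 'app-root',
  standalone: true,
  imports: [CommonModule], // makes *ngIf, *ngFor, common pipes, etc. available
  template: `<p *ngIf="ready">Hello</p>`,
})
export class AppComponent {
  ready = true;
}
```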

How can I fix these wrong suggestions generated by Copilot?

Is there a way to go back to the previous suggestion tool?
Or is there an extension that will suggest the correct import?

OnePlus 15 Could Set New Android Performance Standard with Snapdragon 8 Elite Gen 5


A leaked Weibo image of the OnePlus 15 camera.

Credit: Weibo

The OnePlus 15 has appeared in a fresh Geekbench listing. The device, listed under the model number PLK110, recorded a single-core score of 3,709 and a multi-core score of 11,000, Phone Arena reported Friday.

This test also confirms that the OnePlus 15 will have Qualcomm’s Snapdragon 8 Elite Gen 5 processor, running at 4.61 GHz, paired with 16GB of RAM and Android 16. The chipset is also expected to be inside other late-2025 flagships such as the Xiaomi 17, which posted nearly identical scores in earlier tests.

Apple’s iPhone 17 Pro Max with the A19 Pro has already appeared on Geekbench, scoring around 3,880 in single-core and 9,850 in multi-core performance. Apple still holds a slight edge in raw single-core output, but Qualcomm’s new chip appears to take the lead in multi-core results.

Beyond performance, the OnePlus 15 is expected to have a 6.78-inch 1.5K OLED display with a 165Hz refresh rate and a 7,000 mAh battery that supports 120W wired charging. The phone will likely launch in China later this year, with a US release expected in early 2026. Reports say OnePlus may cut back on some of the most expensive components to give the device a lower price point, which could make it one of the more cost-effective Android flagships of its generation.

Facebook adds an AI assistant to its dating app


Facebook Dating has added two new AI tools, because clearly a large language model is what the search for love and companionship has been missing all this time. The social media platform introduced a chatbot called dating assistant that can help find prospective dates based on a user’s interests. In announcing the features, the example Meta provided was “Find me a Brooklyn girl in tech.” The chatbot can also “provide dating ideas or help you level up your profile.” Dating assistant will begin a gradual rollout to the Matches tab for users in the US and Canada. And surely everyone will use it in a mature, responsible, not-at-all-creepy fashion.

The other AI addition is Meet Cute, which uses a “personalized matching algorithm” to deliver a surprise candidate it determines you might like. There’s no explanation in the blog post of how Meta’s algorithm will assess potential dates. If you don’t want to see who Meta’s AI thinks would be a compatible match each week, you can opt out of Meet Cute at any time. Both features are aimed at combating “swipe fatigue,” so if you’re 1) using Facebook, 2) using Facebook Dating, and 3) really that tired of swiping, maybe this is the solution you need.

Everything You Need To Know About Silent Hill f


Silent Hill f marks the long-awaited return of Konami’s legendary horror series, bringing with it a fresh setting, unsettling atmosphere, and a creative team rooted in Japanese horror storytelling. With a new direction that promises to balance psychological dread and survival tension, it’s already shaping up to be one of the most anticipated releases of 2025. If you’re curious about what to expect, we’ve rounded up everything you need to know about Silent Hill f — including its release date, gameplay, story, editions, pre-order bonuses, and the latest trailer.

Silent Hill f Release Date

Silent Hill f will launch worldwide on September 25, 2025. The game will be available on PlayStation 5, Xbox Series X|S, and PC via Steam and Epic Games Store. At this stage, no Nintendo Switch 2 version has been announced.

Silent Hill f Pre-Order Bonuses

Players who pre-order Silent Hill f will gain access to exclusive bonuses depending on the edition chosen. Regardless of which edition they choose, placing a pre-order nets them the following:

  • White Sailor School Uniform – a cosmetic that changes Hinako’s appearance
  • Omamori: Peony – an equippable item unlocked through progression
  • Item Pack – three consumables: one Shrivelled Abura-age, one Divine Water, and one First Aid Kit

Additionally, pre-ordering the Digital Deluxe Edition grants up to 48 hours of early access to the game before its official release date.

Silent Hill f Special Editions

Alongside the Standard Edition, Silent Hill f will also launch with a Digital Deluxe Edition, offering additional content for dedicated fans:

Silent Hill f Deluxe Edition

  • Base game of Silent Hill f
  • 48-Hour Early Access
  • Digital Artbook
  • Digital Soundtrack
  • Pink Rabbit Costume

Silent Hill f Gameplay

As Hinako Shimizu, players will explore the mist-covered town of Ebisugaoka, a place where everyday life slowly unravels into something far darker. Exploration is at the heart of the experience, with narrow streets, abandoned interiors, and unsettling details that reward those who look closely.

Fighting to survive feels desperate and personal. Instead of a stockpile of guns, Hinako relies on improvised weapons like spears and blades, making every encounter tense. Sometimes, stealth or simply running away is the smarter choice, pushing players to weigh risk against reward.

Puzzles carry the same uneasy atmosphere, often pulling from Japanese folklore and psychological themes to keep the tension high. Decisions along the way influence how Hinako’s story unfolds, leading to different outcomes that reflect her struggle.

Layered on top of it all is the chilling soundscape and grotesque creature design, which ensures that dread never truly lifts.

Silent Hill f Story

Unlike previous Silent Hill entries set in America, Silent Hill f takes place in 1960s Japan, in the fictional rural town of Ebisugaoka. The protagonist is Hinako Shimizu, a high school student whose quiet life is disrupted when the fog creeps in and the world begins to twist into a nightmarish reflection of itself.

The narrative, penned by celebrated writer Ryukishi07 (Higurashi, Umineko), explores psychological and societal horror, intertwining the beautiful with the grotesque. Themes of fear, social pressure, and transformation form the backbone of this chilling tale.

PC Specifications

Silent Hill f is being designed to deliver its haunting atmosphere with cutting-edge visuals and immersive audio, and Konami has released the official PC specifications to help players prepare their systems ahead of launch:

MINIMUM:

  • Requires a 64-bit processor and operating system
  • OS: Windows® 11 (64-bit OS required)
  • Processor: Intel Core i5-8400 or AMD Ryzen 5 2600
  • Memory: 16 GB RAM
  • Graphics: NVIDIA GeForce GTX 1070 Ti or AMD Radeon RX 5700
  • DirectX: Version 12
  • Storage: 50 GB available space
  • Additional Notes: Playing on minimum requirements should enable Performance quality settings at 30 FPS at 720p. SSD is recommended.

RECOMMENDED:

  • Requires a 64-bit processor and operating system
  • OS: Windows® 11 (64-bit OS required)
  • Processor: Intel Core i7-9700 or AMD Ryzen 5 5500
  • Memory: 16 GB RAM
  • Graphics: NVIDIA GeForce RTX 2080 or AMD Radeon RX 6800 XT
  • DirectX: Version 12
  • Storage: 50 GB available space
  • Additional Notes: Playing on recommended requirements should enable Performance settings at 60 FPS or Quality settings at 30 FPS at Full HD (or 4K using DLSS or similar technology). SSD required.

Where Can I Watch the Latest Trailer?

The most recent trailer, revealed during gamescom Opening Night Live, showcases both the atmosphere and the terror at the heart of Silent Hill f. Opening with what appears to be a crime scene and ominous radio chatter, it quickly spirals into haunting imagery and a chilling sense of dread. It sets the stage for the unsettling journey players can expect when the game launches.


Jason Coles

Jason likes to focus on roguelikes and co-op games; in a dream world he’d make a living writing about Dark Souls. As well as being a writer he also does personal training and accounting and can occasionally be seen on other people’s streams. Being a big fan of fluffy things means he has two cats, both of whom refuse to let him sleep, but at least they are cute.

Enhancements to XAML Live Preview in Visual Studio for .NET MAUI


The XAML Live Preview feature in Visual Studio 2022 version 17.14 introduces a significant usability improvement for .NET MAUI projects: the XAML Live Preview window is now available during design time, eliminating the requirement to initiate a debug session. This change streamlines the UI development workflow for .NET MAUI applications.

Design-Time Availability

Previously, XAML Live Preview was only accessible while debugging. With this release, you can open the XAML Live Preview window directly at design time and see changes to your app’s UI in real time. Hot Reload and other live UI tools work alongside it, ensuring a seamless workflow, improving iteration speed, and reducing context switching.

To open it, go to Debug > Windows > XAML Live Preview.

The XAML Live Preview window supports:

  • Element selection for navigating to source XAML.
  • Zoom and ruler tools for layout inspection.
  • Docking within the IDE for persistent visibility.

.NET MAUI Support for Android Targets

In addition to Windows, XAML Live Preview supports rendering your app’s UI on Android devices and emulators. This allows developers to validate UI changes across platforms, providing a high-fidelity design-time experience that ensures consistency across devices.

GitHub Copilot and Vision Support

Take your UI design to the next level with XAML Live Preview and GitHub Copilot, supporting both manual and AI-assisted XAML authoring. With Copilot Vision, you can attach a reference image of a desired UI layout, and Copilot will generate the corresponding XAML. Live Preview reflects these changes in real time, enabling rapid prototyping and refinement.

Availability and Getting Started

This feature is available in:

  • Visual Studio 2022 version 17.14
  • Visual Studio 2026 Insiders release

To begin using XAML Live Preview and explore its capabilities, refer to the official XAML Live Preview Documentation.

God’s Purpose in Our Punishment


God is a very loving God.

He loves us all so much, that He sent His own son to die for us.

To experience horrible pain that we would never know, so that we could be fully reconciled to Him.

And while He IS 100% loving, He is also 100% a righteous judge who doesn’t want His precious children to sin.

God’s Purpose in Our Punishment

This means that He does not let sin go unpunished.

My ex-husband abandoned us, left me and the kids homeless, and went off and had six affairs, one of which with another married woman.

Whether my ex-husband is punished in this life or the one to come, he WILL be punished for his sin if he doesn’t give it all over to God and accept Him as Lord and Savior.

We can take comfort in knowing that the people who hurt us WILL have to answer to God.

But the question I want to pose to you today is about God’s punishment of US. What happens when WE sin? I think a lot of people don’t want to think about that. I know in my own life, I’d rather not think about my own sin. Although I know it’s there, oftentimes I try to hide it or pretend it’s not there. But the Bible is clear. He punishes His children when we disobey.

Sometimes when I punish my children, I think of God. I think of how I really HATE to spank them. It has moved me to tears at times when I have to punish them. I really cringe, because I don’t want to hurt them, and yet I KNOW that I must punish. I must not allow them to get away with sin and disobedience. How can I provide an accurate picture of Christ if I let them get away with things they ought not to? How can I answer to the God of the universe if I allowed them to knowingly sin with no consequences? I am a steward, now get this: FOR God, OF them.

He put me in charge of them, and I must do my duty as a Christian mother to do as I feel best and to teach them right from wrong. Being a mother is hard work. It’s not for the faint of spirit. It is day in, day out, constant sacrifice for the life of another. And that is, in a way, a picture of what Christ did for us. He sacrificed for us, to His own death.

If God is who He says He is (and He is), then He must punish His children. He has to if He is just and righteous. He simply cannot let sin go unpunished.

A non-Christian will be judged, Jesus’ blood will not cover their sins, and their punishment is a literal Hell. There are, indeed, way too many people in my life on this side of His judgement, and I’m sure that you know many as well.

But for the Christian, this is much different.

Although we sin, and we are punished for our sins if we do not truly repent, there is no Hell for us. Jesus’ blood covers our sins and God remembers them no more. They are washed away with Jesus’ blood.

Does that mean that we do not have any consequences? What happens when a believer sins and truly repents? The truth is that there STILL might be punishment. The natural course and laws of man are still present.

Let’s say a Christian gets so angry, they physically murder someone. They immediately repent, call on God, they truly ARE sorry, but the police show up to arrest them. What they have done was wrong, it was sin. Although they are completely forgiven by God, they still have to answer to the laws of man. Things are still set in motion here on Earth that may include punishment in the here and now.

This is the beginning of the fear of the Lord. Not that we should cower before Him, but that we should have the respect, love, and fear of the Lord, knowing that if we sin, we will be punished. It is mocking God to believe that sin goes unpunished and to live our lives according to that fallacy.

I think of my own life so much here as I write this to you. Again, my ex-husband has done some horrible things, even according to the world’s standards. And I look at him and it seems like he’s getting away with murder. It seems like there are no signs of punishment for him and what he’s done to our family. He’s hurt so many people and he continues to have no solemn thought about it all, and yet, there he stands. Still living life in a wicked way, openly mocking God.

And for me, here I am, a devout Christian woman, though not perfect, truly saved, truly trying her best to put all the pieces back together and to be strong for my kids and many of you, and yet, I am punished. Though I have a great life, a miraculously wonderful life, I still suffer. And I’m tempted many times to tell God it’s not fair. Why do I suffer while he is free to live in sin, openly mocking God?

Do you ever feel like that? Like you see someone else in your life, just living a horrible life seemingly getting away with it all?

The passage I always come back to when I feel like that is Psalm 37. It is my favorite passage in the Bible, mostly because of verse 23, where it says He will guide us and lead our steps. But in verses 1-2, it plainly says,

Do not fret because of those who are evil
    or be envious of those who do wrong;
for like the grass they will soon wither,
    like green plants they will soon die away.

 

When I am telling God it’s not fair, I am in sin. The Bible clearly says, DO NOT FRET. Do not worry. Don’t think about it. Don’t worry about it. God has it. He’s got it all under control.

Oftentimes, we are so quick to be thankful for God’s punishment when it comes to others, myself included, but when it comes to our own punishment, we tend to gloss over it.

When I WAS married, seeing my husband’s sin so rampant, one of the things that got me through was to just simply focus on my own sin.

To not worry myself about his.

He stood before God for his own self, and it was all I could do to be a godly woman just to think about my own sin and not to focus on his.

This week, I have not been taking that advice and so I ask for your prayer that I will. That I will just stop thinking about the sin of another and focus on my own sin. Focus on how I can be a more godly woman. Focus on how I can please the Lord. What I can do to improve.

If we are to be godly women, we must not focus on the evil of another. We must simply focus on us, how WE stand before the Lord.

Although God loves us more than anything or anyone else in the universe, He also loves us enough not to let us get away with sin.

Is there anything that you’ve felt wrong about that you’ve tried to hide or gloss over? Is there anything there that stands in the way of you being completely reconciled to Christ?

If there is, I urge you my sister, to bring it to the Lord in prayer. I will be praying for everyone who reads this as well. 🙂 You are not alone in your endeavor to please the Lord, and believe me, I am right there with you. We ALL sin. We all fall short of the glory of God.

An AI musician just got a multi-million dollar record deal


Malte Mueller/Getty Images


ZDNET’s key takeaways

  • An AI-generated artist got a $3 million record deal.
  • A human artist uses AI platform Suno to generate the persona.
  • Creative industries continue to sue AI companies.

An AI-generated musician persona run by a human R&B artist has received a $3 million record deal — amidst several lawsuits targeting AI companies encroaching on creative industries. 

Telisha “Nikki” Jones, who’s behind the AI-generated artist Xania Monet, accepted the record deal, which is with Hallwood Media. She combines elements of her real-life songwriting abilities with AI-generated vocals, images, and musical production. 

Jones uses AI music generation startup Suno — which is currently being sued by the Recording Industry Association of America (RIAA) — to make Monet’s music. This scenario presents two sides of one coin: While the RIAA claims Suno stole audio from YouTube videos, bypassing legal protections, others are using the platform to achieve stardom they wouldn’t have accessed otherwise. 

AI music and musicians

According to Billboard, Jones writes all of Monet’s lyrics and takes “full ownership” of the production credits. However, Jones’s manager, Murphy, admitted to Billboard that Jones is not the “vocal beast” Monet is, though it’s unclear how much Suno’s platform is responsible for Monet’s vocals. 

Monet first hit the charts during the week of Sept. 20, 2025, when her song “How Was I Supposed to Know” reached No. 1 on R&B Digital Song Sales, Billboard writes. The same song is popular on TikTok, with over 80,000 posts and the 39th spot on TikTok’s Top 50 Music Chart.

On TikTok, Xania Monet’s page displays her 322,000 followers, over one million likes, and AI-generated videos of her singing in recording booths, studios, apartments, and sporting arenas. The vibrant colors, overly smoothed skin, oddly cut clips, and generally uncanny valley vibe immediately read as AI-generated to a trained eye. However, there isn’t any mention of AI on Monet’s page.

Some users in the comments wonder when the other dedicated fans will realize the artist is AI-generated, while others explain how, despite being sung by an AI artist, Monet’s lyrics deeply touch them, giving them a song that mirrors their life experiences. Others are finding out via the comments that Monet is not a real person. Put simply: Some people care, some people don’t, and others have no clue. 

According to Billboard’s interview with Murphy, Monet plans to use more human producers on her upcoming music and is planning her first live performance, though it’s unclear how. Hallwood Media has also signed a recording agreement with imoliver, Suno’s top-streaming creator.

Lawsuits against AI companies 

The use of AI in creative spaces has been an ongoing debate since the technology’s advancements began to encroach on artistic professionals. Visual artists, film directors, writers, and music artists have expressed irritation at artificial intelligence companies training their image, video, language, and voice models on existing works of art. Plenty of lawsuits have been filed — almost always on the legal basis of copyright infringement.

The RIAA’s lawsuit against Suno alleges that the platform “stream-ripped” songs from artists on YouTube to train its AI voice models, a process that involves copying existing artists’ voices and converting them into downloadable files. The suit represents labels like Warner Music Group, Universal Music Group, and Sony Music Entertainment. 

According to the RIAA’s lawsuit, stream-ripping illegally uses copyrighted material from artists like Mariah Carey and The Temptations. Perhaps that’s why some of Monet’s commenters hear a mix of existing artists’ voices in her recordings.

Elsewhere, the New York Times sued OpenAI for unpermitted use of articles to train large language models, and Disney and Universal sued Midjourney for unpermitted use of their films and characters to train image creation models. Anthropic just settled a lawsuit with three authors. 

Individual artists have also sued other AI companies for using their work to train models, and the legal opinions are moving more slowly than the technology’s development. Most of the defense lies in fair use, while artists and companies claim plagiarism and copyright infringement.

Why it matters 

If AI can land a record deal, who profits?

At the very least, Jones is a public musician behind Monet’s persona. But an R&B star whose voice, image, and promotional material are AI-generated draws comparisons to a popular AI-generated (and somewhat less transparent) influencer, Lil Miquela.

Like Xania Monet, Lil Miquela signed a multi-million dollar deal with a talent agency, and there are real people behind her digital persona. The studio behind Lil Miquela and its investors receive the money Lil Miquela is paid from brand partnerships and advertisements. But many questions — and few answers — arise from asking the same about Monet.

Surely, Jones, her manager, producers, and other studio personnel get a cut of her $3 million record deal, but could Suno or the AI platform responsible for Monet’s digital appearance demand a cut? Technically, Suno and other AI platforms created Monet’s voice and likeness, so how, if at all, do they fit into her moneymaking success?

To some, the AI-generated artist’s record deal indicates innovation in the music industry, giving opportunities to people without connections, wealth, or even conventional beauty. To others, like popular R&B artist Kehlani, it takes away from the real people who dedicate their lives and talents to their art. The artist expressed disdain for AI-generated music and for offering record deals to people who use it.

It’ll be interesting to see how far Monet’s career as an AI-generated music artist will go without media appearances and live performances, and how developed the track recordings can be within the technology’s confines. 

The “AI in art” conversation also forces companies to reckon with how much they want to incorporate AI into their business models. Although the total use of AI to write articles, create artwork, or generate short motion pictures is considered taboo, is altogether omitting AI from creative practices an antiquated train of thought? 

Or, does the debate boil down to who uses AI within their creative practices? Perhaps the issue is that publishers, artists, studios, and record labels should have the option to lend their existing work to AI models, instead of having no say.

The legal road to using generative AI in creative practices has already been long, and there are no signs of it shortening anytime soon.



Top AI Infrastructure Companies | Comprehensive Comparison Guide


Artificial intelligence (AI) is no longer just a buzzword; many businesses are struggling to scale models because they lack the right infrastructure. AI infrastructure comprises technologies for computing, data management, networking, and orchestration that work together to train, deploy, and serve models. In this guide, we’ll explore the market, compare top AI infrastructure companies, and highlight new trends that will transform computing. Understanding this space will empower you to make better decisions whether you’re building a startup or modernizing your operations.

Quick Summary: What Will You Learn in This Guide?

  • What is AI infrastructure? A specialized technology stack—including computation, data, platform services, networking, and governance—that supports model training and inference.
  • Why should you care? The market is growing rapidly, projected from $23.5 billion in 2021 to over $309 billion by 2031. Businesses spend billions on specialist chips, GPU data centers, and MLOps platforms.
  • Who are the leaders? Major cloud platforms like AWS, Google Cloud, and Azure dominate, while hardware giants NVIDIA and AMD produce cutting-edge GPUs. Rising players like CoreWeave and Lambda Labs offer affordable GPU clouds.
  • How to choose? Consider computational power, cost transparency, latency, energy efficiency, security, and ecosystem support. Sustainability matters—training GPT-3 consumed 1,287 MWh of electricity and released 552 tons of CO₂.
  • Clarifai’s view: Clarifai helps businesses manage data, run models, and deploy them across cloud and edge contexts. It offers local runners and managed inference for quick iteration with cost control and compliance.

What Is AI Infrastructure, and Why Is It Important?

What Makes AI Infrastructure Different from Traditional IT?

AI infrastructure is built for high-compute workloads like training language models and running computer vision pipelines. Traditional servers struggle with large tensor computations and high data throughput. Thus, AI systems rely on accelerators like GPUs, TPUs, and ASICs for parallel processing. Additional components include data pipelines, MLOps platforms, network fabrics, and governance frameworks, ensuring repeatability and regulatory compliance. NVIDIA CEO Jensen Huang has called AI “the essential infrastructure of our time,” highlighting that AI workloads need a tailored stack.

Why Is an Integrated Stack Essential?

To train advanced models, teams must coordinate compute resources, storage, and orchestration across clusters. DataOps 2.0 tools handle data ingestion, cleaning, labeling, and versioning. After training, inference services must respond quickly. Without a unified stack, teams face bottlenecks, hidden costs, and security issues. A survey by the AI Infrastructure Alliance shows only 5–10 % of businesses have generative AI in production due to complexity. Adopting a full AI-optimized stack enables organizations to accelerate deployment, reduce costs, and maintain compliance.

Expert Opinions

  • New architectures matter: Bessemer Venture Partners notes that state-space models and Mixture-of-Experts architectures lower compute requirements while preserving accuracy.
  • Next-generation GPUs and algorithms: Devices like NVIDIA H100/B100 and techniques such as Ring Attention and KV-cache optimization dramatically speed up training.
  • DataOps & observability: As models grow, teams need robust DataOps and observability tools to manage datasets and monitor bias, drift, and latency.

What Is the Current AI Infrastructure Market Landscape?

How Big Is the Market and What’s the Growth Forecast?

The AI infrastructure market is booming. ClearML and the AI Infrastructure Alliance report it was worth $23.5 billion in 2021 and will grow to over $309 billion by 2031. Generative AI is expected to hit $98.1 billion by 2025 and $667 billion by 2030. In 2024, global cloud infrastructure spending reached $336 billion, with half of the growth attributed to AI. By 2025, cloud AI spending is projected to exceed $723 billion.

How Wide Is the Adoption Across Industries?

Generative AI adoption spans multiple sectors:

  • Healthcare (47 %)
  • Financial services (63 %)
  • Media and entertainment (69 %)

Big players are investing heavily in AI infrastructure: Microsoft plans to spend $80 billion, Alphabet up to $75 billion, Meta between $60 and $65 billion, and Amazon around $100 billion. Meanwhile, 96 % of organizations intend to further expand their AI computing power, and 64 % already use generative AI, illustrating the rapid pace of adoption.

Expert Opinions

  • Enterprise embedding: By 2025, 67 % of AI spending will come from businesses integrating AI into core operations.
  • Industry valuations: Startups like CoreWeave are valued near $19 billion, reflecting a strong demand for GPU clouds.
  • Regional dynamics: North America holds 38.9 % of generative AI revenue, while Asia-Pacific experiences 47 % year-over-year growth.

How Are AI Infrastructure Providers Classified?

Compute and accelerators

The compute layer supplies raw power for AI. It includes GPUs, TPUs, AI ASICs, and emerging photonic chips. Major hardware companies like NVIDIA, AMD, Intel, and Cerebras dominate, but specialized providers—AWS Trainium/Inferentia, Groq, Etched, Tenstorrent—deliver custom chips for specific tasks. Photonic chips promise almost zero energy use in convolution operations. Later sections cover each vendor in more detail.

Cloud & hyperscale platforms

Major hyperscalers provide all-in-one stacks that combine computing, storage, and AI services. AWS, Google Cloud, Microsoft Azure, IBM, and Oracle offer managed training, pre-built foundation models, and bespoke chips. Regional clouds like Alibaba and Tencent serve local markets. These platforms attract enterprises seeking security, global availability, and automated deployment.

AI-native cloud startups

New entrants such as CoreWeave, Lambda Labs, Together AI, and Voltage Park focus on GPU-rich clusters optimized for AI workloads. They offer on-demand pricing, transparent billing, and quick scaling without the overhead of general-purpose clouds. Some, like Groq and Tenstorrent, create dedicated chips for ultra-low-latency inference.

DataOps, observability & orchestration

DataOps 2.0 platforms handle data ingestion, classification, versioning, and governance. Tools like Databricks, MLflow, ClearML, and Hugging Face provide training pipelines and model registries. Observability services (e.g., Arize AI, WhyLabs, Credo AI) monitor performance, bias, and drift. Frameworks like LangChain, LlamaIndex, Modal, and Foundry enable developers to link models and agents for complex tasks. These layers are essential for deploying AI in real-world environments.

Expert Opinions

  • Modular stacks: Bessemer points out that the AI infrastructure stack is increasingly modular—different providers cover compute, deployment, data management, observability, and orchestration.
  • Hybrid deployments: Organizations leverage cloud, hybrid, and on-prem deployments to balance cost, performance, and data sovereignty.
  • Governance importance: Governance is now seen as central, covering security, compliance, and ethics.

[Image: AI Infrastructure Stack]


Who Are the Top AI Infrastructure Companies?

Clarifai:

Clarifai stands out in the LLMOps + Inference Orchestration + Data/MLOps space, serving as an AI control plane. It links data, models, and compute across cloud, VPC, and edge environments—unlike hyperscale clouds that focus primarily on raw compute. Clarifai’s key strengths include:

  • Compute orchestration that routes workloads to the best-fit GPUs or specialized processors across clouds or on-premises.
  • Autoscaling inference endpoints and Local Runners for air-gapped or low-latency deployments, enabling rapid deployment with predictable costs.
  • Integration of data labeling, vector search, retrieval-augmented generation (RAG), finetuning, and evaluation into one governed workflow—eliminating brittle glue code.
  • Enterprise governance with approvals, audit logs, and role-based access control to ensure compliance and traceability.
  • A multi-cloud and on-prem strategy to reduce total cost and prevent vendor lock-in.

For organizations seeking both control and scale, Clarifai becomes the infrastructure backbone—reducing the total cost of ownership and ensuring consistency from lab to production.

[Image: Clarifai AI infrastructure]

Amazon Web Services:

AWS excels at AI infrastructure. SageMaker simplifies model training, tuning, deployment, and monitoring. Bedrock provides APIs to both proprietary and open foundation models. Custom chips like Trainium (training) and Inferentia (inference) offer excellent price-performance. Nova, a family of generative models, and Graviton processors for general compute add versatility. The global network of AWS data centers ensures low-latency access and regulatory compliance.

Expert Opinions

  • Accelerators: AWS’s Trainium chips deliver up to 30% better price-performance than comparable GPUs.
  • Bedrock’s flexibility: Integration with open-source frameworks lets developers fine-tune models without worrying about infrastructure.
  • Serverless inference: AWS supports serverless inference endpoints, reducing costs for applications with bursty traffic.

Google Cloud’s AI:

At Google Cloud, Vertex AI anchors the AI stack—managing training, tuning, and deployment. TPUs accelerate training for large models such as Gemini and PaLM. Vertex integrates with BigQuery, Dataproc, and Datastore for seamless data ingestion and management, and supports pre-built pipelines.

Insights from Experts

  • TPU advantage: TPUs handle matrix multiplication efficiently, ideal for transformer models.
  • Data fabric: Integration with Google’s data tools ensures seamless operations.
  • Open models: Google releases models like Gemini to encourage collaboration while leveraging its compute infrastructure.

Microsoft Azure AI

Microsoft Azure AI offers AI services through Azure Machine Learning, Azure OpenAI Service, and Foundry. Users can choose from a range of accelerators, including NVIDIA GPU instances (up to B200-based VMs) and FPGA-backed NP-series instances. The Foundry marketplace introduces a real-time compute market and multi-agent orchestration. Responsible AI tools help developers evaluate fairness and interpretability.

Experts Highlight

  • Deep integration: Azure aligns closely with Microsoft productivity tools and offers robust identity and security.
  • Partner ecosystem: Collaboration with OpenAI and Databricks enhances its capabilities.
  • Innovation in Foundry: Real-time compute markets and multi-agent orchestration show Azure’s move beyond traditional cloud resources.

IBM Watsonx and Oracle Cloud Infrastructure

IBM Watsonx offers capabilities for building, governing, and deploying AI across hybrid clouds. It provides a model library, data storage, and governance layer to manage the lifecycle and compliance. Oracle Cloud Infrastructure delivers AI-enabled databases, high-performance computing, and transparent pricing.

Expert Opinions

  • Hybrid focus: IBM is strong in hybrid and on-prem solutions—suitable for regulated industries.
  • Governance: Watsonx emphasizes governance and responsible AI, appealing to compliance-driven sectors.
  • Integrated data: OCI ties AI services directly to its autonomous database, reducing latency and data movement.

What About Regional Cloud and Edge Providers?

Alibaba Cloud and Tencent Cloud provide AI services tailored to local regulations and languages in the Asia-Pacific region, including in-house accelerators such as Alibaba's Hanguang 800. Edge providers like Akamai and Fastly enable low-latency inference at network edges, essential for IoT and real-time analytics.


Which Companies Lead in Hardware and Chip Innovation?

How Does NVIDIA Maintain Its Performance Leadership?

NVIDIA leads the market with its H100 and Blackwell-generation GPUs (B100/B200). These chips power many generative AI models and data centers. DGX systems bundle GPUs, networking, and software for optimized performance. Features such as tensor cores, NVLink, and fine-grained compute partitioning support high-throughput parallelism and better utilization.

Expert Advice

  • Performance gains: The H100 significantly outperforms the previous generation, offering more performance per watt and higher memory bandwidth.
  • Ecosystem strength: NVIDIA’s CUDA and cuDNN are foundations for many deep-learning frameworks.
  • Plug-and-play clusters: DGX-SuperPODs allow enterprises to rapidly deploy supercomputing clusters.

What Are AMD and Intel Doing?

AMD competes with MI300X and MI400 GPUs, focusing on high-bandwidth memory and cost efficiency. Intel develops Gaudi accelerators and Habana Labs technology while integrating AI features into Xeon processors.

Expert Insights

  • Cost-effective performance: AMD’s GPUs often deliver excellent price-performance, especially for inference workloads.
  • Gaudi’s unique design: Intel uses specialized interconnects to speed tensor operations.
  • CPU-level AI: Integrating AI acceleration into CPUs benefits edge and mid-scale workloads.

Who Are the Specialized Chip Innovators?

  • AWS Trainium/Inferentia lowers cost per FLOP and energy use for training and inference.
  • Cerebras Systems produces the Wafer-Scale Engine (WSE), boasting 850,000 AI cores.
  • Groq designs chips for ultra-low-latency inference, ideal for real-time applications like autonomous vehicles.
  • Etched builds the Sohu ASIC for transformer inference, dramatically improving energy efficiency.
  • Tenstorrent employs RISC-V cores and is building decentralized data centers.
  • Photonic chip makers like Lightmatter use light to perform convolutions with very little energy.

Expert Perspectives

  • Diversifying hardware: The rise of specialized chips signals a move toward task-specific hardware.
  • Energy efficiency: Photonic and transformer-specific chips cut power consumption dramatically.
  • Emerging vendors: Companies like Groq, Tenstorrent, and Lightmatter show that tech giants are not the only ones who can innovate.

Which Startups and Data Center Providers Are Shaping AI Infrastructure?

What Is CoreWeave’s Value Proposition?

CoreWeave evolved from cryptocurrency mining to become a prominent GPU cloud provider. It provides on-demand access to NVIDIA’s latest Blackwell and RTX PRO GPUs, coupled with high-performance InfiniBand networking. Pricing can be up to 80% lower than traditional clouds, making it popular with startups and labs.

Expert Advice

  • Scale advantage: CoreWeave manages hundreds of thousands of GPUs and is expanding data centers with $6 billion in funding.
  • Transparent pricing: Customers can clearly see costs and reserve capacity for guaranteed availability.
  • Enterprise partnerships: CoreWeave collaborates with AI labs to provide dedicated clusters for large models.

How Does Lambda Labs Stand Out?

Lambda Labs offers developer-friendly GPU clouds with 1-Click clusters and transparent pricing—A100 at $1.25/hr, H100 at $2.49/hr. It raised $480 million to build liquid-cooled data centers and earned SOC2 Type II certification.

Expert Advice

  • Transparency: Clear pricing reduces surprise fees.
  • Compliance: SOC2 and ISO certifications make Lambda appealing for regulated industries.
  • Innovation: Liquid-cooled data centers enhance energy efficiency and density.

What Do Together AI, Voltage Park, and Tenstorrent Offer?

  • Together AI is building an open-source cloud with pay-as-you-go compute.
  • Voltage Park offers clusters of H100 GPUs at competitive prices.
  • Tenstorrent integrates RISC-V cores and aims for decentralized data centers.

Expert Opinions

  • Demand drivers: The shortage of GPUs and high cloud costs drive the rise of AI data center startups.
  • Emerging names: Other players include Lightmatter, Iren, Rebellions.ai, and Rain AI.
  • Open ecosystems: Together AI fosters collaboration by releasing models and tools publicly.

[Image: AI Infrastructure Roles by Category]


What About Data & MLOps Infrastructure: From DataOps 2.0 to Observability?

Why Is DataOps Critical for AI?

DataOps oversees data gathering, cleaning, transformation, labeling, and versioning. Without robust DataOps, models risk drift, bias, and reproducibility issues. In generative AI, managing millions of data points demands automated pipelines. Bessemer calls this DataOps 2.0, emphasizing that data pipelines must scale like the compute layer.

Why Is Observability Essential?

After deployment, models require continuous monitoring to catch performance degradation, bias, and security threats. Tools like Arize AI and WhyLabs track metrics and detect drift. Governance platforms like Credo AI and Aporia ensure compliance with fairness and privacy requirements. Observability grows critical as models interact with real-time data and adapt via reinforcement learning.

How Do Orchestration Frameworks Work?

LangChain, LlamaIndex, Modal, and Foundry allow developers to stitch together multiple models or services to build LLM agents, chatbots, and autonomous workflows. These frameworks manage state, context, and errors. Clarifai’s platform offers built-in workflows and compute orchestration for both local and cloud environments. With Clarifai’s Local Runners, you can train models where data resides and deploy inference on Clarifai’s managed platform for scalability and privacy.
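As a rough illustration of the pattern these frameworks implement—chained steps, shared state, and per-step error handling—here is a minimal Python sketch with stubbed model calls (the `retrieve`/`generate` functions are hypothetical placeholders, not any framework's actual API):

```python
# Minimal sketch of the orchestration pattern: chain steps, carry
# shared state, and handle per-step errors. Model calls are stubbed;
# a real agent would call an LLM or tool API at each step.

def retrieve(state):
    # Hypothetical retrieval step: look up context for the query.
    state["context"] = f"docs matching '{state['query']}'"
    return state

def generate(state):
    # Hypothetical generation step: produce an answer from context.
    state["answer"] = f"Answer to '{state['query']}' using {state['context']}"
    return state

def run_pipeline(query, steps):
    state = {"query": query}
    for step in steps:
        try:
            state = step(state)
        except Exception as exc:  # record which step failed, then stop
            state["error"] = f"{step.__name__}: {exc}"
            break
    return state

result = run_pipeline("What is RAG?", [retrieve, generate])
print(result["answer"])
```

Production frameworks add persistence, retries, and context-window management on top of this basic shape.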

Expert Insights

  • Production gap: Only 5–10% of businesses have generative AI in production because DataOps and orchestration are too complex.
  • Workflow automation: Orchestration frameworks are essential as AI moves from static endpoints to agent-based applications.
  • Clarifai integration: Clarifai’s dataset management, annotations, and workflows make DataOps and MLOps accessible at scale.

What Criteria Matter When Comparing AI Infrastructure Providers?

How Important Are Compute Power and Scalability?

Having cutting-edge hardware is essential. Providers should offer latest GPUs or specialized chips (H100, B200, Trainium) and support large clusters. Compare network bandwidth (InfiniBand vs. Ethernet) and memory bandwidth because transformer models are memory-bound. Scalability depends on a provider’s ability to quickly expand capacity across regions.

Why Is Pricing Transparency Crucial?

Hidden expenses can derail projects. Many hyperscalers have complex pricing models based on compute hours, storage, and egress. AI-native clouds like CoreWeave and Lambda Labs stand out with simple pricing. Consider reserved capacity discounts, spot pricing, and serverless inference to minimize costs. Clarifai’s pay-as-you-go model auto-scales inference for cost optimization.
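To see why pricing structure matters as much as the headline rate, here is a back-of-envelope comparison with purely hypothetical rates (none of these numbers reflect any provider's actual prices):

```python
# Illustrative cost comparison (all rates hypothetical): on-demand vs.
# reserved vs. serverless billing for one GPU over a month.

HOURS_PER_MONTH = 730

def monthly_cost(rate_per_hour, utilization=1.0):
    """Cost of one hourly-billed GPU at a given busy fraction."""
    return rate_per_hour * HOURS_PER_MONTH * utilization

on_demand  = monthly_cost(4.00)             # always-on instance
reserved   = monthly_cost(4.00 * 0.6)       # e.g. a 40% reserved discount
serverless = monthly_cost(6.00, 0.10)       # higher rate, billed only 10% busy

print(f"on-demand:  ${on_demand:,.0f}")     # -> $2,920
print(f"reserved:   ${reserved:,.0f}")      # -> $1,752
print(f"serverless: ${serverless:,.0f}")    # -> $438
```

The takeaway: for bursty workloads, a higher per-hour serverless rate can still be the cheapest option, while steady training jobs favor reserved capacity.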

How Does Performance and Latency Affect Your Choice?

Performance varies across hardware generations, interconnects, and software stacks. MLPerf benchmarks offer standardized metrics. Latency matters for real-time applications (e.g., chatbots, self-driving cars). Specialized chips such as Groq’s LPU and Etched’s Sohu target extremely low inference latencies. Evaluate how providers handle bursts and maintain consistent performance.
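When comparing latency, measure tail percentiles rather than the mean; a slow tail is what users notice. A short sketch using simulated request times (the distribution here is invented for illustration—swap the stub for real endpoint timings):

```python
# Tail latency, not just the mean: p50/p95/p99 of simulated request
# times. Most requests are fast, with rare slow outliers.
import random
import statistics

random.seed(0)
latencies_ms = [random.gauss(40, 10) for _ in range(1000)] + \
               [random.gauss(400, 50) for _ in range(20)]  # rare slow bursts

latencies_ms.sort()

def pct(p):
    """Latency at the p-th percentile of the sorted sample."""
    return latencies_ms[int(p / 100 * len(latencies_ms)) - 1]

print(f"mean={statistics.mean(latencies_ms):.1f}ms "
      f"p50={pct(50):.1f}ms p95={pct(95):.1f}ms p99={pct(99):.1f}ms")
```

Here the mean looks healthy while p99 is an order of magnitude worse—exactly the gap a provider comparison should surface.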

Why Focus on Sustainability and Energy Efficiency?

AI’s environmental impact is significant:

  • Data centers used 460 TWh of electricity in 2022; projected to exceed 1,050 TWh by 2026.
  • Training GPT-3 consumed 1,287 MWh and emitted 552 tons of CO₂.
  • Photonic chips offer near-zero energy convolution, and cooling accounts for considerable water use.

Choose providers committed to renewable energy, efficient cooling, and carbon offsets. Clarifai’s ability to orchestrate compute on local hardware reduces data transport and emissions.

How Does Security & Compliance Affect Decisions?

AI systems must protect sensitive data and follow regulations. Ask about SOC2, ISO 27001, and GDPR certifications. 55% of businesses report increased cyber threats after adopting AI, and 46% cite cybersecurity gaps. Look for providers with encryption, granular access controls, audit logging, and zero-trust architectures. Clarifai offers enterprise-grade security and on-prem deployment options.

What About Ecosystem & Integration?

Choose providers compatible with popular frameworks (PyTorch, TensorFlow, JAX), container tools (Docker, Kubernetes), and hybrid deployments. A broad partner ecosystem enhances integration. Clarifai’s API interoperates with external data sources and supports REST, gRPC, and Edge run times.

Expert Insights

  • Skills shortage: 61% of firms lack computing specialists; 53% lack data scientists.
  • Capital intensity: Building full-stack AI infrastructure costs billions—only well-funded companies can compete.
  • Risk management: Investments should align with business goals and risk tolerance, as TrendForce advises.

What Is the Environmental Impact of AI Infrastructure?

How Big Are the Energy and Water Demands?

AI infrastructure consumes huge amounts of resources. Data centers used 460 TWh of electricity in 2022 and may surpass 1,050 TWh by 2026. Training GPT-3 used 1,287 MWh and emitted 552 tons of CO₂. A single AI inference can consume roughly five times the electricity of a typical web search, and cooling demands around 2 liters of water per kilowatt-hour.
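A quick sanity check of those figures: at roughly 2 liters of cooling water per kilowatt-hour, the reported GPT-3 training energy implies water use on the order of millions of liters:

```python
# Back-of-envelope check of the figures above: cooling water implied
# by GPT-3's reported 1,287 MWh training run at ~2 L per kWh.

training_mwh = 1287
litres_per_kwh = 2

kwh = training_mwh * 1000                 # 1 MWh = 1,000 kWh
cooling_litres = kwh * litres_per_kwh

print(f"{kwh:,} kWh -> ~{cooling_litres / 1e6:.1f} million litres of water")
```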

How Are Data Centers Adapting?

Data centers are adopting energy-efficient chips, liquid cooling, and renewable power. HPE’s fanless liquid-cooled design reduces electricity use and noise. Photonic chips cut resistive losses and heat. Companies like Iren and Lightmatter build data centers tied to renewable energy. The ACEEE warns that AI data centers could use 9% of U.S. electricity by 2030, advocating energy-per-AI-task metrics and grid-aware scheduling.

What Sustainable Practices Can Businesses Adopt?

  • Better scheduling: Run non-urgent training jobs during off-peak periods to utilize surplus renewable energy.
  • Model efficiency: Apply techniques like state-space models and Mixture-of-Experts to reduce compute needs.
  • Edge inference: Deploy models locally to reduce data center traffic and latency.
  • Monitoring & reporting: Track per-model energy use and work with providers who disclose carbon footprints.
  • Clarifai’s local runners: Train on-prem and scale inference via Clarifai’s orchestrator to cut data transfer.
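The "better scheduling" idea above can be sketched as a simple carbon-aware scheduler that defers a non-urgent job until a grid carbon-intensity forecast (hypothetical numbers here) drops below a cap:

```python
# Sketch of carbon-aware scheduling: pick the first hour whose grid
# carbon intensity is under a cap, else fall back to the cleanest hour.

def pick_start_hour(forecast_g_per_kwh, max_g_per_kwh):
    """Return the first hour under the cap, else the cleanest hour."""
    for hour, intensity in enumerate(forecast_g_per_kwh):
        if intensity <= max_g_per_kwh:
            return hour
    return min(range(len(forecast_g_per_kwh)),
               key=lambda h: forecast_g_per_kwh[h])

forecast = [420, 390, 310, 180, 150, 220, 380]  # gCO2/kWh, next 7 hours
print("start training at hour", pick_start_hour(forecast, 200))  # -> hour 3
```

Real schedulers would also weigh deadline constraints and spot-pricing, but the core loop is this simple.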

Expert Opinions

  • Future grids: The ACEEE recommends aligning workloads with renewable availability.
  • Transparent metrics: Without clear metrics, companies risk overbuilding infrastructure.
  • Continuous innovation: Photonic computing, RISC-V, and dynamic scheduling are critical for sustainable AI.

[Image: Sustainability Ledger]


What Are the Challenges and Future Trends in AI Infrastructure?

Why Are Compute Scalability and Memory Bottlenecks Critical?

As Moore’s Law slows, scaling compute becomes difficult. Memory bandwidth now limits transformer training. Techniques like Ring Attention and KV-cache optimization reduce compute load. Mixture-of-Experts distributes work across multiple experts, lowering memory needs. Future GPUs will feature larger caches and faster HBM.
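To see why memory, not arithmetic, often binds transformer inference, consider a rough KV-cache size estimate for a hypothetical decoder model (all parameters below are illustrative, not any specific model's configuration):

```python
# Rough KV-cache sizing for a hypothetical decoder-only transformer.
# Every generated token must re-read this cache, so its size drives
# the memory-bandwidth bottleneck.

def kv_cache_gb(layers, heads, head_dim, seq_len, batch, bytes_per=2):
    # 2x for keys and values; fp16 = 2 bytes per element
    elems = 2 * layers * heads * head_dim * seq_len * batch
    return elems * bytes_per / 1e9

# e.g. 80 layers, 64 heads of dim 128, 32k context, batch of 8
print(f"~{kv_cache_gb(80, 64, 128, 32768, 8):.0f} GB of KV cache")  # -> ~687 GB
```

At that scale the cache alone exceeds a single GPU's memory, which is why techniques like KV-cache quantization and Mixture-of-Experts matter.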

What Drives Capital Intensity and Supply Chain Risks?

Building AI infrastructure is extremely capital-intensive. Only large tech firms and well-funded startups can build chip fabs and data centers. Geopolitical tensions and export restrictions create supply chain risks, delaying hardware and driving the need for diversified architecture and regional production.

Why Are Transparency and Explainability Important?

Stakeholders demand explainable AI, but many providers keep performance data proprietary. Openness is difficult to balance with competitive advantage. Vendors are increasingly providing white-box architectures, open benchmarks, and model cards.

How Are Specialized Hardware and Algorithms Evolving?

Emerging state-space models and transformer variants require different hardware. Startups like Etched and Groq build chips tailored for specific use cases. Photonic and quantum computing may become mainstream. Expect a diverse ecosystem with multiple specialized hardware types.

What’s the Impact of Agent-Based Models and Serverless Compute?

Agent-based architectures demand dynamic orchestration. Serverless GPU backends like Modal and Foundry allocate compute on-demand, working with multi-agent frameworks to power chatbots and autonomous workflows. This approach democratizes AI development by removing server management.

Expert Opinions

  • Goal-driven strategy: Align investments with clear business objectives and risk tolerance.
  • Infrastructure scaling: Plan for future architectures despite uncertain chip roadmaps.
  • Geopolitical awareness: Diversify suppliers and develop contingency plans to handle supply chain disruptions.

How Should Governance, Ethics, and Compliance Be Addressed?

What Does the Governance Layer Involve?

Governance covers security, privacy, ethics, and regulatory compliance. AI providers must implement encryption, access controls, and audit trails. Frameworks like SOC2, ISO 27001, FedRAMP, and the EU AI Act ensure legal adherence. Governance also demands ethical considerations—avoiding bias, ensuring transparency, and respecting user rights.

How Do You Manage Compliance and Risk?

Perform risk assessments considering data residency, cross-border transfers, and contractual obligations. 55% of businesses experience increased cyber threats after adopting AI. Clarifai helps with compliance through granular roles, permissions, and on-premise options, enabling safe deployment while reducing legal risks.

Expert Opinions

  • Transparency challenge: Stakeholders demand greater transparency and clarity.
  • Fairness and bias: Evaluate fairness and bias within the model lifecycle, using tools like Clarifai’s Data Labeler.
  • Regulatory horizon: Stay updated on emerging laws (e.g., EU AI Act, US Executive Orders) and adapt infrastructure accordingly.

Final Thoughts and Suggestions

AI infrastructure is evolving rapidly as demand and technology progress. The market is shifting from generic cloud platforms to specialized providers, custom chips, and agent-based orchestration. Environmental concerns are pushing companies toward energy-efficient designs and renewable integration. When evaluating vendors, organizations must look beyond performance to consider cost transparency, security, governance, and environmental impact.

Actionable Recommendations

  • Choose hardware and cloud services tailored to your workload (training, inference, deployment). Use dedicated chips (like Trainium or Sohu) for high-volume inference; reserve GPUs for large training jobs.
  • Plan capacity ahead: The demand for GPUs often exceeds supply. Reserve resources or partner with providers who can guarantee availability.
  • Optimize sustainability: Use model-efficient techniques, schedule jobs during renewable peaks, and choose providers with transparent carbon reporting.
  • Prioritize governance: Ensure providers meet compliance standards and offer robust security. Include fairness and bias monitoring from the start.
  • Leverage Clarifai: Clarifai’s platform manages datasets, annotations, model deployment, and orchestration. Local runners allow on-prem training and seamless scaling to the cloud, balancing performance, cost, and data sovereignty.

FAQs

Q1: How do AI infrastructure and IT infrastructure differ?
A: AI infrastructure uses specialized accelerators, DataOps pipelines, observability tools, and orchestration frameworks for training and deploying ML models, whereas traditional IT infrastructure handles generic compute, storage, and networking.

Q2: Which cloud service is best for AI workloads?
A: It depends on the needs. AWS offers the most custom chips and managed services; Google Cloud excels with high-performance TPUs; Azure integrates seamlessly with business tools. For GPU-heavy workloads, specialized clouds like CoreWeave and Lambda Labs may provide better value. Compare compute options, pricing transparency, and ecosystem support.

Q3: How can I make my AI deployment more sustainable?
A: Use energy-efficient hardware, schedule jobs during periods of low demand, employ Mixture-of-Experts or state-space models, partner with providers investing in renewable energy, and report carbon metrics. Running inference at the edge or using Clarifai’s local runners reduces data center usage.

Q4: What should I look for in start-up AI clouds?
A: Seek transparent pricing, access to the latest GPUs, compliance certifications, and reliable customer support. Understand their approach to demand spikes, whether they offer reserved instances, and evaluate their financial stability and growth plans.

Q5: How does Clarifai integrate with AI infrastructure?
A: Clarifai provides a unified platform for dataset management, annotation, model training, and inference deployment. Its compute orchestrator connects to multiple cloud providers or on-prem servers, while local runners enable training and inference in controlled environments, balancing speed, cost, and compliance.

 



How Onboarding Teams of AI Agents Drives Productivity and Revenue for Businesses


AI is no longer solely a back-office tool. It’s a strategic partner that can augment decision-making across every line of business.

Whether users aim to reduce operational overhead or personalize customer experiences at scale, custom AI agents are key.

As AI agents are adopted across enterprises, managing their deployment will require a deliberate strategy. The first steps are architecting the enterprise AI infrastructure to optimize for fast, cost-efficient inference and creating a data pipeline that keeps agents continuously fed with timely, contextual information.

Alongside human and hardware resourcing, onboarding AI agents will become a core strategic function for businesses as leaders orchestrate digital talent across the organization.

Here’s how to onboard teams of AI agents:

1. Choose the Right AI Agent for the Task

Just as human employees are hired for specific roles, AI agents must be selected and trained based on the task they’re meant to perform. Enterprises now have access to a variety of AI models — including for language, vision, speech and reasoning — each with unique strengths.

For that reason, proper model selection is critical to achieving business outcomes:

  • Choose a reasoning agent to solve complex problems that require puzzling through answers.
  • Use a code-generation copilot to assist developers with writing, changing and merging code.
  • Deploy a video analytics AI agent for analyzing site inspections or product defects.
  • Onboard a customer service AI assistant that’s grounded in a specific knowledge base — rather than a generic foundation model.

Model selection affects agent performance, costs, security and business alignment. The right model enables the agent to accurately address business challenges, align with compliance requirements and safeguard sensitive data. Choosing an unsuitable model can lead to overconsumption of computing resources, higher operational costs and inaccurate predictions that negatively impact agent decision-making.

With software like NVIDIA NIM and NeMo microservices, developers can swap in different models and connect tools based on their needs. The result: task-specific agents fine-tuned to meet a business’ goals, data strategy and compliance requirements.

2. Upskill AI Agents by Connecting Them to Data

Onboarding AI agents requires building a strong data strategy.

AI agents work best with a consistent stream of data that’s specific to the task and the business they’re operating within.

Institutional knowledge — the accumulated wisdom and experience within an organization — is a crucial asset that can often be lost when employees leave or retire. AI agents can play a pivotal role in capturing and preserving this knowledge for employees to use.

  • Connecting AI to data sources: To function at their best, AI agents must interpret a variety of data types, from structured databases to unstructured formats such as PDFs, images and videos. Such connection enables the agents to generate tailored, context-aware responses that go beyond the capabilities of a standalone foundation model, delivering more precise and valuable outcomes.
  • AI as a knowledge repository: AI agents benefit from systems that capture, process and reuse data. A data flywheel continuously collects, processes and uses information to iteratively improve the underlying system. AI systems benefit from this flywheel, recording interactions, decisions and problem-solving approaches to self-optimize their model performance and efficiency. For example, integrating AI into customer service operations allows the system to learn from every conversation, capturing valuable feedback and questions. This data is then used to refine responses and maintain a comprehensive repository of institutional knowledge.

NVIDIA NeMo supports the development of powerful data flywheels, providing the tools for continuously curating, refining and evaluating data and models. This enables AI agents to improve accuracy and optimize performance through ongoing adaptation and learning.
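As a toy illustration of the data-flywheel idea—not NeMo's actual API—here is a minimal loop that records interactions and promotes human corrections into reusable knowledge:

```python
# Toy data flywheel: log each interaction, collect feedback, and
# promote corrected answers into a reusable knowledge store.
# (A production flywheel would curate datasets and retrain/fine-tune.)

class Flywheel:
    def __init__(self):
        self.log = []        # every interaction, for later curation
        self.knowledge = {}  # question -> corrected answer

    def record(self, question, answer, correction=None):
        self.log.append((question, answer, correction))
        if correction:  # human feedback improves future answers
            self.knowledge[question] = correction

    def answer(self, question, fallback):
        return self.knowledge.get(question, fallback)

fw = Flywheel()
fw.record("reset password?", "See docs.", correction="Use Settings > Security.")
print(fw.answer("reset password?", "See docs."))  # learned answer wins
```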

3. Onboard AI Agents Into Lines of Business

Once enterprises create the cloud-based, on-premises or hybrid AI infrastructure to support AI agents and refine the data strategy to feed those agents timely and contextual information, the next step is to systematically deploy AI agents across business units, moving from pilot to scale.

According to a recent IDC survey of 125 chief information officers, the top three areas that enterprises are looking to integrate agentic AI are IT processes, business operations and customer service.

In each area, AI agents help enhance the productivity of existing employees, such as by automating the ticketing process for IT engineers or giving employees easy access to data to help serve customers.

AI agents in the enterprise could also be onboarded for:

[Infographic: four ways AI agents can be used to improve business workflows]

  • Collaboration: automatically provide data and information across groups of people.
  • Content management: automate workflows, capture and analyze metrics, and create content.
  • Customer resource management: analyze outcomes for workflows such as lead qualification, customer outreach or contact center management.
  • Enterprise resource planning: automate financial transactions, or manage supply levels and ordering.

For telecom operations, Amdocs builds verticalized AI agents using its amAIz platform to handle complex, multistep customer journeys — spanning sales, billing and care — and advance autonomous networks from optimized planning to efficient deployment. This helps ensure performance of the networks and the services they support.

NVIDIA has partnered with various enterprises, such as enterprise software company ServiceNow, and global systems integrators, like Accenture and Deloitte, to build and deploy AI agents for maximum business impact across use cases and lines of business.

4. Provide Guardrails and Governance for AI Agents

Just like employees need clear guidelines to stay on track, AI models require well-defined guardrails to ensure they provide reliable, accurate outputs and operate within ethical boundaries.

  • Topical guardrails: Topical guardrails prevent the AI from veering into areas where it isn’t equipped to provide accurate answers. For instance, a customer service AI assistant should focus on resolving customer queries rather than drifting into unrelated topics such as upsells and offerings.
  • Content safety guardrails: Content safety guardrails moderate human-LLM interactions by classifying prompts and responses as safe or unsafe and tagging violations by category when unsafe. These guardrails filter out unwanted language and make sure references are made only to reliable sources, so the AI’s output is trustworthy.
  • Jailbreak guardrails: With a growing number of agents having access to sensitive information, the agents could become vulnerable to data breaches over time. Jailbreak guardrails are designed to help with adversarial threats as well as detect and block jailbreak and prompt injection attempts targeting LLMs. These help ensure safer AI interactions by identifying malicious prompt manipulations in real time.
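As a toy illustration only—real guardrail systems use trained classifiers and programmable policies, not keyword matching—a topical guardrail might look like:

```python
# Minimal topical-guardrail sketch: block prompts outside an allow-list
# of topics before they reach the model. Illustrative only; production
# guardrails classify intent with trained models.

ALLOWED_TOPICS = {"billing", "orders", "returns", "account"}

def on_topic(prompt):
    """Crude topical check: does the prompt mention an allowed topic?"""
    return bool(set(prompt.lower().split()) & ALLOWED_TOPICS)

def guarded_reply(prompt, model=lambda p: f"[model answers: {p}]"):
    if not on_topic(prompt):
        return ("I can only help with billing, orders, returns, "
                "or account questions.")
    return model(prompt)

print(guarded_reply("where is my orders status"))  # reaches the model
print(guarded_reply("tell me a joke"))             # blocked by the rail
```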

NVIDIA NeMo Guardrails empower enterprises to set and enforce domain-specific guidelines by providing a flexible, programmable framework that keeps AI agents aligned with organizational policies, helping ensure they consistently operate within approved topics, maintain safety standards and comply with security requirements with the least latency added at inference.

Get Started Onboarding AI Agents

The best AI agents are not one-size-fits-all. They’re custom-trained, purpose-built and continuously learning.

Business leaders can start their AI agent onboarding process by asking:

  • What business outcomes do we want AI to drive?
  • What knowledge and tools does the AI need access to?
  • Who are the human collaborators or overseers?

In the near future, every line of business will have dedicated AI agents — trained on its data, tuned to its goals and aligned with its compliance needs. The organizations that invest in thoughtful onboarding, secure data strategies and continuous learning are poised to lead the next phase of enterprise transformation.

Watch this on-demand webinar to learn how to create an automated data flywheel that continuously collects feedback to onboard, fine-tune and scale AI agents across enterprises.

Stay up to date on agentic AI, NVIDIA Nemotron and more by subscribing to NVIDIA AI news, joining the community and following NVIDIA AI on LinkedIn, Instagram, X and Facebook. Explore the self-paced video tutorials and livestreams.



visual studio – How to Migrate from Microsoft HTML Help Workshop 1.4 (.chm) to new Help Viewer (.mshc) format, directly or indirectly


I’m working on a Windows Application that has been around more than 20 years. It has a help file (.chm) that is built manually outside of the solution (i.e. not by Visual Studio or MSBuild) by compiling it in Microsoft HTML Help Workshop 1.4 from 1999. I would like to migrate this to the latest format such that it can be built as part of the release mode build process.

Is there a way to import the project (based on the .hhp file) into a tool that builds the modern .mshc-format help files? According to this Wikipedia article there was something called Microsoft Help 2, which was “the help engine used in Microsoft Visual Studio 2002/2003/2005/2008”; it came after HTML Help Workshop and was succeeded by Microsoft Help Viewer, supported starting with Visual Studio 2010.

Preferably, there exits a way to migrate the .hhp/.chm project directly to Help Viewer, or maybe I have to download VS2010 and do it in two steps if indeed those two migrations are themselves supported. This stuff is so old it’s hard to find relevant information on it. Thanks.