Roadmap for AI in Visual Studio (November)


Today, we’re excited to share our public roadmap, which outlines the next steps in evolving Visual Studio with AI-powered agentic experiences. With every month, we aim to deliver smarter, faster, and more intuitive tools that enhance your coding experience.

Disclaimer: The items outlined here represent ongoing work for the month. They are not commitments or guarantees for delivery within the current month. Upvote the features you or your organization care about most, so we know what to prioritize. With that said, here is what we are working on!

New Agents

We’re streamlining how you find and switch between modes and making sure both built-in and extension-provided agents can handle more complex workflows. New agents are in progress, and we are even working on supporting multiple agents at once.

Agent Mode/Chat

We’ve been listening to your feedback on Agent Mode and Chat, and we’re making some big improvements.

  • Tool call improvements
  • Planning-led development

Model Context Protocol (MCP)

We want you to bring your entire development stack into Visual Studio, backed by the same security, governance, and trust you already expect from our product. This sprint, we’re focused on reaching full MCP spec support, improving the UX, and enhancing your governance controls.

Models

We’re committed to giving you access to the latest models, and in Visual Studio we carefully evaluate them to make sure you get the best possible experience. We are continuing to expand even further.

 

To make Visual Studio a truly AI-integrated IDE, we want to ensure that Copilot is seamlessly available at every step of your development workflow—not just for writing code, but also for searching, fixing errors, writing unit tests, and even committing and pushing your changes.

We’re excited for you to try these new experiences soon. If you have feedback, post it in the developer community ticket linked above. For other ideas or suggestions, drop a comment below or create a new ticket—our team reviews them all.

Thanks 😊

A Unified Experience for all Coding Agents


November 5, 2025 by Burke Holland, @burkeholland

If we had to pick one word to describe the past year, it would probably be “Agent”.

Agents took over VS Code in 2025. We released agent mode for VS Code, integration for the Copilot coding agent (cloud), and the new GitHub Copilot CLI. But Copilot is not the only agent game in town. There are now more coding agents than ever – including options from OpenAI and Anthropic.

With all these choices, things got better for developers but the agent ecosystem got a little more fragmented. Subscription hopping, tool juggling, and the constant FOMO on the latest agent trend is now the norm. This year at GitHub Universe, we set out to fix that with a unified agent experience in VS Code. The first big step towards making that a reality was offering more agents in your Copilot subscription. And not just those with “Copilot” in their name.

OpenAI Codex Integration

OpenAI had a big year: they shipped the GPT-5 and GPT-5 Codex models, which were available in VS Code on day one through the standard model picker. But they also launched Codex – their coding agent, available as both a CLI tool and a VS Code extension. And it was a huge hit with developers.

At GitHub Universe, we announced you can now use OpenAI Codex with your GitHub Copilot Pro+ subscription. No additional subscription required.

To use this integration, install the OpenAI Codex extension and sign in with GitHub Copilot.

OpenAI Codex sign-in panel in VS Code

When you use Codex with Copilot Pro+, Copilot handles all model calls and standard rate limits apply. You get code generation, code explanation, and all the features – no need to manage a separate OpenAI account.

With the addition of Codex, you now have four powerful coding agents in VS Code:

  • GitHub Copilot
  • Copilot coding agent (cloud)
  • GitHub Copilot CLI
  • OpenAI Codex

But with all these agents, it’s easy to get overwhelmed. What agents are running? Where are they running? What day is it?

That’s why we’ve introduced a new feature in VS Code for orchestrating all your agents – local or remote. We call it, “Agent Sessions”.

Agent Sessions

There’s a new view in the VS Code side bar called “Agent Sessions”. It gives you one place to manage all your agents, whether they’re running locally or in the cloud.

VS Code window with Agent Sessions sidebar showing Copilot, Coding Agent, CLI, and Codex statuses against a calm gray workspace

With Agent Sessions, you see all agent sessions for your project. You can check which agents are running, their status, and jump between sessions with a click.

All agents now have a new tabbed experience called “chat editors”. You can open the Copilot coding agent in a chat editor to watch its progress. You can even course-correct the agent mid-run. It’s common to send a prompt and realize you forgot something important. Before, you had to wait or cancel. Now, just open the tab, add an update, and watch the agent adjust its plan.

You can also delegate any task to any agent right from the Chat view.

VS Code showing the "Delegate" button from the chat, when clicked opens a menu of agents to delegate to

This unified Agent Sessions view makes VS Code a “mission control” for orchestrating all your agents, while keeping you in the editor where you do your best work. We’re excited to welcome OpenAI Codex today, and we’re working to bring more agents to your Copilot subscription in the future.

Planning Agent

A few months ago we introduced the concept of chat modes in VS Code. These are custom modes that let you augment or alter the behavior of the built-in agent prompt. When you use a chat mode to alter the agent behavior in VS Code, what you’re really doing is creating your own custom agent. So we’ve renamed “chat modes” to just “agents” to better reflect what they actually are.

To get you started building custom agents, we’ve added a new built-in agent called “Plan”.

Copilot chat in VS Code with Plan agent dropdown highlighted, planning guidance beside dark theme editor, label reads Plan for a focused tone.

The new Plan agent helps create a detailed plan from lazy prompts like “add drag and drop”. That’s an actual prompt I sent yesterday. No mention of what to add it to, what page, or whether to use a library. I do this a lot, and I bet I’m not alone.

With the Plan agent, Copilot asks the questions that need answers. It even recommends libraries for drag and drop and gives reasons to pick one over another.

Plan agent breaking down drag-and-drop into steps recommending React Beautiful DnD and React DnD with comparisons.

You can answer these with quick replies on separate lines so it knows which answer goes to which question. Here’s how I’d answer:

dnd-kit
yes - what kind of a question is this in 2025
link creation only

Pro tip: Change the “workbench.action.chat.submit” keybinding to “Ctrl + Enter” so you stop accidentally sending messages when you just want a new line. Your swear jar will thank you.
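If you want to try that remap, a minimal keybindings.json sketch might look like the following. The `workbench.action.chat.submit` command ID comes straight from the tip above; the `inChatInput` when-clause is my assumption about the chat context key, so check it against your build’s default keybindings:

```json
[
  // Send the chat message with Ctrl+Enter instead of Enter
  {
    "key": "ctrl+enter",
    "command": "workbench.action.chat.submit",
    "when": "inChatInput"
  },
  // Unbind plain Enter so it inserts a newline instead of submitting
  {
    "key": "enter",
    "command": "-workbench.action.chat.submit",
    "when": "inChatInput"
  }
]
```

Open this file via the Command Palette with “Preferences: Open Keyboard Shortcuts (JSON)”.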

When the Plan agent has enough info, it stops asking questions and asks if you’re ready to proceed. You can use the new “Handoff” feature in chat to either proceed or open the full plan in the editor.

Screenshot showing the Handoff feature in Copilot chat with options to proceed with implementation or open the plan in the editor.

Try different models to see which you like best for planning. We’ve found the Claude models are great at identifying missing context and edge cases, and asking the right questions.

If you’re like me, you’ll want to know how the Plan agent works so you can up your prompt engineering game. You can read the Plan prompt by choosing “Configure Agents” from the Command Palette and selecting Plan. It’s a great baseline for creating your own custom agents. I used it to create one called “Research” that recursively does internet research and writes up its findings.

These custom agents are also available when you delegate to other agents such as the Copilot CLI and the Copilot coding agent. Your custom agents work everywhere that you need them to.

Pro tip: You can find hundreds of custom instructions, prompt files and agents over on the awesome-copilot repo. If you haven’t checked that out yet, you’re missing out. It’s a treasure trove of inspiration and ready-made prompts.

Subagents

Context Confusion is a real problem with agents. The more you interact, the more context they track – and the more likely they are to get confused. There’s a whole new discipline for managing context called “Context Engineering”.

With the latest VS Code release, we’ve added a tool called “runSubagent” to help you manage context.

Subagents run independently from the main chat and have their own context. You can call one by adding the #runSubagent tool to your prompt. The LLM creates a prompt, hands it off to a subagent, and that agent only gets the context you send. It knows nothing about the rest of your chat, and your chat knows nothing about the subagent’s context. Subagents don’t pause for feedback and have access to most of the same tools as the main chat.

When a subagent finishes, it returns the final result to the main chat – and only that result joins the main context. Subagents keep your main chat lean while letting you go on sidebars and deep dives. For example, if you’re building an API and need to research authentication, spin up a subagent to do that.

Analyze the #file:api with #runSubagent and recommend the best authentication strategy for a web client consuming these endpoints.

You’ll know a subagent is running because you can see tool calls and model responses below the subagent action. In the screenshot below, that’s “Analyze app structure for auth”.

A subagent process running in VS Code with tool calls underneath the main agent action

We’re still exploring ways to help you manage context with agents, and subagents are just the beginning.

Looking Ahead

Agents are changing how we write code and how we work. You shouldn’t have to pick just one. You should be able to move between agents, keep fine-grained control over your context, and create your own custom agents to extend the various built-in agent prompts. With the unified agent experience in VS Code, you can now do all of that.

These are just a few highlights from this year’s GitHub Universe. Check out GitHub’s blog for all the updates as we work on a unified workflow for a multi-agent experience everywhere you need it.

I’ll leave you with this: it was only 12 months ago that we announced “Copilot Edits” and Claude support in Copilot. At this pace, imagine where we’ll be 12 months from now.

And as always, Happy Coding! 💙



visual studio – MessageBox Hides the Parent Window Under Other Open Windows When Closing in WPF/Winform Application


Ok, so I think I’ve figured out a solution / workaround

First off, when does this behavior occur?

As far as I can tell, this happens when the following conditions are met:

  • The Child Window opens a MessageBox in the Window.Closing event handler
  • The Parent Window sets itself as the Owner of the Child Window

And you do this before closing the child window:

  1. Focus on any program that is behind your program, so that it is now in front of your app
  2. Focus back to your program

Example:

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();
        Closing += MainWindow_Closing;
    }

    private void MainWindow_Closing(object? sender, System.ComponentModel.CancelEventArgs e)
    {
        // Show MessageBox in Window.Closing event handler
        if (MessageBox.Show("Are you sure you want to close?", "?", MessageBoxButton.YesNo) != MessageBoxResult.Yes)
            e.Cancel = true;
    }

    private void OpenWindow_Click(object sender, RoutedEventArgs e)
    {
        var subwindow = new MainWindow();
        subwindow.Owner = this;    // <-- Set child owner
        subwindow.Show();
    }
}

(In this example the parent window and the child window are the same class, but it works the same when they are different classes)

Solution

The solution that works for me is to just focus on the Owner in the Window.Closing event handler:

private void MainWindow_Closing(object? sender, System.ComponentModel.CancelEventArgs e)
{
    if (MessageBox.Show("Are you sure you want to close?", "?", MessageBoxButton.YesNo) != MessageBoxResult.Yes)
        e.Cancel = true;
    else if(Owner != null)
        // Focus on the owner if it is set and the closing was not cancelled
        Owner.Focus();
}

c++ – Open and run multiple cpp files simultaneously in Visual Studio


I’m using C++ in Visual Studio, and I need to be able to view multiple C++ files. I also need to be able to run those previous files as well as my current file.

I have tried Visual Studio Community 2022 as well as Insiders 2026, but I cannot find a way to do this reliably. I tried the CMakeLists.txt method, but then I have to define each new program in the file, and I cannot do that every time I need to run something. I tried the Open Folder option, but it asks for a directory, I cannot figure out how to set that up, and it seems like it would run into the same issue as CMake. I tried adding multiple programs to the same solution, but it won’t let me select only the one I actually want to run, and going into the properties and turning off inclusions for each program again takes too long.

The main issue seems to be that I need multiple programs to have the main() designation, and Visual Studio doesn’t know what to do at that point.
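For reference, the per-file CMakeLists.txt maintenance described above can be avoided with a glob-plus-foreach sketch like the one below, which turns every .cpp in the folder into its own executable target (so each file can keep its own main()). Project and target names here are placeholders; note that with GLOB you still need to re-run the CMake configure step when files are added:

```cmake
cmake_minimum_required(VERSION 3.20)
project(Scratchpad CXX)

# Collect every .cpp in this folder; each is assumed to define its own main().
file(GLOB cpp_sources CONFIGURE_DEPENDS "${CMAKE_CURRENT_SOURCE_DIR}/*.cpp")

foreach(src IN LISTS cpp_sources)
    # Derive a target name from the file name, e.g. hello.cpp -> target "hello"
    get_filename_component(target_name "${src}" NAME_WE)
    add_executable(${target_name} "${src}")
endforeach()
```

In Visual Studio’s Open Folder mode, each generated target then shows up in the startup-item dropdown, so you can pick which program to run without editing anything.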

Visual Studio Code Coverage Reports Not Listing Or Highlighting All Tested Code


using Visual Studio 17.14.18

I’m asking Visual Studio to give me a code coverage report, which it does. But there are lots of methods being used by the completed / passing tests that are not included in the report or highlighted as passing.

It’s odd because these overlooked methods are in the same classes as other methods that are also getting called and are getting picked up by the code coverage reports.

In this case the Read method is getting highlighted and included in the code coverage report, but the two Write methods are not. The only thing that appears to be different is that the two Write methods return tuples and Read doesn’t.

public void LogEvent()
{

    XX.Data.EventLogs.Interactions.Events.Purge();

    DateTime EventDTG = DateTime.Now;
    Guid EventGUID = Guid.NewGuid();
    Guid TransactionGUID = Guid.NewGuid();
    string Program = "SYSTEM";
    int EventID = 999999;
    String Subject = "Event Subject";
    String Description = "Event Description";
    String Notes = "Event Notes";

    var t = XX.Universalis.Logging.Write(EventDTG, EventGUID, TransactionGUID, Program, EventID, Subject, Description, Notes);

    Assert.IsTrue(t.Result == true);
    Assert.IsTrue(t.IDNumber > 0);

    XX.Data.EventLogs.Models.Events TheEvent = XX.Universalis.Logging.Read(t.IDNumber);

    // Can't figure out how to test if the date is correct.
    // Assert.IsTrue(DateTime.TryParse(TheEvent.DTGCreated, out _));
    Assert.IsTrue(TheEvent.EventID == 999999);
    Assert.IsTrue(TheEvent.Program == "SYSTEM");
    Assert.IsTrue(TheEvent.Subject == "Event Subject");
    Assert.IsTrue(TheEvent.Description == "Event Description");
    Assert.IsTrue(TheEvent.Notes == "Event Notes");
    Assert.IsTrue(Guid.TryParse(TheEvent.TransactionGUID.ToString(), out _));
    Assert.IsTrue(Guid.TryParse(TheEvent.EventGUID.ToString(), out _));
}

namespace XX.Universalis
{
    public static class Logging
    {
        public static (bool Result, int IDNumber) Write(DateTime DTGCreated, Guid EventGUID, Guid TransactionGUID, string Program, int EventID, string Title, string Description, string Notes)
        {
            var t = XX.Data.EventLogs.Interactions.Events.Write(DTGCreated, EventGUID, TransactionGUID, Program, EventID, Title, Description, Notes);
            return (t.Result, t.IDNumber);
        }

        public static (bool Result, int IDNumber) Write(DateTime DTGCreated, Guid EventGUID, Guid TransactionGUID, string Program, int EventID, string Subject, string Description, string Notes, Exception ex)
        {
            var t = XX.Data.EventLogs.Interactions.Events.Write(DTGCreated, EventGUID, TransactionGUID, Program, EventID, Subject, Description, Notes, ex);
            return (t.Result, t.IDNumber);
        }

        public static XX.Data.EventLogs.Models.Events Read(int IDNumber)
        {
            var context = new XX.Data.EventLogs.EFContext.EventLogsContext();
            return context.Events.Where(b => b.IDNumber == IDNumber).ToList().FirstOrDefault();
        }
    }
}

Here are my TestSettings

<?xml version="1.0" encoding="utf-8"?>
<!-- File name extension must be .runsettings -->
<RunSettings>
  <DataCollectionRunSettings>
    <DataCollectors>
      <DataCollector friendlyName="Code Coverage" uri="datacollector://Microsoft/CodeCoverage/2.0" assemblyQualifiedName="Microsoft.VisualStudio.Coverage.DynamicCoverageDataCollector, Microsoft.VisualStudio.TraceCollector, Version=11.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a">
        <Configuration>
          <CodeCoverage>
            <!-- Match fully qualified names of functions: -->
            <!-- (Use "\." to delimit namespaces in C# or Visual Basic, "::" in C++.)  -->
            
            
            <Functions>
              <Exclude>

                <!-- Testing-->
                <Function>Microsoft\..*</Function>
                <Function>System\..*</Function>
                <Function>TestFx\..*</Function>
                
                <Function>XX.UnitTests\..*</Function>
                
                <!-- Data Models and EFContext-->
                <Function>XX.Data.ClientManagement.EFContext\..*</Function>
                <Function>XX.Data.ClientManagement.Models\..*</Function>

                <Function>XX.Data.Connie.EFContext\..*</Function>
                <Function>XX.Data.Connie.Models\..*</Function>

                <Function>XX.Data.EventLogs.EFContext\..*</Function>
                <Function>XX.Data.EventLogs.Models\..*</Function>

                <Function>XX.Data.Medici.EFContext\..*</Function>
                <Function>XX.Data.Medici.Models\..*</Function>

                <Function>XX.Data.Sales.EFContext\..*</Function>
                <Function>XX.Data.Sales.Models\..*</Function>

              </Exclude>
            </Functions>
          </CodeCoverage>
        </Configuration>
      </DataCollector>
    </DataCollectors>
  </DataCollectionRunSettings>
</RunSettings>

c++ – Visual studio vs gcc


Myself, I only use the free Visual Studio edition, not the “full” one. Mostly interested in the C++ compiler and the debugger.

My CMake setup supports both Windows and Linux builds — which is a bit tricky at times, but since the initial setup work it’s mostly about adding new sources, which is much more convenient now compared to having two distinct build systems (initially, I used cmake for Linux and a VisualStudio-solution for Windows).

Along the same lines, I want to look into getting vcpkg for the Linux build as well, instead of having a “ReadMe” about which packages to install on the system. It would also solve the problem of Debian always having packages that are somewhere between outdated and very outdated 🙂

I use g++ to compile for Linux, and VisualC++ to compile for Windows. VC++ has the advantage that it is the “native” compiler for Windows; it can directly work with the Microsoft Windows SDK and produces all the debug-info that the VisualStudio debugger needs. Also, the FSF hates Windows — so their so-called ports are pretty rubbish.

In addition, VC++ is completely unrelated to g++ — sometimes, one of them complains about a problem that the other did not notice. While I still can’t guarantee that the portable part of the code is 100% standards-compliant, having 2 compilers accept it is better than just 1 (Linux- and Windows-specific sources are only fed through one compiler, obviously).

It also happens that an error message that’s completely mystifying on one compiler turns into a simple “oh, that’s what it’s complaining about… why didn’t you just say so” on the other.

I might even be adding clang++ as a third option, but I’m not sure how well it can coexist with g++ on Linux, and it also tries to mimic g++ shenanigans, so it’s likely less useful for catching portability issues.

In general, even if this weren’t meant to be portable, I generally prefer the native solutions if at all possible. g++ on Windows feels like a bad idea to me.
Of course, this would change if Microsoft killed the free edition, unless Windows-g++ turned out to be unbearable.

visual studio – How to compile 64-bit binaries for latest versions of OpenSSL (3.5.x) on the Windows 10


I am trying to compile the latest version of OpenSSL (3.5.4) on a Windows 10 machine.

Prerequisites are:

  1. I ran x64 Native Tools Command Prompt for Visual Studio 2022 Community Edition

  2. I downloaded the latest version of the zlib library (version 1.3.1) and compiled it successfully

  3. I downloaded the latest version of OpenSSL (version 3.5.4) and ran the following command to configure the build environment:

    perl Configure no-rc5 no-idea enable-mdc2 enable-zlib VC-WIN32 -I..\zlib /LIBPATH:..\zlib

    It completes without errors

  4. Ran nmake. Compilation completes successfully, but at the end of the process I receive the following error when the build tries to link the compiled .obj files into the final libcrypto-3.dll:

...
cmd /C ""link" /nologo /debug /machine:x64 /LIBPATH:..\zlib /dll /nologo /debug  \
/machine:x64 /LIBPATH:..\zlib @C:\Users\XXXX\AppData\Local\Temp\nmDAB3.tmp \
/implib:libcrypto.lib || (DEL /Q libcrypto-3.* libcrypto.lib & EXIT 1)"

crypto\aes\libcrypto-shlib-aes-586.obj : fatal error LNK1112: module machine type 'x86' conflicts with target machine type 'x64'

Analysis

It looks to me like the issue is that the linker is trying to mix 32-bit and 64-bit binaries, which obviously fails. So I used the dumpbin /headers command and checked gzlib.obj; the bitness of this object file is 64-bit. Then I checked other object files from the OpenSSL compilation and their bitness is also 64-bit. Specifically, the C files compiled by the CC compiler (CL.EXE pointing to the proper MS compiler capable of producing 64-bit binaries) are compiled correctly as 64-bit. The problem is caused by the object files produced by NASM; the reported libcrypto-shlib-aes-586.obj is indeed 32-bit.

Attempt

I tried to force NASM to produce a 64-bit binary by passing the -f win64 switch instead, but this fails miserably with a lot of errors like:

        "nasm"  -f win64  -o crypto\aes\libcrypto-shlib-aes-586.obj crypto\aes\libcrypto-shlib-aes-586.obj.asm
crypto\aes\libcrypto-shlib-aes-586.obj.asm:972: error: instruction not supported in 64-bit mode
crypto\aes\libcrypto-shlib-aes-586.obj.asm:973: error: instruction not supported in 64-bit mode
crypto\aes\libcrypto-shlib-aes-586.obj.asm:974: error: instruction not supported in 64-bit mode
...

Looking into aes-586.S, it seems to me that this is really 32-bit code, using instructions like:

    push    ebp
    push    ebx
    push    esi
    push    edi

and setting the bitness with section code use32 class=code align=64. I dug a bit deeper and I can see that all those assembly files are really generated by a set of PERL scripts. There is a PERLASM_SCHEME flag which controls the dialect of the assembler, but not really the bitness; the PERL scripts that generate the assembly files are 32-bit. Which makes sense also based on their names, referring to the 586 processor family.

But there is a PERL file named /crypto/aes/asm/aesni-x86_64.pl, suggesting there is a 64-bit implementation. I just have not found any logic in Configure and the generated makefile scripts which would drive its selection instead of picking up /crypto/aes/asm/aes-586.pl.

Which leads me to a surprising conclusion – at least on the Windows platform, it is not “possible” to build OpenSSL as 64-bit.

Well, I know that in software everything is possible, and I feel I am close, so if someone can please point me in the right direction on how to resolve this, it would be great.

c# – How to solve Windows .NET MAUI app publish errors?


I have a .NET 9 MAUI Windows application.

I’m trying to publish it using the following command:

dotnet publish --framework net9.0-windows10.0.19041.0 --configuration Debug 
       --self-contained true --output ./publish /p:WindowsPackageType=MSIX /p:GenerateAppxPackageOnBuild=true 
       --runtime win-x64 -p:UseMonoRuntime=false

The command fails with this error:

failed with 3 error(s) and 1 warning(s) (1.1s)
C:\Users\user.nuget\packages\microsoft.windowsappsdk\1.7.250606001\buildTransitive\Microsoft.Build.Msix.Packaging.targets(363,5): warning : Path to mspdbcmf.exe could not be found. A symbols package will not be generated. Review https://aka.ms/windowsappsdkdocs and ensure that all prerequisites for Windows App SDK development have been installed.
MakeAppx : error : You must include a valid app package manifest file named AppxManifest.xml in the source.
MakeAppx : error : Package creation failed.
MakeAppx : error : 0x80080203 – The specified package format is not valid: The file is not a valid app package because it is missing a required footprint file.

The file AppxManifest.xml exists and it’s generated in the folder /obj/Debug/net9.0-windows10.0.19041.0/win-x64/MsixContent.

I have tried almost everything I could find on the internet, including deleting the Bin & Obj folders.

The project compiles, builds and runs successfully from Visual Studio.

However, the command continues to fail.

Does anyone know how to solve that, please?

Join us at .NET Conf: Dive into the future of development with Visual Studio 2026


We’re thrilled to invite you to one of the most exciting events in the .NET ecosystem: .NET Conf. It runs from November 11th through the 13th and you’re invited!

This annual virtual conference is a must-attend for developers, architects, and enthusiasts looking to level up their skills and stay ahead of the curve in .NET and Visual Studio development.

dotnetconf image

.NET Conf brings together experts from Microsoft and the broader community to share insights, best practices, and the latest innovations. Whether you’re building web apps, mobile solutions, cloud services, or anything in between, there’s something for everyone. Sessions cover a wide range of topics, including performance optimizations, AI integration, cross-platform development, and more.

This year, we’re especially excited about the deep dives into Visual Studio 2026. You’ll get to explore tons of new features, enhancements, and productivity tools designed to make your coding life easier and more efficient. From improved debugging capabilities to seamless integration with emerging technologies, these sessions will give you a firsthand look at how Visual Studio is evolving to meet the demands of modern development workflows.

You’ll also get to hear from a bunch of folks on the Visual Studio team, sharing cool stuff about what’s new in Visual Studio 2026.

Nik Karpinsky shows how the new profiler agent can help you identify performance issues in your apps and fix them. This revolutionary feature will help you speed up your app in no time.

Mika Dumont explains how new technology makes upgrading apps to .NET 10 easier than ever. She also describes how it enhances the use of Azure cloud features. If you maintain older solutions, you don’t want to miss this.

Harshada Hole takes you through a whirlwind of new productivity features in the Visual Studio debugger. This is your first step in becoming a debugging rock star.

Jui Hanamshet and Oscar Obeso demo the latest innovations in Copilot for Visual Studio and how you can benefit from having AI by your side.

.NET Conf is free and virtual, so you can join from anywhere. It’s the perfect opportunity to get inspired, learn new tricks, and prepare for what’s next in .NET and Visual Studio.

.NET Conf kicks off soon! Head over to dotnetconf.com and click Add to calendar to save your spot. Don’t miss the Visual Studio 2026 sessions that will help you work smarter and build faster. We can’t wait to see you there.

The Visual Studio Team

How to see a Winforms custom control in Visual Studio designer?


I am following this tutorial to create a custom control that inherits from a control other than UserControl.

When opening the designer view of this control, instead of seeing a render of the control, I only see a black background with the message “To add components to your class, drag them from the Toolbox and use the Properties window to set their properties. To create methods and events for your class, switch to code view.”.

Visual Studio designer tab

When adding the CustomControl to a form, the form’s designer renders the CustomControl as expected.

The custom control being rendered inside the form’s visual designer in Visual Studio

This is the code of my CustomControl:

public partial class CustomControl1 : Button
{
    private int _counter = 0;

    public CustomControl1()
    {
        InitializeComponent();
    }

    protected override void OnPaint(PaintEventArgs pe)
    {
        // Draw the control
        base.OnPaint(pe);

        // Paint our string on top of it
        pe.Graphics.DrawString($"Clicked {_counter} times", Font, Brushes.Purple, new PointF(3, 3));
    }

    protected override void OnClick(EventArgs e)
    {
        // Increase the counter and redraw the control
        _counter++;
        Invalidate();

        // Call the base method to invoke the Click event
        base.OnClick(e);
    }
}

partial class CustomControl1
{
    /// <summary>
    /// Required designer variable.
    /// </summary>
    private System.ComponentModel.IContainer components = null;

    /// <summary>
    /// Clean up any resources being used.
    /// </summary>
    /// <param name="disposing">true if managed resources should be disposed; otherwise, false.</param>
    protected override void Dispose(bool disposing)
    {
        if (disposing && (components != null))
        {
            components.Dispose();
        }

        base.Dispose(disposing);
    }

    #region Component Designer generated code

    /// <summary>
    /// Required method for Designer support - do not modify 
    /// the contents of this method with the code editor.
    /// </summary>
    private void InitializeComponent()
    {
        SuspendLayout();
        ResumeLayout(false);
    }

    #endregion
}

Is this a limitation of Visual Studio designer?

Do I need to change something in order to see the control in the designer?