Building Long-Distance Next Edit Suggestions


February 26, 2026 by Vikram Duvvur, Gaurav Mittal, Benjamin Simmonds

Last February, we released next edit suggestions (NES) in GitHub Copilot. NES extends ghost text by not just inserting code at your cursor, but suggesting edits nearby, anticipating what you’d change next. This was a powerful step forward, but it only worked within a small window around your cursor. In real editing workflows, the next change you need to make is often several screens away.

That’s what we set out to solve with long-distance next edit suggestions: extending NES to predict and suggest edits anywhere in your file, not just near your current cursor position.

A far-away NES edit

From nearby edits to anywhere in the file

Think about a typical refactoring session. You rename a function and all function invocations elsewhere in the file also need updating. Or you change a parameter type, which now makes the validation logic 200 lines down incorrect. These are exactly the moments where you would expect NES to help you, but unfortunately, the next meaningful edit is far outside its effective window.

This creates a hard modeling problem. The search space explodes from a handful of nearby lines to every line in the file. And the cost of getting it wrong isn’t evenly split: a correct jump saves you real effort, but an unnecessary one interrupts your flow and makes you less likely to trust the next suggestion. The system must learn not only where to move, but also when not to move.

Rather than modifying the existing edit-generation model, we decided to use a multi-model approach. We trained a dedicated location model whose sole responsibility is to predict where the next edit should happen. Once a valid location is selected, the original NES model then generates the edit suggestion.

This separation has two benefits. First, each model can specialize in one task: one learns spatial intent (where to jump), while the other produces high-quality edits within a local window. Second, it enables us to iterate on location prediction independently, without disrupting ongoing improvements to the core NES model.
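As a sketch, the hand-off between the two models might look like this; `TwoStageNES`, `LocationModel`, and the method names are illustrative stand-ins, not the production code:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Suggestion:
    line: int
    edit: str

class TwoStageNES:
    """Sketch of the multi-model pipeline: a location model picks
    where the next edit should happen, then the edit model generates
    the change within a local window around that location."""

    def __init__(self, location_model, edit_model, window: int = 10):
        self.location_model = location_model  # predicts a target line, or None
        self.edit_model = edit_model          # generates an edit inside a window
        self.window = window

    def suggest(self, file_lines, cursor_line, edit_history) -> Optional[Suggestion]:
        target = self.location_model.predict(file_lines, cursor_line, edit_history)
        if target is None:              # model decided "no jump": stay quiet
            return None
        # Hand the edit model only a local window around the predicted location,
        # so it keeps operating in the same regime as standard NES.
        lo = max(0, target - self.window)
        hi = min(len(file_lines), target + self.window)
        edit = self.edit_model.generate(file_lines[lo:hi], target - lo)
        return Suggestion(line=target, edit=edit)
```

The key property is that a `None` from the location model short-circuits the pipeline entirely, which is what makes "when not to move" a first-class decision.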

Measuring success via an evaluation framework

Before training the location model, we needed a way to measure whether it was actually working for real-world editing scenarios.

We designed a structured three-step evaluation process:

  1. Identify common multi-edit workflows
  2. Construct representative cursor-jump examples
  3. Measure both jump and no-jump accuracy

Diagram of the three-step evaluation flow, showing the progression from real editing workflows to structured evaluation dataset to spatial intent metrics.

We started by analyzing how developers chain together edits in real-world scenarios – renaming, signature changes, documentation updates – rather than treating each edit as an isolated event. The common thread: edits ripple across multiple, non-adjacent locations in a file.

From these workflows, we built an evaluation dataset where each example includes the ground-truth next line to jump to, recent edit history, and cursor context.

Crucially, we measured both jump and no-jump accuracy. While many examples required predicting a new location, a meaningful subset required staying on the current line. A model that jumps too often can be just as disruptive as one that misses important transitions. Imagine getting a jump suggestion every time you’re halfway through typing a variable name.
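A minimal sketch of scoring the two cases separately (function and convention are illustrative, not the team's actual evaluation code): each ground-truth label is a target line, or `None` when the correct action is to stay put, and predictions mirror that convention.

```python
def jump_accuracy(truths, predictions, tolerance=0):
    """Score jump and no-jump examples separately.

    `None` means "stay on the current line"; otherwise the value is the
    target line number. Returns (jump_accuracy, no_jump_accuracy).
    """
    jump_hits = jump_total = stay_hits = stay_total = 0
    for truth, pred in zip(truths, predictions):
        if truth is None:                       # correct action: don't jump
            stay_total += 1
            stay_hits += pred is None
        else:                                   # correct action: jump to `truth`
            jump_total += 1
            jump_hits += pred is not None and abs(pred - truth) <= tolerance
    return (jump_hits / max(jump_total, 1),
            stay_hits / max(stay_total, 1))
```

Reporting the two numbers separately is what surfaces an over-eager model: it can score well on jump accuracy while scoring terribly on the no-jump cases.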

By grounding evaluation in realistic workflows and measuring both jump and no-jump cases, we ensured that offline metrics reflected how developers actually edit rather than artificial scenarios.

Building the training dataset

With evaluation in place, we turned to training data. While the evaluation dataset was small enough to construct by hand, training required data at a much larger scale. We started with the same dataset we curated for training the core NES model, which contains trajectories of how developers move through and edit a file.

By replaying these trajectories, we transformed every cursor movement into a training sample. After applying filters, such as ensuring the jump location appeared in the prompt, we had our training dataset.
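Replaying a trajectory into samples might look like this sketch, where `steps` is a hypothetical list of (cursor line, file snapshot) pairs and the prompt is a simple line window; the real prompt format is surely richer:

```python
def samples_from_trajectory(steps, context_lines=30):
    """Turn each cursor movement in an editing trajectory into a
    (prompt, target) training sample.

    The label for each step is the position the developer moved to
    next. Samples whose target falls outside the prompt window are
    dropped, mirroring the "jump location must appear in the prompt"
    filter described above.
    """
    samples = []
    for (cursor, file_lines), (next_cursor, _) in zip(steps, steps[1:]):
        lo = max(0, cursor - context_lines)
        hi = min(len(file_lines), cursor + context_lines)
        if not (lo <= next_cursor < hi):   # filter: target must be in prompt
            continue
        prompt = "\n".join(file_lines[lo:hi])
        samples.append((prompt, next_cursor - lo))
    return samples
```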

Training with supervised finetuning

To train the location model, we used Supervised Finetuning (SFT) with targeted hyperparameter search. Our strongest results came from a structured grid search centered around the hyperparameters of the existing NES model. By constraining the search space to values already known to perform well in a related setting, we were able to efficiently explore combinations and identify a high-performing configuration.
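The shape of such a search is simple to sketch; here `train_and_eval` stands in for the expensive train-then-score step, and the configuration keys are made up for illustration:

```python
from itertools import product

def grid_search(train_and_eval, base_config, search_space):
    """Exhaustively try every combination in a small search space
    centered on a known-good base configuration, keeping the best.

    `train_and_eval(config) -> score` is the expensive call that
    trains a candidate model and returns its offline metric.
    """
    best_score, best_config = float("-inf"), None
    keys = list(search_space)
    for values in product(*(search_space[k] for k in keys)):
        config = {**base_config, **dict(zip(keys, values))}
        score = train_and_eval(config)
        if score > best_score:
            best_score, best_config = score, config
    return best_config, best_score
```

Constraining each axis to a few values near the existing NES hyperparameters keeps the number of full training runs tractable.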

Before settling on this approach, we also experimented with Bayesian Optimization, a technique designed to optimize expensive black-box functions. In our case, each evaluation required training a model from scratch, making experimentation computationally costly. While theoretically appealing, this approach did not yield improvements over the more focused grid search.

Ultimately, the structured grid search produced our best-performing supervised model and provided a stable foundation for subsequent iterations.

Designing UX for distant edits

A better model isn’t enough if you never notice or trust the suggestions it produces. With standard NES, suggestions appear close to your cursor and within your immediate view, making them naturally discoverable. With long-distance NES, the most relevant edit may not be in your immediate vicinity. So, the UX has to solve a harder problem: surfacing distant edits without disrupting your flow.

Video of a far-away jump suggestion, showing how the widget adapts to a gradually reducing window size.

This comes down to balancing three concerns: keeping suggestions compact, making them readable, and minimizing how much of your code they obscure.

This is more than a discoverability problem. It’s a trust problem. When the system proposes moving your cursor elsewhere, you need to quickly assess whether that jump is relevant and worth your attention. The UI must communicate enough context to evaluate the suggestion without demanding a full context switch.

Rather than rendering large diffs inline or forcing attention shifts, we designed a compact widget that appears near your cursor and prefers empty space when available. The widget adapts to the surrounding editor layout, shrinking or expanding to fit naturally into whitespace such as at the end of a line or between blocks of code.

Because the full edit may be far away and potentially large, the widget does not attempt to render the entire suggestion. Instead, it provides a lightweight preview, an excerpt from one of the affected lines, rendered with diff-style highlighting. This gives you just enough context to judge relevance and decide whether to act.

If the preview looks useful, you can choose to jump to the suggested location and review or apply the full edit there. If not, you can continue editing uninterrupted.

Validating: from dogfooding to A/B tests

We always dogfood internally before shipping new capabilities, and long-distance NES was no exception. Early feedback revealed a clear pattern: the model was too eager to jump. Even when its predictions were directionally correct, frequent suggestions became distracting. The root cause was a dataset imbalance: far fewer “no jump” examples than jump examples. The model had learned to jump confidently but hadn’t learned when to stay put.

We rebalanced the dataset by expanding samples where the correct action was to remain on the current line, such as partially typed identifiers where jumping would not make sense. After retraining, both jump and no-jump accuracy improved, and suggestions felt noticeably more intentional.
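One simple way to do this kind of rebalancing is to oversample the underrepresented class; a sketch under the assumption that each sample carries a jump/no-jump label:

```python
import random

def rebalance(samples, is_jump, target_ratio=1.0, seed=0):
    """Oversample no-jump examples until they reach `target_ratio`
    times the number of jump examples.

    `is_jump(sample)` returns True for jump samples and False for
    stay-put samples.
    """
    rng = random.Random(seed)
    jumps = [s for s in samples if is_jump(s)]
    stays = [s for s in samples if not is_jump(s)]
    want = int(len(jumps) * target_ratio)
    if stays:
        while len(stays) < want:          # duplicate stay-put samples
            stays.append(rng.choice(stays))
    return jumps + stays
```

In practice the team expanded the no-jump set with new examples (such as partially typed identifiers) rather than pure duplication, but the effect on the class balance is the same.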

To validate at scale, we ran A/B tests comparing long-distance NES against standard NES. The results were encouraging: a 23% increase in code written via NES, along with improvements across other engagement metrics. But the experiment also surfaced a tradeoff. Far-away suggestions were rejected more often than standard NES. Some of this was expected given a new interaction pattern, but it signaled that the model still needed to be more selective about when to suggest a jump.

This wasn’t purely a modeling problem or purely a UX problem. It was both. Improving long-distance NES required tightening the model’s jump predictions while also ensuring the interface made it easy to assess and accept relevant suggestions.

Reinforcement Learning: Learning when not to jump

The validation results pointed to a clear conclusion: the supervised model needed more restraint.

To address this, we introduced a reinforcement learning stage using Reinforcement Learning with Verified Rewards (RLVR). Instead of relying solely on supervised labels, we added a grading signal based on how closely the model’s predicted jump location matched the eventual cursor movement. Predictions that aligned closely with actual editing behavior were rewarded more strongly, while unnecessary or poorly timed jumps were penalized.

This allowed the model to optimize directly for real editing conditions, without requiring new manual annotations or UX instrumentation.
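A reward of this shape might be sketched as follows; the constants and the distance-decay form are illustrative assumptions, not the published reward:

```python
def jump_reward(predicted_line, actual_next_line,
                wrong_action_penalty=-1.0, correct_stay_reward=1.0, scale=10.0):
    """Verified reward for a predicted jump: full credit when the
    prediction matches where the developer actually moved, decaying
    with distance, with explicit handling of the no-jump case.

    `predicted_line is None` means the model chose not to jump;
    `actual_next_line is None` means the developer stayed put.
    """
    if actual_next_line is None:
        # Developer stayed: reward restraint, penalize spurious jumps.
        return correct_stay_reward if predicted_line is None else wrong_action_penalty
    if predicted_line is None:
        return wrong_action_penalty       # missed a real jump
    # Reward decays smoothly with distance from the true location.
    return 1.0 / (1.0 + abs(predicted_line - actual_next_line) / scale)
```

Because the reward is computed mechanically from recorded cursor movements, it needs no human labeling, which is what makes the "verified rewards" setup cheap to scale.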

The result was a better balance between initiative and restraint. The updated model improved offline metrics and translated those gains into online performance, increasing code written via NES while reducing rejection rates. With those signals in place, we began shipping the improved version the following month.

What’s next?

Looking ahead, we plan to extend this work with cross-file suggestions, enabling the model to reason beyond the current file. We’re also exploring a unified model that predicts both the location and the content of the next edit together, which could improve overall suggestion relevance.

Try It Out

Long-distance next edit suggestions are available now in VS Code for users with a GitHub Copilot subscription – just make sure next edit suggestions and the github.copilot.nextEditSuggestions.extendedRange setting are enabled in VS Code. Give it a try the next time you’re doing refactoring work: renaming variables, updating function signatures, or making changes that ripple through your file. We’d love to hear your feedback!

Happy coding! 💙


Acknowledgements

A big shoutout to our developer community for the ongoing feedback that pushes us to deliver the best possible experiences with VS Code and GitHub Copilot. And a huge thanks to the researchers, engineers, product managers, and designers across GitHub and Microsoft who curated the training data, built the training pipeline, evaluation suites, and serving stack, and to the VS Code and GitHub Copilot teams for smooth model releases.

Making agents practical for real-world development


March 5, 2026 by VS Code Team, @code

Agents are taking on more complex and longer-running development tasks.

With the February 2026 release (1.110), we’re making those workflows more practical inside Visual Studio Code by giving you greater control over how agents behave, integrate into your tools, and retain project context across sessions.

From enforcing policies with hooks to guiding agents mid-response, validating UI features with integrated browser tools, and bringing structured skills directly into the editor, this release focuses on making agents reliable collaborators for real development work.

Give agents the right context

Codebases often have a complex architecture and project structure, and can consist of thousands of files. Agents might struggle to stay focused and find the right pieces of information, especially as sessions get longer.

In this release, we’re improving how agents handle large outputs efficiently, how they remember the most important parts of the task at hand, and giving you control over what information can be discarded.

Handle large outputs

Large diffs, generated files, or extensive logs can overwhelm a session if treated as inline context.

Agents and LLMs are great at working with files. VS Code now manages large outputs by streaming them to temporary files and prioritizing the most relevant information for the model. This keeps agents focused on the right details while optimizing context usage without additional work.
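VS Code's actual implementation isn't shown here, but the general pattern is easy to sketch: spill output past a size threshold to a temporary file and keep only an excerpt inline, leaving the full file available on demand. All names below are illustrative:

```python
import os
import tempfile

def absorb_tool_output(output: str, inline_limit: int = 2000):
    """Keep small outputs inline; spill large ones to a temporary
    file and return head/tail excerpts plus the file path, so the
    model can read more on demand instead of consuming the whole
    output as context.
    """
    if len(output) <= inline_limit:
        return {"inline": output, "file": None}
    fd, path = tempfile.mkstemp(suffix=".log")
    with os.fdopen(fd, "w") as f:
        f.write(output)
    half = inline_limit // 2
    excerpt = (output[:half]
               + f"\n... [full output in {path}] ...\n"
               + output[-half:])
    return {"inline": excerpt, "file": path}
```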

From a UI perspective, large tool outputs in a chat conversation can make it difficult to follow the overall flow of what’s happening in a session. VS Code now puts terminal output in collapsible sections, giving you the details if you want them, while keeping the session uncluttered.

Share agent memory

Agents in Visual Studio Code use memory to retain relevant context. Agent memory now spans coding agents, CLI workflows, and code review interactions.

Rather than starting from scratch each session, agents recall your preferences, apply lessons from previous tasks, and build up knowledge about your codebase over time.

Architectural decisions, naming conventions, and prior refactors remain part of the conversation, so you spend less time restating intent and more time continuing the work.

Compact long sessions

As conversations expand, VS Code automatically compacts older history. Earlier discussions are summarized, key decisions are preserved, and space is freed up for ongoing work.

Previously, you had no control over when context compaction was happening and what information was retained after compacting. Maybe you discussed several implementation variants, and only one specific one is important to remember and build upon.

Now you can manually run context compaction for a session by typing /compact. And in doing so, you can give the agent additional instructions on what information to keep or discard.

Especially when working with long-running sessions and dealing with lots of context, controlling compaction keeps the agent focused on the key information.

Screenshot showing the context window control and the compact option.

Agent controls

As agents take on more responsibility, the way you interact with them matters just as much as what they generate. These updates make it easier to control the conversation and guide outcomes during active work.

Guide the agent while it works

Agents sometimes head down the wrong path, and it’s often immediately obvious, even before they have finished the request.

Previously, you had to wait for a response to complete before you could steer it in a different direction. Now, you can intervene while the agent is generating a response, guiding the direction of the work without restarting or losing context.

And if you think of extra tasks the agent should perform, you can now queue follow-on requests that the agent picks up once it has finished its current task. If you queue up multiple requests, you can easily change the order in which they run.

For example, you might clarify:

  • Only modify this component
  • Reuse existing utilities
  • Avoid changes to backend APIs

Fewer wasted edits, shorter feedback loops, and a conversation that stays on track.

In our app, when new styling guidance is introduced to enhance the hero card with a gold accent and shimmer effect, the agent revisits the existing CSS and continues the implementation without restarting the session.

Explore alternatives without losing context

There are often different ways to solve a problem or multiple design options. You could create multiple chat sessions, one for each variant, but that would mean you need to copy over the existing context and conversation history.

To make this experience easier, you can now fork a chat session. This creates a new, independent session that inherits the conversation history from the original session. The forked session is fully separate from the original, so changes in one session do not affect the other.

You can either type /fork and it will copy over the full conversation, or you can use the fork button at a specific checkpoint to fork the conversation up until that point.

In the demo below, /fork creates a parallel thread where a more minimal design direction is explored without affecting the original discussion.

Automate with hooks

Teams frequently rely on conventions, validations, or automated checks to maintain consistency.

Hooks execute deterministically at key lifecycle events, allowing teams to enforce policies and set guardrails that keep agent-driven changes aligned with project standards, rather than relying on repeated prompts.

For example, a team might automatically lint code before edits are applied, block changes to protected configuration files, or trigger a test suite whenever an agent modifies application logic.

This keeps agent-driven changes aligned with your project’s standards without requiring constant supervision.

The following demo shows a stop hook executing on session exit, detecting uncommitted changes and automatically committing and pushing them.

Agent extensibility

Agents are most useful when they integrate naturally into the tools and workflows you already rely on. These updates introduce a richer agent experience that closes the development loop, while skills provide reusable building blocks you can invoke on demand.

Run agent skills when you need them

Many development tasks repeat across sessions.

Writing tests, refactoring code, or reviewing changes often follows patterns you already understand.

Instead of rewriting instructions each time, you can invoke agent skills directly from chat using slash commands. Skills may come from built-in capabilities, extensions, or project-specific tooling.

Instead of prompting vaguely, you can intentionally invoke workflows.

For example:

  • /tests generates validation tests
  • /explain documents unfamiliar code
  • /fix targets a specific error

By default, available skills appear in the / menu, making them discoverable and easy to reuse across sessions.

The following video demonstrates a frontend design skill driving the workflow end to end, implementing a new UI component, integrating live data, and validating the result without leaving VS Code.

Validate changes without leaving the editor

Agents are already effective at generating and running unit tests to validate non-UI code changes.

Verifying frontend behavior, however, has often relied on manual testing or manual screenshot comparisons.

With browser agent tools, agents can now open and interact with the application directly in the integrated browser inside VS Code.

This allows the agent to implement a UI change, load the running application, inspect the result, and adjust the code if something doesn’t behave as expected.

Implementation, inspection, and validation now happen within the same workflow, helping you iterate quickly without leaving the editor.

In the example below, the integrated browser opens and follows the page navigation, so you can validate changes as you interact with the application.

Development often moves between the terminal and the editor.

That’s why the Copilot CLI is now integrated in VS Code, with native support including diff tabs, trusted folder sync, and right-click to send code snippets. You can manage the connection by running /ide.

The CLI and editor stay aligned, sharing context as work progresses.

In practice:

  • A CLI process generates changes
  • VS Code surfaces them as diffs
  • You review and approve modifications directly in the editor

Screenshot of the Copilot CLI screen with VS Code auto‑connect settings and a selected workspace.

The next step for agents in VS Code

Agents are becoming a natural part of everyday development. You shouldn’t have to adapt your workflow around them. They should adapt to the way you build.

With the February 2026 release (1.110), VS Code gives you more control over how agents behave. They fit into your tools more naturally and carry context across sessions.

We’re building this in the open. If you have feedback, ideas, or run into issues, open a discussion or file an issue in the VS Code repo or find us on social. We’d love to hear from you.

Want to see how these features can enhance your developer workflow?

Join us for our VS Code release livestream on March 19 at 8 AM PST. Turn on notifications!

Happy coding! 💙

Insiders (version 1.111)


VS Code Insiders banner

Last updated: March 5, 2026

These release notes cover the Insiders build of VS Code and continue to evolve as new features are added. To try the latest updates, download Insiders.
To read these release notes online, go to code.visualstudio.com/updates.

You can still track our progress in the Commit log and our list of Closed issues.

These release notes were generated using GitHub Copilot and might contain inaccuracies.

Happy Coding!


March 5, 2026

  • VS Code now recursively searches for *.instructions.md files in subdirectories under .github/instructions/, matching the behavior of Copilot CLI and web-based GitHub Copilot agents. Previously, only files in the root .github/instructions/ directory were discovered. #295944

  • You can now copy the name of an item in the Source Control Repositories view by using the context menu option Copy Stash Name, Copy Branch Name, or other. #289824

  • Custom agent frontmatter now supports agent-scoped chat hooks. These hooks only run when the custom agent is selected or invoked via runSubagent. #299337

  • A new /troubleshoot slash command injects agent mode event logs into the chat context. Use it to ask the agent which customizations are loaded, how many tokens are consumed, debug instructions, and more. #299344

  • CLI sessions now support folder and repository isolation, in addition to worktree isolation. #299376

March 4, 2026

  • AI CLI terminal profiles now get a dedicated group in the terminal dropdown, making them easier to discover instead of being listed among other profiles. #293554

March 3, 2026

  • MCP Apps now support file downloads. #298836

  • Add Ctrl+F5 keyboard shortcut for a page refresh in the integrated browser, instead of unexpectedly opening the browser debugger. #291219

March 2, 2026

  • Added OpenTelemetry instrumentation support for Copilot Chat, enabling observability and performance tracing. #298834

  • Chat tips now only appear when a single chat session or the welcome view is visible, avoiding tips in sessions where multiple chat editors are open. #297759

  • After a user acts on or dismisses a chat tip, no more tips are shown for the rest of that session. #297682

  • You can now Go to Definition on localization placeholder strings (for example, %config.settingName%) in package.json to jump directly to the corresponding entry in package.nls.json. #297496

  • Selecting an agent plugin in the Extensions view now opens a detail view with an uninstall button, rendered README, and a list of contributed features. #297246

March 1, 2026

  • Markdown tables in chat now render with a horizontal scrollbar and improved column widths. #265062

February 28, 2026

  • Full-width CJK punctuation characters now render with consistent widths. #242138

  • Add a new foldedLine unit to the cursorMove command, enabling vim-like cursor movement that treats each folded region as a single step. #81498

February 27, 2026

  • Adjusts chat accessibility announcements so the accessibility.verboseChatProgressUpdates setting is respected, reducing unintended speech output (e.g. auto-synthesized TTS) for in-progress “thinking”/progress content. #296720

  • The confirm button in the ask_questions tool’s multi-page UI is repositioned on the last page to prevent overlap with the next-page arrow. #292404

  • Theme token customization now supports relative numeric values for font sizes and weights, instead of requiring absolute numbers. #285891


We really appreciate people trying our new features as soon as they are ready, so check back here often and learn what’s new.

Visual Studio February Update – Visual Studio Blog


This month’s Visual Studio update continues our focus on helping you move faster and stay in flow, with practical improvements across AI assistance, debugging, testing, and modernization. Building on the momentum from January’s editor updates, the February release brings smarter diagnostics and targeted support for real-world development scenarios, from WinForms maintenance to C++ modernization.

All of the features highlighted are available in the Visual Studio 2026 Stable Channel as part of the February 2026 feature update (18.3). Please update to the latest version to try out these new features!

WinForms Expert Agent

The WinForms Expert agent provides a focused guide for handling key challenges in WinForms development. It covers several important areas:
  • Designer vs. regular code: Understand which C# features apply to designer-generated code and business logic.

  • Modern .NET patterns: Updated for .NET 8-10, including MVVM with Community Toolkit, async/await with proper InvokeAsync overloads, Dark mode with high-DPI support, and nullable reference types.
  • Layout: Advice on using TableLayoutPanel and FlowLayoutPanel for responsive, cross-device design.
  • CodeDOM serialization: Rules for property serialization and avoiding common issues with [DefaultValue] and ShouldSerialize*() methods.
  • Exception handling: Patterns for async event handlers and robust application-level error handling.

The agent serves as an expert reviewer for your WinForms code, providing comprehensive guidance on everything from naming controls to ensuring accessibility. The WinForms Expert agent is automatically activated and included in the system prompt when relevant.

Smarter Test Generation with GitHub Copilot

Visual Studio now includes intelligent test generation with GitHub Copilot, making it faster to create and refine unit tests for your C# code. This purpose-built workflow works seamlessly with xUnit, NUnit, and MSTest.

GitHub Copilot Chat pane in Visual Studio showing a new chat thread. The Copilot Chat welcome screen appears with a message about checking accuracy, a prompt asking ‘generate tests for my entire solution,’ and the selected model labeled Claude Haiku 4.5. The input box includes a reference button and test generation command.

Simply type @Test in GitHub Copilot Chat, describe what you want to test, and Copilot generates the test code for you. Whether you’re starting fresh or improving coverage on existing projects, this feature helps you write tests faster without leaving your workflow.

Slash Commands for Custom Prompts

Invoke your favorite custom prompts faster using slash commands in Copilot Chat. Type / and your custom prompts appear at the top of the list, marked with a bookmark icon for easy identification.

Copilot Chat slash command menu in Visual Studio showing available commands such as quality check, clear, explain, fix, and generate, with Agent mode enabled and Claude Sonnet 4.5 selected in the chat input area.

We’ve also added two additional commands:

/generateInstructions: Automatically generate a copilot-instructions.md file for your repository using project context like coding style and preferences

/savePrompt: Extract a reusable prompt from your current chat thread and save it for later use via / commands

These shortcuts make it easier to build and reuse your workflow patterns.

C++ App Modernization

GitHub Copilot app modernization for C++ is now available in Public Preview. It helps you update your C++ projects to use the latest versions of MSVC and resolve upgrade-related issues. You can find our user documentation on Microsoft Learn.

Split view in Visual Studio showing a Markdown file on the left and a rendered preview on the right with an Executive Summary and Key Findings for an MSVC Build Tools upgrade including errors and warnings.

DataTips in IEnumerable Visualizer

You can now use DataTips in the IEnumerable Visualizer while debugging. Just hover over any cell in the grid to see the full object behind that value, the same DataTip experience you’re used to in the editor or Watch window.

When you hover over a cell, a DataTip shows all the object’s properties in one place. This makes it much easier to debug collections with complex or nested data. Whether it’s a List<T> of objects or a dictionary with structured values, one hover lets you quickly inspect everything inside.

Visual Studio IEnumerable Visualizer showing the expression lpvm.Posts. A table displays one row with columns for PostViewModel properties, including Categories with a count of one, AllCategories, NewCategory, and AllowComments set to True. A tooltip shows a CategoryViewModel object with an option to view raw data.

Analyze Call Stack with Copilot

You can now Analyze Call Stack with Copilot to quickly understand what your app is doing when debugging stops. When you pause execution, select Analyze with Copilot in the Call Stack window. Copilot reviews the current stack and explains why the app isn’t progressing, whether the thread is waiting on work, looping, or blocked by something.

This makes the call stack more than just a list of frames. It becomes a helpful guide that shows what’s happening in your app so you can move faster toward the real fix.

Profiler agent with Unit Test support

The Profiler Agent (@profiler) now works with unit tests. You can use your existing tests to check performance improvements, making it easier to measure and optimize your code in more situations. The agent can discover relevant unit tests or BenchmarkDotNet benchmarks that exercise performance-critical code paths.

If no good tests or benchmarks are available, it automatically creates a small measurement setup so you can capture a baseline and compare results after changes. This unit-test-focused approach also makes the Profiler Agent useful for C++ projects, where benchmarks aren’t always practical, but unit tests often already exist.

GitHub Copilot Chat showing profiler suggestion to optimize code step identify scope message requesting permission to run CPU performance profiler with confirm and deny buttons and model selector visible

Faster and More Reliable Razor Hot Reload

Hot Reload for Razor files is now faster and more reliable. By hosting the Razor compiler inside the Roslyn process, edits to .razor files apply more quickly and avoid delays that previously slowed Blazor workflows. We also reduced the number of blocked edits, with more changes now applying without requiring a rebuild, including file renames and several previously unsupported code edits. When a rebuild is still required, Hot Reload can now automatically restart the app instead of ending the debug session, helping you stay in flow.

We are continuing to invest in features that help you understand, test, and improve existing code, not just write new code. Try these updates in the Visual Studio 2026 Stable Channel and let us know what is working well and where we can improve. Your feedback directly shapes what we build next.

Custom Agents in Visual Studio: Built in and Build-Your-Own agents


Agents in Visual Studio now go beyond a single general-purpose assistant. We’re shipping a set of curated preset agents that tap into deep IDE capabilities (debugging, profiling, testing), alongside a framework for building your own custom agents tailored to how your team works.

Each preset agent is designed around a specific developer workflow and integrates with Visual Studio’s native tooling in ways that a generic assistant can’t.

  • Debugger – Goes beyond “read the error message.” Uses your call stacks, variable state, and diagnostic tools to walk through error diagnosis systematically across your solution.
  • Profiler – Connects to Visual Studio’s profiling infrastructure to identify bottlenecks and suggest targeted optimizations grounded in your codebase, not generic advice.
  • Test – (when solution is loaded) Generates unit tests tuned to your project’s framework and patterns, not boilerplate that your CI will reject.
  • Modernize (.NET and C++ only) – Framework and dependency upgrades with awareness of your actual project graph. Flags breaking changes, generates migration code, and follows your existing patterns.

Access them through the agent picker in the chat panel or using ‘@’ in chat.

The presets cover the workflows we think matter most, but your team knows your workflow better than we do. Custom agents let you build your own using the same foundation: workspace awareness, code understanding, your preferred model, and the tools your prompts can access.

Where it gets powerful is MCP. You can connect custom agents to external knowledge sources (internal documentation, design systems, APIs, and databases) so the agent isn’t limited to what’s in your repo.

A few patterns we’re seeing from teams:

  • Code review that checks PRs against your actual conventions, connected via MCP to your style guide or ADR repository
  • Design system enforcement connected to your Figma files or component libraries to catch UI drift before it ships
  • Planning agents that help you think through a feature or task before any code is written: gathering requirements, asking clarifying questions, and building out a plan that you can hand off

The awesome-copilot repo has community-contributed agent configurations you can use as starting points.

Get started

Custom agents are defined as .agent.md files in your repository’s .github/agents/ folder:

your-repo/
└── .github/
    └── agents/
        └── code-reviewer.agent.md
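
As a rough sketch of what such a file might look like (the exact frontmatter fields are illustrative and may change during the preview, and docs/style-guide.md is a hypothetical path):

---
name: code-reviewer
description: Reviews changes against our team's conventions.
tools: ['codebase', 'search']   # tool names vary by platform; verify in Visual Studio
---

You are a code reviewer for this repository. Check changes against the
conventions in docs/style-guide.md and flag deviations with a short
explanation and a suggested fix.

The Markdown body below the frontmatter becomes the agent’s system prompt, which is where most of the customization happens.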

A few things to note:

  • This is a preview feature; the format of these files may change over time to support new capabilities
  • If you don’t specify a model, the agent uses whatever is selected in the model picker
  • Tool names vary across GitHub Copilot platforms; check the tools available in Visual Studio specifically to make sure your agent works as expected
  • Configurations from the awesome-copilot repo are a great starting point, but verify tool names before using them in VS

Tell us what you’re building

Share your configurations in the awesome-copilot repo or file feedback here.

asp.net – IIS express serves one .asmx service, but always returns 404 on another


I upgraded an old solution to Visual Studio 2015, and ran into trouble with one of the .asmx web services. I have a number of them in this solution, and IIS express loads and runs them all correctly, except one. This one service always returns a 404 if I try to load the .asmx URL in IIS express. It runs correctly in full-blown IIS.

None of the problems I’ve found online solve my situation. Here are the details:

  1. The web.config files of the working and non-working web services are identical.
  2. The IIS express/.csproj configuration is identical, except for the port and project names, and a few different assembly references.
  3. The projects are virtually identical as well, with only a single .asmx service file with a code behind .asmx.cs and .asmx.resx files.
  4. The C:\Users\[my username]\IIS Express\config\applicationhost.config doesn’t list any of the working or non-working services, so that can’t be the difference.
  5. Examining the complete trace from C:\Users\[my username]\IIS Express\TraceLogFiles\[service]\fr000040.xml for a request on working and non-working services, they are virtually identical right up to step 109, AspNetMapHandlerEnter. The working service goes right to AspNetMapHandlerLeave at step 110, whereas the non-working service sets a few cache headers and then at step 112 sets the status code to 404 with a “warning” label on it.

I’m not sure what’s going on here or what I can do next, so any suggestions would be much appreciated.

How to configure Visual Studio 2022 to sign Git commits for GitHub using SSH or GPG keys?


I am having trouble configuring Visual Studio 2022 to sign Git commits for GitHub using SSH or GPG keys.

I tried setting up GPG signing using the following commands:

git config --global user.signingkey 3AA5C34371567BD2
git config --global commit.gpgsign true

However, when I try to make a commit in Visual Studio, I get the following error:

Your Git hook is not supported. This is probably because the first line is not "#!/bin/sh".

I checked all the files in the .git/hooks directory and they all have #!/bin/sh as the first line.

If I don’t set git config --global commit.gpgsign true, the commit goes through but on GitHub it is marked as unverified.

How can I properly configure Visual Studio to sign Git commits for GitHub using SSH or GPG keys? Any help would be greatly appreciated.

Update:

I found a solution for signing commits with SSH in Visual Studio 2022. Here are the steps I followed:

  1. Edit the global git config file using the command git config --global --edit.
  2. Add the following section:

    [gpg "ssh"]
        sshCommand = C:/Windows/System32/OpenSSH/ssh.exe

  3. Run the following commands:

    git config --global gpg.format ssh
    git config --global user.signingkey /C:/Users/Admin/.ssh/id_signinged25519.pub
    git config --global commit.gpgsign true

Now, when I push from Visual Studio, the commits have the verified label on GitHub.
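
One related detail worth knowing (not required for GitHub’s “Verified” badge, which only needs the public key registered as a signing key in your GitHub settings): to verify SSH-signed commits locally, git also needs an allowed-signers file. A minimal sketch, with example paths:

```shell
# Tell git which public keys are trusted when verifying SSH signatures.
# The path is an example; any location works.
git config --global gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers

# The allowed_signers file contains one "<email> <key-type> <base64-key>" per line, e.g.:
#   you@example.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA...
# After that, `git log --show-signature` can report a good signature locally.
```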

c# – How to activate installation of an apk through my MAUI app


.NET 9.0 MAUI – 2026

AndroidManifest.xml

<!-- FileProvider used for the in-app APK update -->
<provider
  android:name="androidx.core.content.FileProvider"
  android:authorities="com.companyname.xxxx.fileprovider"
  android:exported="false"
  android:grantUriPermissions="true">
  <meta-data
        android:name="android.support.FILE_PROVIDER_PATHS"
        android:resource="@xml/file_path"/>
</provider>
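
One detail the snippet above doesn’t show: on Android 8 (API 26) and later, installing packages from inside an app also requires this permission in AndroidManifest.xml, or the CanRequestPackageInstalls() check used later will always return false:

<uses-permission android:name="android.permission.REQUEST_INSTALL_PACKAGES" />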

Add /Android/Resources/xml/file_path.xml (it must be an .xml resource, since the provider references @xml/file_path):

<paths>  
    <files-path name="my_files" path="." />    
    <cache-path name="my_cache" path="." />   
    <external-path name="my_external" path="." />    
    <root-path name="root" path="." />
</paths>

Add a class under the Android platform folder:
//https://learn.microsoft.com/en-us/answers/questions/2141510/how-to-update-version-in-apk-maui-application-auto

using Android.Content;
using Android.OS;
using Android.Provider;
using AndroidX.Core.Content;
using Microsoft.Maui.ApplicationModel;
using Uri = Android.Net.Uri;

public static class apkInstallHelper
{
    private const int RequestCode = 200;

    public static void InstallAPK(string filepath)
    {
        // On Android 11+ the user must first grant the "install unknown apps" permission
        if (Build.VERSION.SdkInt >= BuildVersionCodes.R &&
            !Platform.CurrentActivity.PackageManager.CanRequestPackageInstalls())
        {
            Intent intent = new Intent(Settings.ActionManageUnknownAppSources);
            intent.SetData(Uri.Parse("package:" + Platform.CurrentActivity.PackageName));
            Platform.CurrentActivity.StartActivityForResult(intent, RequestCode);
        }

        installApk(filepath);
    }

    private static void installApk(string filepath)
    {
        Java.IO.File file = new Java.IO.File(filepath);

        // The authority string must match android:authorities in AndroidManifest.xml
        Intent intent = new Intent(Intent.ActionView);
        Uri apkUri = FileProvider.GetUriForFile(Platform.CurrentActivity, "com.companyname.micropresdev.fileprovider", file);

        intent.SetDataAndType(apkUri, "application/vnd.android.package-archive");
        intent.SetFlags(ActivityFlags.NewTask | ActivityFlags.GrantReadUriPermission);
        Platform.CurrentActivity.StartActivity(intent);
    }
}

To trigger the update:

{
    string urlS3 = "https://your server/update.apk";
    string caminhoPrivado = Path.Combine(FileSystem.AppDataDirectory, "update.apk");

    try
    {
        using var client = new HttpClient();

        var data = await client.GetByteArrayAsync(urlS3);
        await File.WriteAllBytesAsync(caminhoPrivado, data);

        MicroPresDEV.Platforms.Droid.apkInstallHelper.InstallAPK(caminhoPrivado);
    }
    catch (Exception ex)
    {
        await Shell.Current.DisplayAlert("Erro S3", "Falha no download: " + ex.Message, "OK");
    }
}

visual studio – One .hlsl to many .cso


This is a Visual Studio problem, not an MSBuild problem, so the solution is just to factor out the duplicate items into a separate MSBuild file.

.vcxproj

<Import Project="MyShader.targets" />

MyShader.targets

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <FxCompile Include="MyShader.hlsl">
      <PreprocessorDefinitions>USE_X=1</PreprocessorDefinitions>
      <ObjectFileOutput>MyShader_USEX.cso</ObjectFileOutput>
    </FxCompile>

    <FxCompile Include="MyShader.hlsl">
      <PreprocessorDefinitions>USE_Y=1</PreprocessorDefinitions>
      <ObjectFileOutput>MyShader_USEY.cso</ObjectFileOutput>
    </FxCompile>

    <FxCompile Include="MyShader.hlsl">
      <PreprocessorDefinitions>USE_X=1;USE_Y=1;USE_Z=1</PreprocessorDefinitions>
      <ObjectFileOutput>MyShader_XYZ.cso</ObjectFileOutput>
    </FxCompile>
  </ItemGroup>
</Project>

I have a Python script to generate the .targets file.
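
A minimal sketch of what such a generator might look like (the permutation names and defines mirror the example above; the exact shape of the real script is an assumption):

```python
# Hypothetical sketch: emit a .targets file with one FxCompile item per
# shader permutation, matching the hand-written MyShader.targets above.
from xml.sax.saxutils import escape

# Map of output name -> preprocessor defines (examples, not exhaustive)
PERMUTATIONS = {
    "MyShader_USEX": "USE_X=1",
    "MyShader_USEY": "USE_Y=1",
    "MyShader_XYZ":  "USE_X=1;USE_Y=1;USE_Z=1",
}

def make_targets(shader: str, perms: dict) -> str:
    """Build the MSBuild XML for all permutations of one .hlsl file."""
    items = []
    for name, defines in perms.items():
        items.append(
            f'    <FxCompile Include="{shader}">\n'
            f'      <PreprocessorDefinitions>{escape(defines)}</PreprocessorDefinitions>\n'
            f'      <ObjectFileOutput>{name}.cso</ObjectFileOutput>\n'
            f'    </FxCompile>'
        )
    return (
        '<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">\n'
        '  <ItemGroup>\n'
        + "\n".join(items) +
        '\n  </ItemGroup>\n'
        '</Project>\n'
    )

if __name__ == "__main__":
    with open("MyShader.targets", "w") as f:
        f.write(make_targets("MyShader.hlsl", PERMUTATIONS))
```

Rerunning the script after adding a permutation regenerates the whole file, which keeps the .vcxproj itself untouched.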


Edit:

My project didn’t load when I opened up Visual Studio this morning 🤦‍♂️. I’ve opened a ticket:

https://developercommunity.visualstudio.com/t/Cannot-load-project-with-an-import-with/11045727

I’m just going to have my python script generate a file for each permutation.

visual studio – Conditionally include .cpp file per Configuration|Platform and hide it completely from Solution Explorer


I’m using MSVC with a .vcxproj project.

I want a specific file (xyz.cpp) to:

  • Be included only for a specific Configuration|Platform (e.g. Debug|x64)

  • Be completely absent in other configurations

  • Not appear in Solution Explorer

  • Not be searchable in the project when not active

I do not want to use <ExcludedFromBuild> because the file still shows up in the editor and search, which creates noise.

So far I tried conditioning the ItemGroup in the .vcxproj:

<ItemGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
  <ClCompile Include="path\to\xyz.cpp" />
</ItemGroup>

However:

  • Switching configurations does not update the UI

There are no other explicit references to xyz.cpp in the project file (as far as I can see).

Question:

What is the correct way in a C++ .vcxproj project to conditionally include a source file so that it is completely hidden from the project (including Solution Explorer) when the configuration/platform condition does not match?

Is this even supported reliably by the VC project system?