Bringing together two very different ways of building AI applications is a challenging task. Has Microsoft bitten off more than it can chew?
Earlier this year, Microsoft said it would bring its two different agent development platforms together, merging Semantic Kernel and AutoGen. It has now launched that merged platform as the Microsoft Agent Framework. At the time of the announcement, I noted that it made sense; Semantic Kernel was an in-production AI workflow engine that had evolved to support new agentic models, while AutoGen came from a research background and offered new ways to build multi-agent applications without writing code.
Both tools were open source, so it's not surprising that Microsoft Agent Framework is too, developed in the open on GitHub, with sample code ready for experiments in your own systems or in a ready-to-try Codespace virtual development environment. This approach has allowed it to quickly adopt new agent development practices as they've become popular, adding support for Model Context Protocol (MCP), Agent2Agent, and more, as well as allowing you to choose your own AI models and providers.
The agent of workflow
Workflow is still at the heart of the new framework. Building on the strengths of the Semantic Kernel and AutoGen agent implementations, the new framework offers support for workflow orchestration and agent orchestration. Workflow orchestration builds on Semantic Kernel and implements existing business processes and logic, calling a chain of agents as needed, constructing prompts using predefined formats, and populating values using results from earlier calls. Meanwhile, agent orchestration uses AutoGen's LLM-driven approach to dynamically create chains of agents based on open-ended prompts. Both approaches have their role to play (and each can be embedded in the other).
Agent orchestration is probably the most interesting part of this first release, as it offers several orchestration models suited to different types of workflow. The simplest option is sequential orchestration: Agents are called one at a time, with the response from the first agent used to build the prompt for the next. More complex scenarios can use concurrent orchestration, where the initial query is sent to several agents at once; they work in parallel, and the workflow moves on to its next phase once all the agents have responded. Many of these orchestration models are drawn directly from traditional workflow processes, much like those used by tools such as BizTalk. The remaining orchestration models are new and depend on the behavior of LLM-based agents.
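In the .NET preview, these two basic patterns are expressed through a workflow builder. Treat the following as a sketch: the builder methods (BuildSequential, BuildConcurrent) follow current preview samples and may change, and the placeholder agents (researcher, factChecker) are hypothetical.

```csharp
using Microsoft.Agents.AI;
using Microsoft.Agents.AI.Workflows;

// Sketch: 'researcher' and 'factChecker' stand in for AIAgent instances
// you've already created from a chat client.

// Sequential orchestration: one agent's response feeds the next prompt.
var sequential = AgentWorkflowBuilder.BuildSequential(researcher, factChecker);

// Concurrent orchestration: both agents receive the same input and run
// in parallel; the workflow advances once all of them have responded.
var concurrent = AgentWorkflowBuilder.BuildConcurrent(new[] { researcher, factChecker });
```

The point of the builder is that the orchestration shape is declared up front, while the agents themselves stay interchangeable.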
Orchestration in a world of language models
The first new model is group chat orchestration. The agents in a process can communicate with each other, sharing results and updating based on that data until they converge on a single response. The second, hand-off orchestration, is a more evolved version of sequential orchestration in which not only the data passed between agents is updated but also the prompts, responding to changes in the context of the workflow. Finally, there's support for what's being called "magentic" workflow. This implements a supervisory manager agent that coordinates a subset of agents, orchestrating them as needed and bringing in humans where necessary. This last option is intended for complex problems that may not have been considered for process automation using existing non-AI techniques.
These approaches are quite different from how we've built workflows in the past, and it's important to experiment before deploying them. You'll need to be careful with base prompts, ensuring that operations terminate and that if an answer or a consensus can't be found, your agents generate a suitable error message rather than a plausible-looking but wrong output.
Microsoft intends its Agent Framework to act as a bridge between its various agent products, from the low-code Copilot Studio to the high-end Azure AI Foundry (which provides a host for Fabric's data agents), as well as to agents built with Microsoft 365's tools.
Agent-powered business processes, anywhere
You're not limited to Azure; agents can run anywhere, from on-premises servers to any public cloud, with support for container-based portability. The same goes for connections to services and data. One key element of Microsoft's approach to agent development is its support for OpenAPI definitions: If a service provides an API description using this standard (or its predecessor, Swagger), the framework can use it to call the API as part of an agent.
Along with the Agent Framework comes an updated version of Visual Studio Code's AI Toolkit. This is where things get very interesting. If you've got a PC with an NPU (for example, one of the Arm-powered Copilot+ PCs, like recent Surfaces), you can write agents that use local NPU-ready small language models (SLMs) rather than cloud-hosted LLMs.
For now, there aren't many agent-ready SLMs available with support for tool integration, but the fact that Microsoft is using this approach with its Mu SLM to provide a Settings agent in Windows should encourage the release of NPU-optimized models with support for common runtimes. Hopefully, the public availability of tools like this will add pressure on vendors and encourage Microsoft to give Mu a public release.
Bringing old code forward
Even as Agent Framework brings new capabilities to Microsoft's AI orchestration platforms, it still needs to support migrating existing code from both Semantic Kernel and AutoGen. For C# code running on Semantic Kernel, you need to move to new .NET namespaces, including the Microsoft.Extensions.AI core building blocks, and make some changes to how your code works with LLMs and with plug-ins. It's important to remember that you're now orchestrating agents and working with external tools via APIs and protocols like MCP. In practice, plug-ins are replaced by tools (or exposed as MCP servers), and the role of the core Kernel is now played by a set of agents.
It's not an instant migration for either Semantic Kernel or AutoGen, but it is a credible pathway. It's also essential: Although the existing platforms will still get support, future development is focused on the new tools. Bringing two platforms into one is a logical choice; the field continues to move fast, and it's clear that using AI-powered agents to manage context in long business processes is becoming the primary enterprise use case for both LLMs and SLMs, beyond managing natural language interfaces.
Building agents and workflows
So, what is it like to build a new Agent Framework application? Working in .NET, you'll need .NET 9 or later, as well as access to models, either local or hosted. You can use Azure AI Foundry models or GitHub Models for a quick start, installing the Agent Framework using the .NET CLI. Most of what you need is packaged in Microsoft.Agents.AI, with Microsoft.Extensions.AI managing access to models.
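Project setup is a couple of CLI commands. The package name comes from the text above; because the framework ships as a preview package, the exact flags and versions may differ.

```shell
# Create a console project targeting .NET 9 or later
dotnet new console -n AgentDemo
cd AgentDemo

# Add the Agent Framework (preview, so the --prerelease flag may be needed)
dotnet add package Microsoft.Agents.AI --prerelease
```

Microsoft.Extensions.AI comes in as a dependency, along with whichever provider package matches your chosen model host.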
Building a new agent is straightforward. Start with a chat client interface that manages connections to your chosen model. This can then be used to create an agent, providing a name and a base prompt that serves as the agent's instructions. Finally, you can run the agent you've created, using an asynchronous call.
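Those three steps, assuming an IChatClient already wired up for your model provider, look roughly like this. CreateAIAgent and RunAsync follow the framework's preview samples, so treat the exact signatures as assumptions.

```csharp
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

// 'chatClient' is an IChatClient configured for your chosen model
// (Azure AI Foundry, GitHub Models, or a local SLM).
AIAgent agent = chatClient.CreateAIAgent(
    name: "Summarizer",
    instructions: "You summarize technical documents in three sentences.");

// Run the agent asynchronously with a user prompt.
var response = await agent.RunAsync("Summarize the attached release notes.");
Console.WriteLine(response);
```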
The client interface is key to standardizing agent development, as it provides the necessary abstractions for working with any model. You can swap between cloud and local models, or between Azure OpenAI and GitHub Models, without having to change your agent code. Once you have an agent interface, you can reuse it with different instructions as part of a workflow.
This is the point where the Agent Framework's orchestration tools come into action. The AgentWorkflowBuilder defines the orchestration pattern and the agents it will coordinate, so a sequential workflow takes the output of one agent and feeds it into the next. Agents can then be attached to tools, which add specific structure to inputs and outputs, or to external services and sources, using existing MCP servers or other APIs.
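Tools build on Microsoft.Extensions.AI's function abstraction: a plain .NET method, described with attributes, becomes something the model can call. The sketch below assumes the preview API; the tools parameter on CreateAIAgent and the GetOrderStatus helper are illustrative, not definitive.

```csharp
using System.ComponentModel;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

// A plain method becomes a tool the model can invoke; the Description
// attribute tells the model what the tool does.
[Description("Gets the current status of an order.")]
static string GetOrderStatus(string orderId) => $"Order {orderId}: shipped";

// 'chatClient' is an IChatClient configured for your model provider.
AIAgent support = chatClient.CreateAIAgent(
    name: "Support",
    instructions: "Answer order queries using the tools provided.",
    tools: [AIFunctionFactory.Create(GetOrderStatus)]);
```

The same pattern extends to MCP: instead of local functions, the agent's tool list is populated from an MCP server's advertised capabilities.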
One useful aspect of this approach is that you're building .NET code, so it hosts, runs, and deploys exactly the same way as your existing code. You can even take advantage of approaches like .NET Aspire to build it into distributed applications that work with Azure and other services.
It's early days for Microsoft's Agent Framework, but it does seem that Microsoft has managed to pull off the integration of two very different approaches to agent orchestration. You get the best of both worlds, and a pathway for experimentation that lets you build what's right for your business. With an evolving ecosystem and developing standards, it's a pragmatic approach that helps future-proof a growing enterprise AI platform.


