azure ai

AI for Personalized Government Services: Building Trust and Inclusivity in Cities
Cities today are under unprecedented pressure. Residents expect services that are fast, accessible, and tailored to their needs, yet many local governments still rely on fragmented systems and manual processes that create long queues and frustration. In a digital-first society, these gaps are no longer acceptable. Artificial intelligence (AI) offers a transformative opportunity to close them, enabling governments to deliver personalized, proactive, and inclusive citizen experiences.

On December 4, Smart Cities World Connect will host a Trend Report Panel Discussion bringing together city leaders, technology experts, and public sector innovators to explore how AI can reshape the citizen experience. This virtual event will highlight practical strategies for responsible AI adoption and showcase lessons from pioneering cities worldwide. Register today: Trend Report Panel Discussion (4 Dec)

Why AI Matters for Cities

Urban populations are growing, budgets remain tight, and climate and social pressures are mounting. Against this backdrop, AI is emerging as a critical enabler for smarter governance. By integrating AI into service delivery, cities can:

- Improve wait times through AI-powered assistants and multilingual agents.
- Deliver proactive services using unified data and predictive analytics.
- Ensure equity by extending digital access to underserved communities.
- Build trust through transparent governance and responsible AI deployment.

These capabilities are no longer theoretical. Cities from Abu Dhabi to Singapore are already embedding AI into core operations—modernizing citizen portals, automating case management, and using digital twins to plan with foresight. The panel will explore five essential areas for AI-driven transformation:

1. Smarter Citizen Engagement

AI-powered virtual assistants and chatbots can handle routine inquiries, guide residents through complex processes, and provide real-time updates—across multiple languages and platforms.
This not only reduces queues but also makes services more inclusive for diverse communities.

2. Proactive, Personalized Services

Unified data platforms and predictive analytics allow governments to anticipate citizen needs, whether it's notifying residents about benefit eligibility or streamlining license renewals. By moving from reactive to proactive service delivery, cities can improve satisfaction and reduce backlogs.

3. Equity at the Core

Efficiency must never come at the expense of fairness. AI-enabled systems should be designed to reach underserved populations, bridging the digital divide and ensuring that innovation benefits all residents, not just the most connected.

4. Governance and Trust

Responsible AI adoption requires robust frameworks for transparency, data protection, and ethical oversight. Cities must implement clear governance models, conduct algorithmic audits, and engage communities in co-design to maintain public trust.

5. Practical Steps for Integration

From piloting high-impact use cases to building cross-department governance and investing in workforce training, the discussion will outline actionable steps for scaling AI responsibly. Partnerships with industry and academia will also play a vital role in accelerating adoption.

Lessons from Frontier Cities

Several global examples illustrate what's possible:

- Manchester City Council is advancing smart urban living through AI-driven planning and operations, using integrated data platforms and predictive analytics to optimize city services, improve sustainability, and enhance citizen engagement across transport, housing, and community programs.
- Abu Dhabi's TAMM platform, powered by Microsoft Azure OpenAI, delivers nearly 950 government services through a single digital hub, simplifying processes and enabling personalized interactions.
- Singapore's Virtual Singapore project uses AI and digital twins to simulate urban scenarios, helping planners make evidence-based decisions on mobility, safety, and climate resilience.
- Bangkok's Traffy Fondue civic platform leverages AI to categorize citizen reports and route them to the right department, reducing administrative overhead and improving response times.

These cases demonstrate that AI is not just a tool for efficiency; it's a catalyst for inclusion, resilience, and trust.

What Attendees Will Gain

By joining the December 4 session, city leaders will leave with:

- A clear understanding of AI's transformative potential for improving citizen satisfaction and reducing service backlogs.
- Real-world examples of successful deployments in citizen portals, case management, and service automation.
- Insights into ethical and regulatory considerations critical to building trust in personalized government services.
- Guidance on preparing organizations to adopt and scale AI effectively.

Looking Ahead

Cities that thrive in the coming decade will be those that combine strategic vision with disciplined, trustworthy use of technology. AI can help governments deliver services that are smarter, more inclusive, and more responsive to the needs of every resident, but success depends on strong governance, cross-sector collaboration, and a commitment to equity.

To learn more, register for the Trend Report Panel Discussion on December 4.

Ushering in the next era of agentic AI with tools in Microsoft Foundry
Models are limited to their own knowledge and reasoning capabilities—they can't access real-time data or perform actions on their own. Tools are what make agents truly powerful. By connecting agents to live information, enabling automation, and integrating with the apps and services you use every day, tools transform agents from passive responders into active problem-solvers. With tools, agents can deliver faster, smarter, and more connected experiences that drive real results.

Microsoft Foundry is now your central hub for discovering, testing, and integrating powerful AI tools—designed to accelerate every stage of your development journey. Whether you're prototyping new ideas, optimizing production workflows, or extending the capabilities of your AI solutions, Microsoft Foundry puts everything you need at your fingertips. We are excited to announce the following capabilities to empower your experience of building agents with tools:

- Discover from Foundry Tools (preview): Browse a growing, curated, enterprise-ready list of 1,400+ Microsoft and partner-provided tools, covering categories from databases to developer tools, analytics, and more.
- Build your organization's tool catalog (preview): Build and manage a tool catalog for your own enterprise with your private and custom tools.
- Enhanced enterprise support: Connect MCP servers and A2A endpoints with comprehensive authentication support via OAuth identity passthrough, Microsoft Entra Agent Identity, Microsoft Entra Managed Identity, and more. Govern your tools via AI Gateway (preview) with customizable policies, and leverage Azure Policy to enforce security behaviors.
- Bring A2A endpoints to Foundry Agent Service (preview): Via the A2A tool, you can easily bring an A2A endpoint to Foundry Agent Service with comprehensive authentication support.

Foundry Tools (preview)

Building AI agents isn't just about intelligence—it's about action.
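Turning a model into an active problem-solver, as described above, comes down to a simple control loop: the model emits a tool call, the runtime executes it, and the observation is fed back into the model's context until a final answer emerges. Here is a minimal, framework-agnostic sketch of that loop; the names (`ToolCall`, `registry`, `run_agent`) are illustrative stand-ins, not Foundry or Agent Framework APIs:

```python
# Minimal tool-calling loop. All names here are illustrative only,
# not part of any Foundry or Microsoft Agent Framework API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolCall:
    name: str
    args: dict

# A registry maps tool names to ordinary functions the agent may invoke.
registry: dict[str, Callable[..., str]] = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def run_agent(model_step: Callable[[list], "ToolCall | str"], max_turns: int = 5) -> str:
    """Loop until the model returns a final answer instead of a tool call."""
    history: list = []
    for _ in range(max_turns):
        step = model_step(history)
        if isinstance(step, str):                  # model produced a final answer
            return step
        result = registry[step.name](**step.args)  # execute the requested tool
        history.append((step, result))             # feed the observation back
    return "max turns exceeded"

# A stub "model" that calls one tool, then answers using its result.
def stub_model(history):
    if not history:
        return ToolCall("get_weather", {"city": "Seattle"})
    return f"Answer based on: {history[-1][1]}"

print(run_agent(stub_model))  # Answer based on: Sunny in Seattle
```

In a real deployment the stub model is an LLM and the registry entries are the curated Foundry Tools; the loop shape stays the same.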
To deliver real business value, agents need to connect with the systems where work happens, access enterprise data securely, and orchestrate workflows across tools your teams already use. That's where Foundry Tools in Microsoft Foundry come in. With 1,400+ officially curated tools from Microsoft and trusted partners, Foundry Tools give developers and enterprises everything they need to build agents that are powerful, integrated, and ready for scale.

"Celonis and Microsoft are creating the tech stack for the AI-driven, composable enterprise. By bringing the Celonis MCP Server into the Foundry Tool Catalog, we're enabling organizations to infuse real-time process intelligence directly into their AI workflows. This integration empowers AI agents to understand business processes in context and take intelligent action through scalable, process-aware automation. Together we're helping customers transform and continuously improve operations, delivering meaningful ROI." — Dan Brown, Chief Product Officer, Celonis

"Integrating Sophos Intelix with Foundry brings threat intelligence directly into the heart of security workflows. By embedding real-time file, URL, and IP reputation insights into AI agents built in Foundry, we're enabling analysts to make faster, smarter decisions without leaving their existing tools. This collaboration with Microsoft transforms incident response by combining domain-specific cybersecurity expertise with the power of AI, helping security teams stay ahead of evolving threats." — Simon Reed, Chief Research and Scientific Officer, Sophos

"As AI agents reshape enterprise workflows, organizations increasingly need to connect data across multiple platforms that have historically operated in isolation," said Ben Kus, Chief Technology Officer at Box.
"By integrating with Microsoft Foundry, Box is deepening its partnership with Microsoft to enable developers to build sophisticated agents on Azure that can intelligently leverage customers' enterprise content in Box. Instead of relying on complex, one-off integrations to access proprietary data, the Box MCP Server provides a standardized bridge, empowering the use of interoperable agents while maintaining Box's enterprise-grade security and permissions."

We are excited to bring tools in different categories to Foundry Tools:

Databases: Fuel Intelligent Decisions

Agents thrive on data. With tools connecting to data in Azure Cosmos DB, Azure Databricks Genie, Azure Managed Redis, Azure Database for PostgreSQL, Azure SQL, Elastic, MongoDB, and Pinecone, your agents can query, analyze, and act on your own data—powering smarter decisions and automated insights. All through natural language, without compromising enterprise security or compliance.

Agent 365 MCP Servers: Where Work Happens

Starting at Ignite this year, we are excited to bring Agent 365 MCP servers to select Frontier customers. Agent 365 MCP servers are the backbone of enterprise-grade agentic automation, seamlessly connecting Outlook Calendar, Outlook Email, SharePoint, Teams, and a growing suite of productivity and collaboration tools to digital worker agents built in Foundry and other leading platforms. With the Agent 365 Tooling Gateway, agent builders gain secure, scalable access to certified MCP servers—each designed to unlock rich, governed interactions with core business data while enforcing IT policies and compliance at every step. Whether orchestrating meetings, managing documents, or collaborating across channels, agents leverage these servers to deliver intelligent workflows with built-in observability, audit trails, and granular policy enforcement.
This unified approach fulfills the Agent 365 promise: empowering organizations to innovate confidently, knowing every agent action is traceable, compliant, and ready to scale across the enterprise.

Developer Tools: Build Faster, Deploy Smarter

Accelerate your agent development lifecycle with tools like GitHub, Vercel, and Postman. From version control to API testing and front-end deployment, these integrations help you ship high-quality experiences faster.

Custom Tools from Azure Logic Apps Connectors: Automate at Scale

Agents often need to orchestrate multi-step workflows across diverse systems. With hundreds of Azure Logic Apps connectors, you can automate approvals, sync data, and integrate SaaS apps without writing custom code.

Build your organization's tool catalog (preview)

In large enterprises, the teams that build APIs and tools are often separate from those that consume them to create AI agents, leading to friction, delays, and governance challenges. The private tool catalog, powered by Azure API Center, solves this by providing a single, secure hub where internal tools are published, discovered, and managed with confidence. Key features include:

- Centralized publishing with metadata, authentication profiles, and version control.
- Easy discovery for agent developers, reducing integration time from weeks to minutes.

With this organizational catalog, enterprises can turn internal capabilities into enterprise-ready AI solutions, accelerating innovation while maintaining control.

Enhanced Enterprise Support for Tools

As enterprises build AI agents that connect to a growing universe of APIs and agent endpoints, robust authentication and governance are non-negotiable. Microsoft Foundry delivers comprehensive support for tools and protocols such as Model Context Protocol (MCP), A2A, and OpenAPI, ensuring every integration meets your organization's security and compliance standards.
Flexible Authentication for Every Enterprise Scenario

Foundry supports a full spectrum of authentication methods for MCP servers and A2A endpoints—including key-based, Microsoft Entra Agent Identity, Microsoft Entra Foundry Project Managed Identity, and OAuth Identity Passthrough. This flexibility means you can choose shared authentication for organization-wide access, or individual authentication to persist user context and enforce least-privilege access. For example, OAuth Identity Passthrough allows users to sign in to the MCP server and grant the agent access to their credentials, while Microsoft Entra Agent Identity and Managed Identity enable seamless, secure service-to-service connections.

AI Gateway: Governance and Policy Enforcement (preview)

For advanced governance, Foundry integrates with the AI Gateway powered by Azure API Management (APIM). Once an AI Gateway is integrated with the Foundry resource, all eligible MCP servers are automatically governed by the gateway with admin-configured custom policies: tool traffic is routed through the AI Gateway, where enterprise-grade policies are enforced at runtime, without requiring any manual configuration from developers. The AI Gateway enforces powerful enterprise controls, including:

- Authentication and authorization: Enforce OAuth, subscription key, or IP-based authentication, with token validation and RBAC controls.
- Traffic management: Apply quotas, rate limits, and throttling to control usage and prevent abuse.
- Security and compliance: Add data loss prevention, request/response validation, and custom policies for sensitive workloads.
- Observability: Capture unified telemetry for every tool invocation, with integrated logging and metrics through Azure Monitor and Foundry analytics.
- Policy management: Admins can manage and update policies in the Azure API Management portal.
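To make the traffic-management controls above concrete, the sketch below shows a token-bucket rate limiter of the kind a gateway applies per client before forwarding a tool invocation: a burst allowance that refills over time, with excess calls rejected. This is an illustration of the policy concept only, not Azure API Management's actual implementation:

```python
import time

class TokenBucket:
    """Allow up to `capacity` calls in a burst, refilled at `rate` tokens/second."""
    def __init__(self, capacity: int, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # the gateway would return 429 Too Many Requests here

# One bucket per client: a 3-call burst, refilled at 1 call per second.
bucket = TokenBucket(capacity=3, rate=1.0)
results = [bucket.allow() for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

In APIM the equivalent behavior is declared as a policy rather than written in code; the point here is only the runtime effect a quota or rate-limit policy has on tool traffic.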
A2A tool in Foundry Agent Service (preview)

The Foundry Agent Service introduces a powerful pattern: you can add an A2A-compatible agent to your Foundry agents simply by connecting it via the A2A tool. With A2A tools, your Foundry agent can securely invoke another A2A-compatible agent's capabilities—enabling modular, reusable workflows across teams and business units. This approach leverages comprehensive authentication options, including OAuth passthrough and Microsoft Entra, to ensure every agent call is authorized, auditable, and governed by enterprise policy.

Connecting agents via A2A tool vs. multi-agent workflow

- Via A2A tool: When Agent A calls Agent B via a tool, Agent B's answer is passed back to Agent A, which then summarizes the answer and generates a response to the user. Agent A retains control and continues to handle future user input.
- Via workflow: When Agent A calls Agent B via Workflow or another multi-agent orchestration, the responsibility of answering the user is completely transferred to Agent B. Agent A is effectively out of the loop. All subsequent user input will be answered by Agent B.

Get started today

Start building with tools for intelligent agents today with Microsoft Foundry. If you're attending Microsoft Ignite 2025, or watching on-demand content later, be sure to check out these sessions:

- Innovation Session: Build & Manage AI Apps with Your Agent Factory
- AI agents in Microsoft Foundry, ship fast, scale fearlessly
- AI powered automation & multi-agent orchestration in Microsoft Foundry
- AI builder's guide to agent development in Foundry Agent Service

To learn more, visit Microsoft Learn and explore resources including AI Agents for Beginners, Microsoft Agent Framework, and course materials that help you build and operate agents responsibly.

Expanded Models Available in Microsoft Foundry Agent Service
Announcement Summary

Foundry Agent Service now supports an expanded ecosystem of frontier and specialist models:

- Access models from Anthropic, DeepSeek AI, Meta, Microsoft, xAI, and more.
- Avoid model lock-in and choose the best model for each scenario.
- Build complex, multimodal, multi-agent workflows at enterprise scale.

From document intelligence to operational automation, Microsoft Foundry makes AI agents ready for mission-critical workloads.

Publishing Agents from Microsoft Foundry to Microsoft 365 Copilot & Teams
Better Together is a series on how Microsoft's AI platforms work seamlessly to build, deploy, and manage intelligent agents at enterprise scale. As organizations embrace AI across every workflow, Microsoft Foundry, Microsoft 365, Agent 365, and Microsoft Copilot Studio are coming together to deliver a unified approach—from development to deployment to day-to-day operations. This three-part series explores how these technologies connect to help enterprises build AI agents that are secure, governed, and deeply integrated with Microsoft's product ecosystem.

Series Overview

- Part 1: Publishing from Foundry to Microsoft 365 Copilot and Microsoft Teams
- Part 2: Foundry + Agent 365 — Native Integration for Enterprise AI
- Part 3: Microsoft Copilot Studio Integration with Foundry Agents

This blog focuses on Part 1: Publishing from Foundry to Microsoft 365 Copilot—how developers can now publish agents built in Foundry directly to Microsoft 365 Copilot and Teams in just a few clicks.

Build once. Publish everywhere.

Developers can now take an AI agent built in Microsoft Foundry and publish it directly to Microsoft 365 Copilot and Microsoft Teams in just a few clicks. The new streamlined publishing flow eliminates manual setup across Entra ID, Azure Bot Service, and manifest files, turning hours of configuration into a seamless, guided flow in the Foundry Playground.

Simplifying Agent Publishing for Microsoft 365 Copilot & Microsoft Teams

Previously, deploying a Foundry AI agent into Microsoft 365 Copilot and Microsoft Teams required multiple steps: app registration, bot provisioning, manifest editing, and admin approval. With the new Foundry → M365 integration, the process is straightforward and intuitive.

Key capabilities

- No-code publishing — Prepare, package, and publish agents directly from Foundry Playground.
- Unified build — A single agent package powers multiple Microsoft 365 channels, including Teams Chat, Microsoft 365 Copilot Chat, and BizChat.
- Agent-type agnostic — Works seamlessly whether you have a prompt agent, hosted agent, or workflow agent.
- Built-in governance — Every agent published to your organization is automatically routed through the Microsoft 365 Admin Center (MAC) for review, approval, and monitoring.
- Downloadable package — Developers can download a .zip for local testing or submission to the Microsoft Marketplace.

For pro-code developers, the experience is also simplified. A C# code-first sample in the Agent Toolkit for Visual Studio is searchable, featured, and ready to use.

Why It Matters

This integration isn't just about convenience; it's about scale, control, and trust.

- Faster time to value — Deliver intelligent agents where people already work, without infrastructure overhead.
- Enterprise control — Admins retain full oversight via the Microsoft 365 Admin Center, with built-in approval, review, and governance flows.
- Developer flexibility — Both low-code creators and pro-code developers benefit from the unified publishing experience.
- Better together — This capability lays the groundwork for Agent 365 publishing and deeper M365 integrations.

Real-world scenarios

YoungWilliams built Priya, an AI agent that helps handle government service inquiries faster and more efficiently. Using the one-click publishing flow, Priya was quickly deployed to Microsoft Teams and M365 Copilot without manual setup. This allowed YoungWilliams' customers to provide faster, more accurate responses while keeping governance and compliance intact.

"Integrating Microsoft Foundry with Microsoft 365 Copilot fundamentally changed how we deliver AI solutions to our government partners," said John Tidwell, CTO of YoungWilliams. "With Foundry's one-click publishing to Teams and Copilot, we can take an idea from prototype to production in days instead of weeks—while maintaining the enterprise-grade security and governance our clients expect.
It's a game changer for how public services can adopt AI responsibly and at scale."

Availability

Publishing from Foundry to M365 is in Public Preview within the Foundry Playground. Developers can explore the preview in Microsoft Foundry and test the Teams / M365 publishing flow today. SDK and CLI extensions for code-first publishing are generally available.

What's Next in the Better Together Series

This blog is part of the broader Better Together series connecting Microsoft Foundry, Microsoft 365, Agent 365, and Microsoft Copilot Studio. Continue the journey: Foundry + Agent 365 — Native Integration for Enterprise AI (Link)

Start building today

[Quickstart — Publish an Agent to Microsoft 365 ] Try it now in the new Foundry Playground

Foundry Agent Service at Ignite 2025: Simple to Build. Powerful to Deploy. Trusted to Operate.
The upgraded Foundry Agent Service delivers a unified, simplified platform with managed hosting, built-in memory, tool catalogs, and seamless integration with Microsoft Agent Framework. Developers can now deploy agents faster and more securely, leveraging one-click publishing to Microsoft 365 and advanced governance features for streamlined enterprise AI operations.

Foundry IQ: Unlocking ubiquitous knowledge for agents
Introducing Foundry IQ by Azure AI Search in Microsoft Foundry. Foundry IQ is a centralized knowledge layer that connects agents to data with the next generation of retrieval-augmented generation (RAG). Foundry IQ includes the following features:

- Knowledge bases: Available directly in the new Foundry portal, knowledge bases are reusable, topic-centric collections that ground multiple agents and applications through a single API.
- Automated indexed and federated knowledge sources: Expand what data an agent can reach by connecting to both indexed and remote knowledge sources. For indexed sources, Foundry IQ delivers automatic indexing, vectorization, and enrichment for text, images, and complex documents.
- Agentic retrieval engine in knowledge bases: A self-reflective query engine that uses AI to plan, select sources, search, rank, and synthesize answers across sources with configurable "retrieval reasoning effort."
- Enterprise-grade security and governance: Support for document-level access control, alignment with existing permissions models, and options for both indexed and remote data.

Foundry IQ is available in public preview through the new Foundry portal and Azure portal with Azure AI Search. Foundry IQ is part of Microsoft's intelligence layer with Fabric IQ and Work IQ.

Pantone's Palette Generator enhances creative exploration with agentic AI on Azure
Color can be powerful. When creative professionals shape the mood and direction of their work, color plays a vital role because it provides context and cues for the end product or creation. For more than 60 years, creatives from all areas of design—including fashion, product, and digital—have turned to Pantone color guides to translate inspiration into precise, reproducible color choices. These guides offer a shared language for colors, as well as inspiration and communication across industries. Once rooted in physical tools, Pantone has evolved to meet the needs of modern creators through its trend forecasting, consulting services, and digital platform.

Today, Pantone Connect and its multi-agent solution called the Pantone Palette Generator seamlessly bring color inspiration and accuracy into everyday design workflows (as well as the New York City mayoral race). Simply by typing in a prompt, designers can generate palettes in seconds. Available in Pantone Connect, the tool uses Azure services like Microsoft Foundry, Azure AI Search, and Azure Cosmos DB to serve up the company's vast collection of trend and color research from the color experts at the Pantone Color Institute.

"Pantone bridges art and industry—turning creative concepts into reality," says Sky Kelley, President of Pantone. "Years of research and insights are reached in seconds instead of days. Now, with Microsoft Foundry, creatives can use agents to get instant color palettes and suggestions based on human insights and trend direction."

Turning Pantone's color legacy into an AI offering

The Palette Generator accelerates the process of researching colors and helps designers find inspiration or validate some of their ideas through trend-backed research. "Pantone wants to be where our customers are," says Rohani Jotshi, Director of Software Engineering and Data at Pantone.
"As workflows become increasingly digital, we wanted to give our customers a way to find inspiration while keeping the same level of accuracy and trust they expect from Pantone."

The Palette Generator taps into thousands of articles from Pantone's Color Insider library, as well as trend guides and physical color books, in a way that preserves the company's color standards science while streamlining the creative process. Built entirely on Microsoft Foundry, the solution uses Azure AI Search for agentic retrieval-augmented generation (RAG) and Azure OpenAI in Foundry Models to reason over the data. It quickly serves up palette options in response to questions like "Show me soft pastels for an eco-friendly line of baby clothes" or "I want to see vibrant metallics for next spring."

Over the course of two months, the Pantone team built the initial proof of concept for the Palette Generator, using GitHub Copilot to streamline the process and save over 200 hours of work across multiple sprints. This allowed Pantone's engineers to focus on improving prompt engineering, adding new agent capabilities, and refining orchestration logic rather than writing repetitive code.

Building a multi-agent architecture that accelerates creativity

The Pantone team worked with Microsoft to develop the multi-agent architecture, which is made up of three connected agents. Using Microsoft Agent Framework—an open source development kit for building AI orchestration systems—it was a straightforward process to bring the agents together into one workflow. "The Microsoft team recommended Microsoft Agent Framework and when we tried it, we saw how it was extremely fast and easy to create architectural patterns," says Kristijan Risteski, Solutions Architect at Pantone.
"With Microsoft Agent Framework, we can spin up a model in five lines of code to connect our agents."

When a user types in a question, they interact with an orchestrator agent that routes prompts and coordinates the more specialized agents. Behind the scenes, an additional agent retrieves contextually relevant insights from Pantone's proprietary Color Insider dataset. Using Azure AI Search with vectorized data indexing, this agent interprets the semantics of a user's query rather than relying solely on keywords. A third agent then applies rules from color science to assemble a balanced palette. This agent ensures the output is a color combination that meets harmony, contrast, and accessibility standards. The result is a set of Pantone-curated colors that match the emotional and aesthetic tone of the request. "All of this happens in seconds," says Risteski.

To manage conversation flow and achieve long-term data persistence, Pantone uses Azure Cosmos DB, which stores user sessions, prompts, and results. The database not only enables designers to revisit past palette explorations but also provides Pantone with valuable usage intelligence to refine the system over time. "We use Azure Cosmos DB to track inputs and outputs," says Risteski. "That data helps us fine-tune prompts, measure engagement, and plan how we'll train future models."

Improving accuracy and performance with Azure AI Search

With Azure AI Search, the Palette Generator can understand the nuance of color language. Instead of relying solely on keyword searches that might miss the complexity of words like "vibrant" or "muted," Pantone's team decided to use a vectorized index for more accurate palette results. Using the built-in vectorization capability of Azure AI Search, the team converted their color knowledge base—including text-based color psychology and trend articles—into numerical embeddings.
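Vector retrieval of the kind described here ranks documents by the similarity of their embeddings to the query embedding, rather than by keyword overlap. The toy sketch below shows the core idea with hand-made three-dimensional vectors; a real system would use an embedding model and Azure AI Search's vector index, so the documents, dimensions, and values here are purely illustrative:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: dimensions loosely stand for (calm, vibrant, natural).
docs = {
    "serene coastal blues": [0.9, 0.1, 0.6],
    "vibrant neon metallics": [0.1, 0.95, 0.05],
    "muted earth tones": [0.7, 0.05, 0.9],
}

# Embedding for a query like "colors that feel serene and oceanic".
query = [0.85, 0.05, 0.7]

# Rank documents by similarity to the query, most similar first.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked[0])  # serene coastal blues
```

Because ranking happens in embedding space, "serene and oceanic" lands near "coastal blues" even though the two phrases share no keywords, which is exactly the intent-matching behavior the Pantone team describes.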
"Overall, vector search gave us better results because it could understand the intent of the prompt, not just the words," says Risteski. "If someone types, 'Show me colors that feel serene and oceanic,' the system understands intent. It finds the right references across our color psychology and trend archives and delivers them instantly."

The team also found ways to reduce latency as they evolved their proof of concept. Initially, they encountered slow inference times and performance lags when retrieving search results. By switching from GPT-4.1 to GPT-5, latency improved. And using Azure AI Search to manage ranking and filtering results helped reduce the number of calls to the large language model (LLM). "With Azure, we just get the articles, put them in a bucket, and say 'index it now,'" says Risteski. "It takes one or two minutes—and that's it. The results are so much better than traditional search."

Moving from inspiration to palettes faster

The Palette Generator has transformed how designers and color enthusiasts interact with Pantone's expertise. What once took weeks of research and review can now be done in seconds. "Typically, if someone wanted to develop a palette for a product launch, it might take many months of research," says Jotshi. "Now, they can type one sentence to describe their inspiration, then immediately find Pantone-backed insight and options. Human curation will still be hugely important, but a strong set of starting options can significantly accelerate the palette development process."

Expanding the palette: The next phase for Pantone's design agent

Rapidly launching the Palette Generator in beta has redefined what the Pantone engineering team thought was possible. "We're a small development team, but with Azure we built an enterprise-grade AI system in a matter of weeks," says Risteski.
"That's a huge win for us."

Next up, the team plans to migrate the entire orchestration layer to Azure Functions, moving to a fully scalable, serverless deployment. This will allow Pantone to run its agents more efficiently, handle variable workloads automatically, and integrate seamlessly with other Azure products such as Microsoft Foundry and Azure Cosmos DB. At the same time, Pantone plans to expand its multi-agent system to include new specialized agents, including one focused on palette harmony and another focused on trend prediction.

RosettaFold3 Model at Ignite 2025: Extending Frontier of Biomolecular Modeling in Microsoft Foundry
Today at Microsoft Ignite 2025, we are excited to launch RosettaFold3 (RF3) on Microsoft Foundry, making a new generation of multi-molecular structure prediction models available to researchers, biotech innovators, and scientific teams worldwide. RF3 was developed by the Baker lab and DiMaio lab from the Institute for Protein Design (IPD) at the University of Washington, in collaboration with Microsoft's AI for Good lab and other research partners. RF3 is now available in Foundry Models, offering scalable access to a new generation of biomolecular modeling capabilities. Try RF3 now in Foundry Models.

A new multi-molecular modeling system, now accessible in Foundry Models

RF3 represents a leap forward in biomolecular structure prediction. Unlike previous-generation models focused narrowly on proteins, RF3 can jointly model:

- Proteins (enzymes, antibodies, peptides)
- Nucleic acids (DNA, RNA)
- Small molecules/ligands
- Multi-chain complexes

This unified modeling approach allows researchers to explore entire interaction systems—protein–ligand docking, protein–RNA assembly, protein–DNA binding, and more—in a single end-to-end workflow.

Key advances in RF3

RF3 incorporates several advancements in protein and complex prediction, making it the state-of-the-art open-source model.

Joint atom-level modeling across molecular types: RF3 can simultaneously model all atom types across proteins, nucleic acids, and ligands—enabled by innovations in multimodal transformers and generative diffusion models.

Unprecedented control, atom-level conditioning: Users can provide the 3D structure of a ligand or compound, and RF3 will fold a protein around it. This atom-level conditioning unlocks:

- Targeted drug-design workflows
- Protein pocket and surface engineering
- Complex interaction modeling

Example showing how RF3 allows conditioning on user inputs, offering greater control of the model's predictions.
### Broad templating support for structure-guided design

RF3 allows users to guide structure prediction using:

- Distance constraints
- Geometric templates
- Experimental data (e.g., cryo-EM)

This flexibility is limited in other models and makes RF3 ideal for hybrid computation–wet-lab workflows.

### Extensible foundation for scientific and industrial research

RF3 can be adapted to diverse application areas, including enzyme engineering, materials science, agriculture, sustainability, and synthetic biology.

## Use cases

RF3’s multi-molecular modeling capabilities have broad applicability beyond fundamental biology. The model enables breakthroughs across medicine, materials science, sustainability, and defense, where structure-guided design translates directly into measurable innovation.

| Sector | Illustrative Use Cases |
| --- | --- |
| Medicine | Gene therapy research: RF3 enables the design of custom proteins that bind specific DNA sequences for targeted genome repair. |
| Materials Science | Inspired by natural protein fibers such as wool and silk, IPD researchers are designing synthetic fibers with tunable mechanical properties and texture, enabling sustainable textiles and advanced materials. |
| Sustainability | RF3 supports enzyme design for plastic degradation and waste recycling, contributing to circular bioeconomy initiatives. |
| Disease & Vaccine Development | RF3-powered workflows will contribute to structure-guided vaccine design, building on IPD’s prior success with the SKYCovione COVID-19 nanoparticle vaccine developed with SK Bioscience and GSK. |
| Crop Science & Food Security | Support for gene-editing technology (via protein–DNA binding prediction) in agricultural research, and design of small antimicrobial or antifungal peptides to fight crop and tree diseases such as citrus greening. |
| Defense & Biosecurity | Enables detection and rapid countermeasure design against toxins or novel pathogens; models of this class are being studied for biosafety applications (Horvitz et al., Science, 2025). |
| Aerospace & Extreme Environments | Supports design of lightweight, self-healing, and radiation-resistant biomaterials capable of functioning under non-terrestrial conditions (e.g., high temperature, pressure, or radiation exposure). |

RF3 has the potential to lower the cost of exploratory modeling, raise success rates in structure-guided discovery, and expand biomolecular AI into domains previously limited by sparse experimental structures or difficult multi-molecular interactions. Because the model and training framework are open and extensible, partners can also adapt RF3 for their own research, making it a foundation for the next generation of biomolecular AI on Microsoft Foundry.

## Get started today

RosettaFold3 (RF3) brings advanced multi-molecular modeling capabilities into Foundry Models, enabling researchers and biotech teams to run structure-guided workflows with greater flexibility and speed. Within Microsoft Foundry, you can integrate RF3 into your existing scientific processes, combining your data, templates, and downstream analysis tools in one connected environment.

Start exploring the next frontier of biomolecular modeling with RosettaFold3 in Foundry Models. You can also discover other early-stage AI innovations in Foundry Labs.

If you’re attending Microsoft Ignite 2025, or watching on demand, be sure to check out our session:

**Session: AI Frontier in Foundry Labs: Experiment Today, Lead Tomorrow**

About the session: “Curious about the next wave of AI breakthroughs? Get a sneak peek into the future of AI with Azure AI Foundry Labs—your front door to experimental models, multi-agent orchestration prototypes, Agent Factory blueprints, and edge innovations.
If you’re a researcher eager to test, validate, and influence what’s next in enterprise AI, this session is your launchpad. See how Labs lets you experiment fast, collaborate with innovators, and turn new ideas into real impact.”

# MiniMax-M2: The Open-Source Innovator in Coding and Agentic Workflows, Now in Azure AI Foundry
We’re thrilled to announce that MiniMax-M2, the latest breakthrough from MiniMax, is now available in Azure AI Foundry through Hugging Face. Built for developers, this model advances what’s possible in coding, multi-turn reasoning, and agentic workflows, while delivering enhanced efficiency and scalability.

## What makes MiniMax-M2 different?

MiniMax-M2 isn’t just another large language model: it’s a 230B-parameter Mixture of Experts (MoE) architecture that activates 10B parameters per task, delivering strong performance at lower cost. This design enables:

- **Enhanced efficiency:** Achieve top-tier results at up to 8% of the cost of comparable models.
- **Increased context handling:** With an industry-leading 204K-token context window and 131K-token output capacity, MiniMax-M2 can process entire codebases, multi-file projects, and long-form documentation without losing coherence.
- **Commercial readiness:** Released under the Apache 2.0 license, MiniMax-M2 is open source and ready to deploy into your workflow.

The model ranked #5 overall on the Artificial Analysis Intelligence Index, making MiniMax-M2 one of the highest-ranked open-source models globally, outperforming many proprietary systems in reasoning, coding, and language understanding. For organizations seeking high-throughput, low-latency deployments, MiniMax-M2 runs seamlessly on an 8xH100 setup using vLLM, making it both powerful and practical.

The graphic above compares MiniMax-M2’s performance across multiple industry-standard benchmarks against leading models such as DeepSeek-V3.2, GLM-4.6, and Gemini 2.5 Pro. While proprietary models such as GPT-5 (thinking) and Claude Sonnet 4.5 remain strong in certain areas, MiniMax-M2 delivers competitive results as an open-source solution, offering enterprise-grade performance for organizations seeking high-quality AI without compromising scalability or flexibility.

## Why it matters for developers

MiniMax-M2 is built for modern development workflows.
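To put the 204K-token context window in perspective, the sketch below estimates whether a set of source files fits in a single prompt. The ~4-characters-per-token heuristic is a rough assumption for illustration only; use the model's actual tokenizer for accurate counts.

```python
# Rough heuristic: ~4 characters per token (an assumption; use the
# model's tokenizer for real counts).
CHARS_PER_TOKEN = 4
CONTEXT_WINDOW = 204_000   # MiniMax-M2 context window (tokens)
MAX_OUTPUT = 131_000       # MiniMax-M2 output capacity (tokens)

def estimate_tokens(text):
    """Crude token estimate from character count."""
    return len(text) // CHARS_PER_TOKEN + 1

def fits_in_context(files, reserved_output_tokens=8_000):
    """Check whether all file contents plus a reserved output budget
    fit inside the model's context window."""
    prompt_tokens = sum(estimate_tokens(content) for content in files.values())
    budget = CONTEXT_WINDOW - min(reserved_output_tokens, MAX_OUTPUT)
    return prompt_tokens, prompt_tokens <= budget

project = {"main.py": "x" * 40_000, "utils.py": "y" * 20_000}
tokens, ok = fits_in_context(project)
print(f"~{tokens} prompt tokens, fits: {ok}")
```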
Whether you’re generating production-ready code, automating agentic tasks, or managing large-scale projects, this model delivers accuracy, speed, and flexibility while keeping infrastructure costs in check.

- **Mixture of Experts architecture:** 230B total parameters, 10B active per task, for cost-effective scalability.
- **Ultra-large context window:** 204K tokens for comprehensive project understanding.
- **Advanced coding intelligence:** Optimized for code generation, debugging, multi-file editing, and test-driven development.
- **Agentic workflow support:** Handles complex tool integrations and multi-step problem-solving with ease.
- **Open-source freedom:** Apache 2.0 license for commercial use.

MiniMax-M2 can support finance and legal workflows by automating document-heavy tasks. In finance, it could help generate audit reports, investment summaries, and portfolio analyses by processing large datasets and regulatory guidelines in a single pass, improving accuracy and reducing manual effort. In legal, it could assist with case-law research by summarizing extensive statutes and precedents, extracting relevant insights, and providing context-specific recommendations. With its large context window and reasoning capabilities, MiniMax-M2 enables faster, more efficient handling of complex information, allowing professionals to focus on higher-value activities.

## Get started today

MiniMax-M2 is now live in Azure AI Foundry. Explore its capabilities and try it today.

# BYO Thread Storage in Azure AI Foundry using Python
*Build scalable, secure, and persistent multi-agent memory with your own storage backend*

As AI agents evolve beyond one-off interactions, persistent context becomes a critical architectural requirement. Azure AI Foundry’s latest update introduces a powerful capability — Bring Your Own (BYO) Thread Storage — enabling developers to integrate custom storage solutions for agent threads. This feature empowers enterprises to control how agent memory is stored, retrieved, and governed, aligning with compliance, scalability, and observability goals.

## What Is “BYO Thread Storage”?

In Azure AI Foundry, a thread represents a conversation or task-execution context for an AI agent. By default, thread state (messages, actions, results, metadata) is stored in Foundry’s managed storage. With BYO Thread Storage, you can now:

- Store threads in your own database — Azure Cosmos DB, SQL, Blob, or even a vector DB.
- Apply custom retention, encryption, and access policies.
- Integrate with your existing data and governance frameworks.
- Enable cross-region disaster recovery (DR) setups seamlessly.

This gives enterprises full control of data lifecycle management — a big step toward AI-first operational excellence.

## Architecture Overview

A typical setup involves:

1. **Azure AI Foundry Agent Service** — hosts your multi-agent setup.
2. **Custom thread storage backend** — e.g., Azure Cosmos DB, Azure Table, or PostgreSQL.
3. **Thread adapter** — a Python class implementing the Foundry storage interface.
4. **Disaster recovery (DR) replication** — optional replication of threads to a secondary region.
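The thread adapter above can be sketched as a small storage contract that any backend implements. The method names below are illustrative assumptions — Foundry does not prescribe this exact Python protocol — but they capture the shape of a BYO backend, shown here with an in-memory implementation useful as a test double before wiring up Cosmos DB.

```python
from abc import ABC, abstractmethod

class ThreadStore(ABC):
    """Illustrative storage contract for a BYO thread backend
    (method names are assumptions, not a Foundry-defined interface)."""

    @abstractmethod
    def save_thread(self, thread_id, state): ...

    @abstractmethod
    def load_thread(self, thread_id): ...

    @abstractmethod
    def delete_thread(self, thread_id): ...

class InMemoryThreadStore(ThreadStore):
    """Test double: same contract, dict-backed instead of Cosmos DB."""

    def __init__(self):
        self._threads = {}

    def save_thread(self, thread_id, state):
        self._threads[thread_id] = state

    def load_thread(self, thread_id):
        return self._threads.get(thread_id)

    def delete_thread(self, thread_id):
        self._threads.pop(thread_id, None)

store = InMemoryThreadStore()
store.save_thread("t1", {"messages": [{"role": "user", "content": "hi"}]})
print(store.load_thread("t1"))
```

Swapping the in-memory class for a Cosmos DB-backed one (as in the next section) requires no changes to callers, which is the point of the adapter layer.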
## Implementing BYO Thread Storage using Python

### Prerequisites

First, install the necessary Python packages:

```shell
pip install azure-ai-projects azure-cosmos azure-identity
```

### Setting Up the Storage Layer

```python
from azure.cosmos import CosmosClient
from azure.identity import DefaultAzureCredential
from datetime import datetime


class ThreadStorageManager:
    def __init__(self, cosmos_endpoint, database_name, container_name):
        credential = DefaultAzureCredential()
        self.client = CosmosClient(cosmos_endpoint, credential=credential)
        self.database = self.client.get_database_client(database_name)
        self.container = self.database.get_container_client(container_name)

    def create_thread(self, user_id, metadata=None):
        """Create a new conversation thread"""
        thread_id = f"thread_{user_id}_{datetime.utcnow().timestamp()}"
        thread_data = {
            'id': thread_id,
            'user_id': user_id,
            'messages': [],
            'created_at': datetime.utcnow().isoformat(),
            'updated_at': datetime.utcnow().isoformat(),
            'metadata': metadata or {},
        }
        self.container.create_item(body=thread_data)
        return thread_id

    def add_message(self, thread_id, role, content):
        """Add a message to an existing thread"""
        thread = self.container.read_item(item=thread_id, partition_key=thread_id)
        message = {
            'role': role,
            'content': content,
            'timestamp': datetime.utcnow().isoformat(),
        }
        thread['messages'].append(message)
        thread['updated_at'] = datetime.utcnow().isoformat()
        self.container.replace_item(item=thread_id, body=thread)
        return message

    def get_thread(self, thread_id):
        """Retrieve a complete thread"""
        try:
            return self.container.read_item(item=thread_id, partition_key=thread_id)
        except Exception as e:
            print(f"Thread not found: {e}")
            return None

    def get_thread_messages(self, thread_id):
        """Get all messages from a thread"""
        thread = self.get_thread(thread_id)
        return thread['messages'] if thread else []

    def delete_thread(self, thread_id):
        """Delete a thread"""
        self.container.delete_item(item=thread_id, partition_key=thread_id)
```

### Integrating with Azure AI Foundry

```python
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential


class ConversationManager:
    def __init__(self, project_endpoint, storage_manager):
        self.ai_client = AIProjectClient.from_connection_string(
            credential=DefaultAzureCredential(),
            conn_str=project_endpoint,
        )
        self.storage = storage_manager

    def start_conversation(self, user_id, system_prompt):
        """Initialize a new conversation"""
        thread_id = self.storage.create_thread(
            user_id=user_id,
            metadata={'system_prompt': system_prompt},
        )
        # Add system message
        self.storage.add_message(thread_id, 'system', system_prompt)
        return thread_id

    def send_message(self, thread_id, user_message, model_deployment):
        """Send a message and get AI response"""
        # Store user message
        self.storage.add_message(thread_id, 'user', user_message)

        # Retrieve conversation history
        messages = self.storage.get_thread_messages(thread_id)

        # Call Azure AI with conversation history
        response = self.ai_client.inference.get_chat_completions(
            model=model_deployment,
            messages=[
                {"role": msg['role'], "content": msg['content']}
                for msg in messages
            ],
        )
        assistant_message = response.choices[0].message.content

        # Store assistant response
        self.storage.add_message(thread_id, 'assistant', assistant_message)
        return assistant_message
```

### Usage Example

```python
# Initialize storage and conversation manager
storage = ThreadStorageManager(
    cosmos_endpoint="https://your-cosmos-account.documents.azure.com:443/",
    database_name="conversational-ai",
    container_name="threads",
)

conversation_mgr = ConversationManager(
    project_endpoint="your-project-connection-string",
    storage_manager=storage,
)

# Start a new conversation
thread_id = conversation_mgr.start_conversation(
    user_id="user123",
    system_prompt="You are a helpful AI assistant.",
)

# Send messages
response1 = conversation_mgr.send_message(
    thread_id=thread_id,
    user_message="What is machine learning?",
    model_deployment="gpt-4",
)
print(f"AI: {response1}")

response2 = conversation_mgr.send_message(
    thread_id=thread_id,
    user_message="Can you give me an example?",
    model_deployment="gpt-4",
)
print(f"AI: {response2}")

# Retrieve full conversation history
history = storage.get_thread_messages(thread_id)
for msg in history:
    print(f"{msg['role']}: {msg['content']}")
```

Key highlights:

- Threads are stored in Cosmos DB under your control.
- You can attach metadata such as region, owner, or compliance tags.
- Integrates natively with existing Azure identity and Key Vault.

## Disaster Recovery & Resilience

When coupled with geo-replicated Cosmos DB or Azure Storage RA-GRS, your BYO thread storage becomes resilient by design:

- Primary writes in East US replicate to Central US.
- Foundry auto-detects failover and reconnects to the secondary region.
- Threads remain available during outages, ensuring operational continuity.

This aligns with the AI-First Operational Excellence architecture theme, where reliability and observability drive intelligent automation.

## Best Practices

| Area | Recommendation |
| --- | --- |
| Security | Use Azure Key Vault for credentials & encryption keys. |
| Compliance | Configure data residency & retention in your own DB. |
| Observability | Log thread CRUD operations to Azure Monitor or Application Insights. |
| Performance | Use async I/O and partition keys for large workloads. |
| DR | Enable geo-redundant storage & test failover regularly. |

## When to Use BYO Thread Storage

| Scenario | Why it helps |
| --- | --- |
| Regulated industries (BFSI, healthcare, etc.) | Maintain data control & audit trails |
| Multi-region agent deployments | Support DR and data sovereignty |
| Advanced analytics on conversation data | Query threads directly from your DB |
| Enterprise observability | Unified monitoring across Foundry + Ops |

## The Future

BYO Thread Storage opens doors to advanced use cases — federated agent memory, semantic retrieval over past conversations, and dynamic workload failover across regions. For architects, this feature is a key enabler for secure, scalable, and compliant AI system design. For developers, it means more flexibility, transparency, and integration power.

## Summary

| Feature | Benefit |
| --- | --- |
| Custom thread storage | Full control over data |
| Python adapter support | Easy extensibility |
| Multi-region DR ready | Business continuity |
| Azure-native security | Enterprise-grade safety |

## Conclusion

Implementing BYO thread storage in Azure AI Foundry gives you the flexibility to build AI applications that meet your specific requirements for data governance, performance, and scalability. By taking control of your storage, you can create more robust, compliant, and maintainable AI solutions.
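As a closing illustration of the "semantic retrieval over past conversations" direction mentioned in The Future section, here is a toy keyword-overlap search over stored thread messages. A production system would use embeddings and a vector index instead of this word-overlap score; the snippet only shows where such retrieval plugs into thread storage.

```python
import re

def words(text):
    """Lowercased alphanumeric tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(query, text):
    """Toy relevance score: fraction of query words present in the text.
    A real system would compare embeddings instead."""
    q = words(query)
    return len(q & words(text)) / max(len(q), 1)

def search_messages(messages, query, top_k=2):
    """Rank stored thread messages against a query."""
    ranked = sorted(messages, key=lambda m: score(query, m["content"]), reverse=True)
    return ranked[:top_k]

# Messages as stored by a thread backend like ThreadStorageManager
history = [
    {"role": "user", "content": "How do I rotate my Cosmos DB keys?"},
    {"role": "assistant", "content": "Use Azure Key Vault to store and rotate keys."},
    {"role": "user", "content": "What regions support geo-replication?"},
]
top = search_messages(history, "geo replication regions", top_k=1)
print(top[0]["content"])
```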