𝐃𝐚𝐲 𝟓 𝐨𝐟 𝟓 – 𝐏𝐫𝐨𝐭𝐨𝐭𝐲𝐩𝐞 𝐭𝐨 𝐏𝐫𝐨𝐝𝐮𝐜𝐭𝐢𝐨𝐧 (𝐆𝐨𝐨𝐠𝐥𝐞 𝐀𝐈 𝐀𝐠𝐞𝐧𝐭𝐬)

The final day focused on how to take AI agents into real production environments. We explored deployment workflows, CI/CD practices, and scaling strategies that ensure reliability at the enterprise level.

The core takeaway was the A2A Protocol, which enables agents to communicate across systems and teams. Through the codelabs, I built agents that expose A2A endpoints and integrated remote agents as if they were local.

A strong finish to a powerful 5-day learning experience.

📂 Notes: https://lnkd.in/eaCzCui8

#AI #Agents #Google #LearningJourney #AIAgents #A2A
Deploying AI Agents with A2A Protocol: Day 5
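To make the "remote agents as if they were local" idea concrete, here is a heavily simplified Python sketch of the pattern: an agent publishes a discovery card and accepts tasks over HTTP. The paths, card fields, and payload shape are illustrative assumptions (FastAPI is used only for brevity), not the exact A2A wire format or the ADK API used in the codelab.

```python
# Conceptual sketch of the A2A pattern: an agent publishes a "card" describing
# its skills and accepts tasks over plain HTTP, so a remote caller can treat it
# like a local function. Paths, field names, and payload shape are simplified
# assumptions for illustration, not the exact A2A / ADK wire format.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

AGENT_CARD = {
    "name": "currency-helper",
    "description": "Answers simple currency-conversion questions.",
    "skills": [{"id": "convert", "description": "Convert an amount between currencies"}],
}

class Task(BaseModel):
    text: str

@app.get("/.well-known/agent.json")
def agent_card():
    # Discovery: remote callers fetch this card to learn what the agent can do.
    return AGENT_CARD

@app.post("/tasks")
def handle_task(task: Task):
    # A real agent would call an LLM and tools here; this returns a canned reply.
    return {"status": "completed", "output": f"Handled: {task.text}"}
```

Run it under any ASGI server (e.g. uvicorn); a caller then GETs the card to discover the skills and POSTs a task to `/tasks`, which is roughly the hand-off that the codelab's remote-agent integration abstracts away.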
-
𝐃𝐚𝐲 𝟑 & 𝟒 𝐨𝐟 𝟓 – 𝐀𝐈 𝐀𝐠𝐞𝐧𝐭𝐬 𝐈𝐧𝐭𝐞𝐧𝐬𝐢𝐯𝐞 𝐂𝐨𝐮𝐫𝐬𝐞 𝐰𝐢𝐭𝐡 𝐆𝐨𝐨𝐠𝐥𝐞

Progressing through the intensive program, the last two days focused on two major pillars of building robust AI agents: Context Engineering and Agent Quality.

Day 3 explored how to make agents stateful using Sessions and Memory, enabling them to maintain context, personalize interactions, and support coherent multi-turn conversations. Through the codelabs, we implemented working memory, long-term memory, and dynamic context assembly using ADK.

Day 4 shifted to evaluation and observability, introducing Logs, Traces, and Metrics to help interpret an agent's decision-making. We also explored scalable evaluation methods like LLM-as-a-Judge and human-in-the-loop (HITL) review to assess response quality and tool usage.

These modules highlighted how state, visibility, and evaluation shape agents into reliable, real-world systems.

📂 Notes and learnings: https://lnkd.in/eaCzCui8

#AI #Agents #Google #MachineLearning #LearningJourney #AIAgents #Kaggle #Observability #AIQuality
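The LLM-as-a-Judge idea from Day 4 is easy to prototype. Below is a minimal sketch, assuming the google-genai Python client and an available Gemini model; the rubric, scoring scale, and model name are illustrative choices, not the course's evaluation setup.

```python
# Minimal LLM-as-a-Judge sketch: a second model grades an agent's answer
# against a rubric. Assumes the google-genai client and a GOOGLE_API_KEY in
# the environment; the rubric and model name are illustrative.
from google import genai

client = genai.Client()

JUDGE_PROMPT = """You are grading an AI agent's answer.
Question: {question}
Agent answer: {answer}
Score the answer from 1 (wrong or unhelpful) to 5 (correct and complete),
then give a one-sentence justification. Reply as: SCORE: <n> | REASON: <text>"""

def judge(question: str, answer: str) -> str:
    response = client.models.generate_content(
        model="gemini-2.0-flash",
        contents=JUDGE_PROMPT.format(question=question, answer=answer),
    )
    return response.text  # e.g. "SCORE: 4 | REASON: ..."

print(judge("What is the capital of France?", "Paris, the largest French city."))
```

In practice you would run this over a batch of logged agent responses and sample the low scores for human (HITL) review rather than trusting the judge blindly.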
-
🚨 The AI race just moved into your terminal. 💻

Claude Code started it, and now every major AI company is following. But why are they all building CLIs instead of browser apps? The answer reveals where AI is truly headed: portable AI engines that can run anywhere.

Here's what's really happening 👇

1️⃣ Claude Code's Big Shift — It turned your terminal into an AI workspace, giving agents full access to local files and tools.
2️⃣ Why CLIs Matter — They make AI portable and deployable across any device or server, not just your PC.
3️⃣ Rise of Bash Agents — AI models now use system tools like bash and git to execute commands and interact with local environments.
4️⃣ From Local to Cloud — Claude agents can run in GitHub workflows, CI/CD pipelines, and even as remote MCP servers: real autonomous infrastructure.
5️⃣ The Future — Anthropic's portable agent SDKs mean your AI won't live in apps anymore; it'll run your systems.

The CLI war has begun, and Claude Code is leading the charge.

Watch the full breakdown here 🎥👇
🔗 https://lnkd.in/damZf-yJ

💬 Join our Discord to explore AI tools, build agents, and connect with other builders:
👉 https://lnkd.in/duyM6RNg
-
You're starting your Generative AI POC project and already thinking about RAG, vector databases, embedding updates, data refreshes, evaluation metrics... 🧠

But pause for a second: is your dataset really that big? Are those problems part of your initial goal, or just distractions?

👉 Start simple. Drop a headless CLI agent like Claude Code or Codex in a folder with your exported knowledge base and let it search for answers the same way it finds functions in code.

No fancy stack. No infra. Just a real baseline to learn from. 🚀

#DevOps #PlatformEngineering #GenerativeAI
-
Day 3 — Google 5-Day AI Agents Intensive

Today's focus was on Context Engineering, the discipline that enables agents to think, remember, and learn across conversations — transforming them from stateless responders into truly stateful and adaptive systems.

🧩 Context Engineering
Context Engineering is the process of dynamically assembling and managing information inside an LLM's context window to maintain continuity and intelligence over time. It relies on two core pillars:

1. Sessions (Short-Term Memory)
The workspace for a continuous conversation — storing dialogue history and working state. Key challenges include:
• Context rot due to excessive history
• Compaction via token-based truncation and recursive summarization
• Persistence and isolation for secure, multi-user environments

2. Memory (Long-Term Knowledge)
A persistent knowledge base that lets the agent "remember" facts, preferences, and context across sessions. Unlike RAG (an expert on the world), memory makes the agent an expert on the user.

Memory Lifecycle:
• Extraction: distilling signal from conversation history
• Consolidation: merging, resolving conflicts, and pruning stale data
• Retrieval: injecting memories by relevance, recency, and importance
• Security: tracking provenance and preventing memory poisoning

🧪 Hands-On Labs (3A) — Sessions
Practiced the fundamentals of building stateful agents:
✅ Implemented session services to preserve dialogue history
✅ Used context compaction for long interactions
✅ Persisted sessions in databases
✅ Managed structured state across conversations
✅ Explored production concerns like latency, isolation, and persistence

🧠 Hands-On Labs (3B) — Memory
Extended session context with long-term memory using ADK's MemoryService:
✅ Added memory storage and retrieval mechanisms
✅ Transferred session data to persistent memory
✅ Built search and recall functions for past knowledge
✅ Automated consolidation and proactive memory loading
✅ Compared reactive (on-demand) vs proactive (always-loaded) strategies

This integration bridges the gap between short-term interaction and long-term intelligence.

💡 Key Insight
Sessions make an agent responsive. Memory makes it self-aware. Together, they enable continuity, personalization, and truly intelligent behavior.

Excited for Day 4, where we'll explore advanced orchestration and control strategies for multi-agent systems.

#AIAgents #Google #Gemini #ADK #AgentOps #ContextEngineering #Memory #LLM #MachineLearning #AIEngineering #Day3 #Kaggle
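As a companion to the compaction point above ("token-based truncation and recursive summarization"), here is a framework-agnostic sketch of a session that folds older turns into a rolling summary once a token budget is exceeded. It is a conceptual illustration, not ADK's Session API: the character-based token estimate, the budget, and the summarize() stub are assumptions you would replace with a real tokenizer and an LLM summarization call.

```python
# Conceptual sketch of session compaction: keep recent turns verbatim and fold
# older turns into a rolling summary once a rough token budget is exceeded.
from dataclasses import dataclass, field

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # crude heuristic, not a real tokenizer

def summarize(turns: list[str]) -> str:
    # Placeholder: in practice this would be a recursive LLM summarization call.
    return "Summary of earlier turns: " + " / ".join(t[:40] for t in turns)

@dataclass
class Session:
    budget_tokens: int = 200
    summary: str = ""
    turns: list[str] = field(default_factory=list)

    def add_turn(self, role: str, text: str) -> None:
        self.turns.append(f"{role}: {text}")
        self._compact()

    def _compact(self) -> None:
        # While over budget, fold the oldest half of the history into the summary.
        while sum(estimate_tokens(t) for t in self.turns) > self.budget_tokens and len(self.turns) > 2:
            half = len(self.turns) // 2
            old, self.turns = self.turns[:half], self.turns[half:]
            self.summary = summarize(([self.summary] if self.summary else []) + old)

    def context(self) -> str:
        # What would actually be assembled into the LLM's context window.
        return "\n".join(([self.summary] if self.summary else []) + self.turns)

# Quick demonstration: the summary grows while the verbatim tail stays short.
s = Session(budget_tokens=40)
for i in range(8):
    s.add_turn("user", f"Message number {i} with some extra words to use up the budget")
print(s.context())
```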
-
🤖 Day 2 – 5-Day AI Agents Intensive with Google by Kaggle

Yesterday's focus was on one of the most powerful aspects of intelligent systems — Agent Tools and Interoperability using the Model Context Protocol (MCP). AI agents become truly capable when they can perform actions beyond their training data, and that's exactly what the day's lessons explored.

💡 Key Takeaways:
> Learned how external tool functions extend an agent's abilities — enabling it to access real-time data and perform specialized operations.
> Explored the Model Context Protocol (MCP) — its architecture, communication layers, and how it enables secure, interoperable agent ecosystems.
> Understood enterprise readiness gaps and risks when scaling MCP-based systems.
> Discovered best practices for tool design, ensuring safety, clarity, and efficiency in multi-agent environments.

💻 Hands-on Codelabs:
> Created custom tools by turning my own Python functions into agent actions 🧩
> Implemented long-running operations — where agents can pause tool calls and resume after human approval ✅
> Practiced safe tool orchestration using MCP for real-world interoperability

🎧 Wrapped up with an insightful NotebookLM podcast and whitepaper, bridging theory with practice.

The combination of Gemini, ADK, and MCP is truly transforming how AI agents think, act, and collaborate. Can't wait for Day 3, where we'll explore deeper orchestration and evaluation!

#GoogleAI #Kaggle #AIagents #MCP #Gemini #AgentDevelopmentKit #MachineLearning #ArtificialIntelligence #AItools #Interoperability #AIAgentsIntensive #KaggleMentor3 #LearningJourney
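For a concrete flavor of the "turn a Python function into a tool" codelab, here is a minimal sketch assuming the official MCP Python SDK (the `mcp` package and its FastMCP helper). The `convert` tool itself is a toy example, not part of the course materials.

```python
# Minimal sketch: expose a plain Python function as an MCP tool.
# Assumes the official MCP Python SDK (pip install mcp).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("currency-tools")

@mcp.tool()
def convert(amount: float, rate: float) -> float:
    """Convert an amount using a fixed exchange rate supplied by the caller."""
    return round(amount * rate, 2)

if __name__ == "__main__":
    # Serves the tool over stdio so any MCP-capable agent or client
    # can discover and call `convert` without custom glue code.
    mcp.run()
```

The interoperability point is that nothing here is agent-specific: the same server can be wired into ADK, Claude Desktop, or any other MCP client.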
-
Nearly 8 in 10 companies are testing generative AI, yet fewer than 10% make it past pilots. The problem isn't the tech; it's missing structure. The Canvas Framework explained in this MongoDB blog offers a five-phase method to build agents ready for production, not just demos. https://lnkd.in/eaGmfzuZ
-
🧠 An Agent That Can't Remember Is Just a Tool!

Day 3 of the 5-Day AI Agents Intensive Course with Google was a deep dive into what makes an agent truly intelligent: Context Engineering (Sessions & Memory). This is what elevates an agent from a one-shot tool to a stateful, collaborative partner. The expert panel was incredible, providing a 360-degree view of memory, from foundational architecture to practical application.

Key Insights from the Experts:
• Context is Everything: I learned from Jay Alammar how crucial "context engineering" is. It's not just about bigger windows, but about strategically assembling information (instructions, history, retrieved data) to steer the agent's reasoning effectively.
• Foundational Architecture: Julia Wiesinger helped frame the discussion by explaining how memory and retrieval-based learning are core parts of the agent's orchestration layer, enabling agents to act beyond their static training.
• Building Agent Memory: Kimberly Milam provided a fantastic technical breakdown of how this is built in practice. We explored the Memory ETL Pipeline and the vital difference between volatile Sessions and persistent Memory (long-term, across-session recall).
• Augmenting Human Memory: Steven Johnson offered a powerful perspective on the why. He showed how tools like NotebookLM use these memory concepts to augment human creativity and research, turning AI into a true partner for thought.

🛠️ Hands-On Codelabs:
Putting this to work was the best part. The codelabs walked us through:
• Building a Stateful Agent: managing conversational history in a session.
• Giving an Agent Long-Term Memory: implementing a memory system that persists after the conversation ends.

A huge thanks to Maddula Sampath Kumar for the excellent codelab review, which helped solidify these complex, hands-on concepts.

This was a pivotal day. An agent with memory can learn, personalize, and build on past interactions.

#AIAgents #GenAI #ContextEngineering #Memory #Kaggle #Google #LLMs

Tagging the hosts: Kanchana Patlolla, Anant Nawalgaria
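The Memory ETL Pipeline mentioned above (extraction, consolidation, retrieval) can be sketched in a few lines. This is a toy illustration under stated assumptions, not ADK's MemoryService: extract_facts() stands in for an LLM extraction pass, and the ranking is naive keyword overlap plus recency.

```python
# Toy sketch of the memory lifecycle: extract facts from a transcript,
# consolidate them into a store (refreshing duplicates instead of piling them
# up), and retrieve by rough relevance and recency. All heuristics here are
# illustrative placeholders for LLM-driven steps.
import time

def extract_facts(transcript: str) -> list[str]:
    # Placeholder extraction: keep first-person statements the user makes.
    facts = []
    for line in transcript.splitlines():
        if line.lower().startswith("user: i "):
            facts.append(line.split(":", 1)[1].strip())
    return facts

class MemoryStore:
    def __init__(self):
        self.facts: dict[str, float] = {}  # fact -> last-updated timestamp

    def consolidate(self, new_facts: list[str]) -> None:
        for fact in new_facts:
            # Re-stating a known fact refreshes its recency rather than duplicating it.
            self.facts[fact] = time.time()

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        words = set(query.lower().split())

        def score(item):
            fact, ts = item
            overlap = len(words & set(fact.lower().split()))
            return (overlap, ts)  # relevance first, then recency

        ranked = sorted(self.facts.items(), key=score, reverse=True)
        return [fact for fact, _ in ranked[:k]]

store = MemoryStore()
store.consolidate(extract_facts("User: I prefer vegetarian recipes\nAgent: Noted!"))
print(store.retrieve("what recipes does the user prefer?"))
```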
-
Day 5 of the Google × Kaggle "5 Days of AI Agents" Intensive Course! Prototype → Production

The final day focused on what it really takes to bring agentic systems from experimentation to reliable production. The "Prototype to Production" whitepaper highlighted how observability, safety, reliability, and continuous evaluation become non-negotiable when deploying agents at scale.

Core insights from today:
• Moving from playground-level agents to production-grade systems requires strong guardrails, runtime checks, and clear capability boundaries.
• Evaluation isn't one-time: production agents need ongoing monitoring, tracing, and feedback loops.
• Sandboxing, rate limiting, escalation paths, fallback policies, and clear failure modes all matter.
• The production lifecycle of an agent mirrors software engineering: design → build → test → observe → iterate.

The final Q&A panel featuring Kanchana Patlolla, Anant Nawalgaria, Elia Secchi, Dr. Sokratis Kartakis, Saurabh Tiwary, and Will Grannis brought industry perspective on deploying agents responsibly, keeping safety at the forefront, and designing for long-term reliability.

A special mention to Laxmi Harikumar: the hands-on notebooks throughout this week have been outstanding. Clear, practical, and incredibly well delivered.

It's been an amazing learning journey across 5 days: prompts → tools → workflows → memory → quality → production. Excited to apply these concepts in real projects and looking forward to the capstone project!

#AI #Kaggle #GoogleAI #AgenticAI #LLM #AutonomousAgents #MachineLearning #ProductionAI
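A tiny sketch of the guardrail ideas listed above: rate limiting, bounded retries with backoff, and an explicit fallback path instead of an unhandled failure. The call_agent() stub, limits, and messages are illustrative assumptions, not the whitepaper's reference implementation.

```python
# Sketch of basic production guardrails around an agent call: a crude rate
# limit, bounded retries with backoff, and a graceful fallback as the explicit
# failure mode. call_agent() is a stub standing in for the real agent backend.
import time

MAX_RETRIES = 3
MIN_SECONDS_BETWEEN_CALLS = 0.5
_last_call = 0.0

def call_agent(prompt: str) -> str:
    raise TimeoutError("model backend unavailable")  # stub: pretend the backend is down

def guarded_call(prompt: str) -> str:
    global _last_call
    for attempt in range(1, MAX_RETRIES + 1):
        # Rate limit: never fire requests closer together than the floor.
        wait = MIN_SECONDS_BETWEEN_CALLS - (time.time() - _last_call)
        if wait > 0:
            time.sleep(wait)
        _last_call = time.time()
        try:
            return call_agent(prompt)
        except TimeoutError as err:
            print(f"attempt {attempt} failed: {err}")
            time.sleep(0.5 * attempt)  # simple backoff before retrying
    # Explicit failure mode: degrade gracefully instead of crashing the caller.
    return "Sorry, the assistant is unavailable right now. Your request was logged for follow-up."

print(guarded_call("Summarize today's incident reports"))
```

In a real deployment the retry budget, rate limit, and fallback text would be policy decisions, and the failures would be emitted to your tracing and alerting stack rather than printed.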
-
Sally O'Malley explains the unique observability challenges of LLMs and provides a reproducible, open-source stack for monitoring AI workloads. She demonstrates deploying Prometheus, Grafana, OpenTelemetry, and Tempo with vLLM and Llama Stack on Kubernetes. Learn to monitor critical cost, performance, and quality signals for business-critical AI applications. https://lnkd.in/gTAiMKb9
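In the same spirit, here is a minimal sketch of instrumenting an LLM call with OpenTelemetry so latency and token counts show up as span attributes. It uses the console exporter so it runs standalone; in the stack from the talk you would swap in the OTLP exporter pointed at a collector with Tempo and Grafana behind it. The fake_llm_call() numbers and the attribute names are illustrative.

```python
# Minimal OpenTelemetry sketch: wrap an LLM/agent call in a span and record
# cost/performance signals as attributes. Console exporter keeps it runnable
# locally (pip install opentelemetry-sdk); swap in the OTLP exporter for a
# real collector + Tempo setup.
import time
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("agent-demo")

def fake_llm_call(prompt: str) -> dict:
    time.sleep(0.1)  # stand-in for real model latency
    return {"text": "stub answer", "prompt_tokens": 42, "completion_tokens": 17}

with tracer.start_as_current_span("llm.generate") as span:
    result = fake_llm_call("Summarize the incident report")
    # Latency comes from the span itself; token counts feed cost dashboards;
    # the model name lets you slice metrics by backend.
    span.set_attribute("llm.model", "llama-3.1-8b")
    span.set_attribute("llm.prompt_tokens", result["prompt_tokens"])
    span.set_attribute("llm.completion_tokens", result["completion_tokens"])
```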
-
Google is releasing a 5-Day AI Agents course on Kaggle. Their last course had over 420,000 learners.

This one covers:
> Agents and their architectures
> Tools & MCP integration
> Context Engineering
> Evaluating the quality of agents
> Prototype to Production

The course goes live 10-14 November. You can register here: https://lnkd.in/dszJ9gfT