Amazon is launching the next chapter of the Amazon Nova AI Challenge — an annual university competition advancing the science of responsible, real-world AI. The 2026 Challenge: Trusted Software Agents asks student teams to build and evaluate AI agents that can plan, build, and test software safely and reliably. As generative AI expands from code generation to complex application development, this challenge focuses on improving both utility and trustworthiness in tandem. Ten university teams will compete to design systems that reflect real engineering workflows, demonstrating measurable progress in both performance and safety. Applications open November 10, 2025. Learn more: http://amzn.to/43Ef8cZ
Amazon Nova AI Challenge 2026: Trusted Software Agents
-
Three months ago, I started building E8-Kaleidescope-AI Memory. It took four weeks to get from conception to version M16. Now at M25.1 after eight weeks of dedicated coding, it’s become something unexpected: a working prototype of the “agentic” and “self-theorizing” memory systems that Google and other major labs are actively researching. The parallels are striking:

**Self-Theorizing**: E8 generates and rates its own hypotheses about novelty and emergence. Google’s “AI co-scientist” does the same thing, using automated feedback to iteratively generate and refine hypotheses in a self-improving cycle.

**Introspection**: E8 reasons about its own internal code and structure. Google’s Gemini models use internal thinking processes and can provide thought summaries about their own reasoning.

**Self-Modifying Architecture**: E8 refines its own structure as it learns. Google’s November 2025 “Nested Learning” project introduced “Hope,” a self-modifying architecture that optimizes its own memory through self-referential processes.

**Emergence & Complex Systems**: E8 is built on principles of emergence and phase transitions. Google researchers are now explicitly analyzing AI capabilities through this same lens of complex systems science.

I’m not claiming equivalence with Google’s resources. But I built a functional prototype in eight weeks that explores the same frontier concepts being pursued by billion-dollar research teams. That’s what makes this moment remarkable: modern LLMs have reached the point where a clear vision and deep understanding can prototype cutting-edge ideas that previously required entire specialized teams. The tools have caught up to the imagination.

#ArtificialIntelligence #AI #MachineLearning #Innovation #Technology
-
California State University (CSU) is launching a first-of-its-kind public-private initiative with leading tech companies (including OpenAI, Microsoft, Google, NVIDIA, and more) and the California Governor’s Office to become the nation’s first and largest AI-empowered university system. The initiative will make AI tools and training available to all 460,000+ students and 63,000+ faculty and staff across all 23 CSU campuses, enabling them to use AI (e.g., ChatGPT Edu) in teaching, learning, research, and workforce preparation. https://lnkd.in/g_j4qQGs
-
𝐃𝐚𝐲 𝟑 & 𝟒 𝐨𝐟 𝟓 – 𝐀𝐈 𝐀𝐠𝐞𝐧𝐭𝐬 𝐈𝐧𝐭𝐞𝐧𝐬𝐢𝐯𝐞 𝐂𝐨𝐮𝐫𝐬𝐞 𝐰𝐢𝐭𝐡 𝐆𝐨𝐨𝐠𝐥𝐞

As the intensive program progresses, the last two days focused on two major pillars of building robust AI agents: Context Engineering and Agent Quality.

Day 3 explored how to make agents stateful using Sessions and Memory, enabling them to maintain context, personalize interactions, and support coherent multi-turn conversations. Through the codelabs, we implemented working memory, long-term memory, and dynamic context assembly using ADK (a small conceptual sketch of this split follows below).

Day 4 shifted to evaluation and observability, introducing Logs, Traces, and Metrics to help interpret an agent’s decision-making. We also explored scalable evaluation methods like LLM-as-a-Judge and human-in-the-loop (HITL) review to assess response quality and tool usage.

These modules highlighted how state, visibility, and evaluation shape agents into reliable, real-world systems.

📂 Notes and learnings: https://lnkd.in/eaCzCui8

#AI #Agents #Google #MachineLearning #LearningJourney #AIAgents #Kaggle #Observability #AIQuality
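A minimal sketch of the session-vs-memory split mentioned above, in plain Python. This is not the ADK API; the Session, LongTermMemory, and assemble_context names are hypothetical, and the keyword match stands in for real semantic retrieval.

```python
# Conceptual sketch only -- not the ADK API. Illustrates the split between
# session (working) memory and long-term memory, plus dynamic context assembly.
from dataclasses import dataclass, field


@dataclass
class Session:
    """Working memory: the turns of the current conversation."""
    turns: list = field(default_factory=list)

    def add_turn(self, role: str, text: str):
        self.turns.append({"role": role, "text": text})


@dataclass
class LongTermMemory:
    """Durable facts that survive across sessions (hypothetical store)."""
    facts: list = field(default_factory=list)

    def remember(self, fact: str):
        self.facts.append(fact)

    def recall(self, query: str, k: int = 3):
        # Naive keyword match stands in for real semantic retrieval.
        words = query.lower().split()
        hits = [f for f in self.facts if any(w in f.lower() for w in words)]
        return hits[:k]


def assemble_context(session: Session, memory: LongTermMemory, user_query: str) -> str:
    """Dynamic context assembly: only relevant memories plus recent turns enter the prompt."""
    relevant = memory.recall(user_query)
    recent = session.turns[-5:]  # keep the prompt small
    lines = ["[long-term memory]"] + relevant
    lines += ["[session]"] + [f"{t['role']}: {t['text']}" for t in recent]
    lines += ["[user]", user_query]
    return "\n".join(lines)


if __name__ == "__main__":
    mem = LongTermMemory()
    mem.remember("User prefers Python examples.")
    sess = Session()
    sess.add_turn("user", "Help me design an agent.")
    print(assemble_context(sess, mem, "Show me a Python example for the agent."))
```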
-
𝐃𝐚𝐲 𝟓 𝐨𝐟 𝟓 – 𝐏𝐫𝐨𝐭𝐨𝐭𝐲𝐩𝐞 𝐭𝐨 𝐏𝐫𝐨𝐝𝐮𝐜𝐭𝐢𝐨𝐧 (𝐆𝐨𝐨𝐠𝐥𝐞 𝐀𝐈 𝐀𝐠𝐞𝐧𝐭𝐬) The final day focused on how to take AI agents into real production environments. We explored deployment workflows, CI/CD practices, and scaling strategies that ensure reliability at the enterprise level. The core takeaway was the A2A Protocol, enabling agents to communicate across systems and teams. Through the codelabs, I built agents that expose A2A endpoints and integrated remote agents as if they were local. A strong finish to a powerful 5-day learning experience. 📂 Notes: https://lnkd.in/eaCzCui8 #AI #Agents #Google #LearningJourney #AIAgents #A2A
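To make the "remote agents as if they were local" idea concrete, here is a conceptual sketch of exposing an agent over HTTP and wrapping the remote endpoint behind a local-looking interface. This is not the official A2A SDK or message schema; the /task endpoint and the RemoteAgent wrapper are hypothetical names used only for illustration.

```python
# Conceptual sketch, not the official A2A SDK or protocol schema. It only
# illustrates the idea: an agent exposed over HTTP can be wrapped so remote
# and local agents are called through the same interface.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib import request as urlrequest


def local_agent(task: str) -> str:
    """Stand-in for a real agent; returns a trivial answer."""
    return f"Handled task: {task}"


class AgentHandler(BaseHTTPRequestHandler):
    """Exposes the local agent at POST /task (hypothetical endpoint name)."""

    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        payload = json.dumps({"result": local_agent(body["task"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)


class RemoteAgent:
    """Client-side wrapper: call a remote agent as if it were a local function."""

    def __init__(self, base_url: str):
        self.base_url = base_url

    def __call__(self, task: str) -> str:
        req = urlrequest.Request(
            self.base_url + "/task",
            data=json.dumps({"task": task}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urlrequest.urlopen(req) as resp:
            return json.loads(resp.read())["result"]


if __name__ == "__main__":
    # Serve the agent with: HTTPServer(("localhost", 8001), AgentHandler).serve_forever()
    # Then, from another process:
    #   agent = RemoteAgent("http://localhost:8001")
    #   print(agent("summarize the release notes"))
    pass
```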
-
Day 3 — Google AI Agents Intensive × Kaggle

Day 3 was another formative and engaging day. In fact, yesterday’s theme brought a significant shift in how I understand intelligent systems: Context Engineering – Sessions and Memory.

The whitepaper and codelabs introduced a deeper layer of agent design, where an agent is no longer a static responder but a system that organises, selects, and remembers information coherently. Sessions hold the immediate conversation history. Memory preserves what is meaningful across sessions. Together, they create continuity, allowing an agent to understand not only what is happening now but also what has shaped this moment.

Working with the notebooks made this concept very clear. Building a stateful agent involves determining what belongs on the “workbench” for the current task and what should be stored as long-term knowledge for future interactions. The challenge is to manage this intelligently, without overwhelming the system or diluting relevance.

For my projects, the implications are substantial. In Heritage Lens, context engineering opens the possibility for richer, more grounded academic interactions, where the system can follow a researcher’s line of inquiry across time. In the AI Literacy Passport, memory serves as a consistent support mechanism for learners, respecting boundaries and ensuring that sensitive information is handled with care.

The ethical dimension becomes increasingly important in this context. Teaching an agent how to remember and what to retain is a design choice that influences trust, transparency, and the user’s sense of safety. It is not a technical detail. I see this as one of the most crucial responsibilities in building agentic systems.

Day 3 left me with a strong thought: An agent does not become more intelligent by remembering more, but by remembering wisely. I am looking forward to applying these insights as the course progresses.

Thank you again to the organisers, Kanchana Patlolla and Anant Nawalgaria, as well as the guests/speakers from Google: Steven Johnson, Kimberly Milam, and Julia Wiesinger. We also appreciate the external speaker, Jay Alammar from Cohere, the codelabs author, Sampath Kumar Maddula, and Kristopher Overholt for explaining the notebooks.

While waiting for the Day 4 stream on YouTube, I tried another video feature related to the Memory topic.

#GoogleAI #Kaggle #AIagents #ContextEngineering #Sessions #Memory #EthicalAI #LearningJourney #ADK #Gemini #NotebookLM #HeritageLens #AILiteracy #InsideChatGPT
-
Amazon has launched Year 2 of the Amazon Nova AI Challenge, targeting the next generation of agentic AI that plans, builds, and tests multi-step application changes across codebases and user-facing apps. The challenge emphasizes simultaneous progress on utility and safety: teams will be evaluated on both complex task completion and robust guardrails, with developer and red-team roles to surface real-world failures. Evaluations mirror daily engineering workflows—multi-step agentic development, real-world benchmarks, and security-focused red teaming—to drive practical, trustworthy advances in AI-driven software development. Applications open November 10, 2025 via YouNoodle; ten university teams will be selected to compete across the academic year with program resources, live tournaments, and public evaluations. 🔔 Follow us for daily AI updates! 📘 Facebook: https://lnkd.in/gxDt7PJa 📸 Instagram: https://lnkd.in/gmYfWDbF #AmazonNova #AgenticAI #TrustedAI #GenerativeAI #AIGenerated #CreatedWithAI
-
💡 Software engineers are rushing to skip the fundamentals of DSA and jump straight into GenAI. Because “who needs a solid understanding of recursion or linked lists when you have Claude Code, Cursor, or GitHub Copilot?” right? 😄

And then Samsung Research releases a paper titled 📄 “Less is More: Recursive Reasoning with Tiny Networks.”

They built a tiny AI model with just 7 million parameters — and it solves complex reasoning problems like Sudoku and ARC-AGI puzzles… using recursion. Yes, recursion — the very concept most of us struggled with in our DSA days.

Instead of throwing billions of parameters at a problem, the model simply calls itself repeatedly to refine its answers — thinking step by step, just like classic algorithmic reasoning.

So while many believe DSA is outdated, the next frontier of AI is literally built on it.

🧠 Lesson: Don’t skip the fundamentals. Because the future of GenAI might just be powered by the same recursion you once found “too theoretical.”

Recursion explained in one click:

#AI #GenAI #DSA #Recursion #SamsungResearch #MachineLearning #DeepLearning #SoftwareEngineering
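As a toy illustration of the recursive-refinement idea (not the Samsung TRM implementation), here is a tiny Python sketch in which one small step function is applied to its own output until the answer stops changing. The puzzle format and function names are made up for the example.

```python
# Toy illustration of recursive refinement, not the Samsung TRM implementation.
# A small "network" (here just a function) is applied to its own previous output
# until the answer stops changing or a step budget runs out.
def tiny_step(puzzle, answer):
    """One refinement pass (hypothetical): fill any cell with a single legal value left."""
    improved = dict(answer)
    for cell, candidates in puzzle.items():
        remaining = [c for c in candidates if c not in improved.values()]
        if cell not in improved and len(remaining) == 1:
            improved[cell] = remaining[0]
    return improved


def recursive_solve(puzzle, answer=None, depth=0, max_depth=16):
    """Call the same step on its own output until a fixed point is reached."""
    answer = answer or {}
    refined = tiny_step(puzzle, answer)
    if refined == answer or depth >= max_depth:
        return refined
    return recursive_solve(puzzle, refined, depth + 1, max_depth)


if __name__ == "__main__":
    # A 3-cell mini-puzzle: each cell lists its candidate values, all values distinct.
    puzzle = {"a": [1, 2, 3], "b": [2], "c": [1, 2]}
    print(recursive_solve(puzzle))  # {'b': 2, 'c': 1, 'a': 3}
```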
-
𝗜𝗻 𝗝𝘂𝘀𝘁 𝗮 𝗙𝗲𝘄 𝗗𝗲𝗰𝗮𝗱𝗲𝘀, 𝗖𝗼𝗺𝗽𝘂𝘁𝗶𝗻𝗴 𝗛𝗮𝘀 𝗘𝘃𝗼𝗹𝘃𝗲𝗱 𝗳𝗿𝗼𝗺 𝗮 𝗦𝗶𝗺𝗽𝗹𝗲 𝗘𝘅𝗲𝗰𝘂𝘁𝗼𝗿 𝘁𝗼 𝗮𝗻 "𝗜𝗻𝘁𝗲𝗹𝗹𝗶𝗴𝗲𝗻𝘁" 𝗖𝗼𝗹𝗹𝗮𝗯𝗼𝗿𝗮𝘁𝗼𝗿

I’ve always been more of a math person, and when I entered engineering school in the early 90s, I expected computer science to feel magical — almost intelligent. But my first lesson in algorithmics surprised me: a computer simply repeats what it’s told. Tireless, yes, but without the slightest initiative.

In the 90s, I discovered rule-based systems — the early forms of symbolic artificial intelligence, as described by Marvin Minsky in - The Society of Mind - and later formalized by Russell & Norvig in Artificial Intelligence: A Modern Approach.

Later, the pursuit of software productivity turned toward reusability: modular architectures, middleware, off-the-shelf components, and new development environments. These approaches brought real progress, yet the growing complexity of systems remained a major obstacle.

Then came the rise of cloud computing and container-based architectures, which finally realized the vision of modular and reusable systems imagined years before. They brought true flexibility, native scalability, and unprecedented collaboration between teams and environments.

And yet, it all traces back to an old dream — Alan Turing’s idea in - Computing Machinery and Intelligence - of a machine capable of “thinking.” With LLM-powered assistants, that evolution has become tangible. Productivity gains are real, with around a 15.9% reduction in software delivery costs. Computing is entering a new stage — it no longer just executes; it begins to assist, to suggest, and to collaborate.

At AWS, this transformation is embodied in systems such as Amazon 𝗤 𝗗𝗲𝘃𝗲𝗹𝗼𝗽𝗲𝗿 and 𝗞𝗶𝗿𝗼 — tools that bring large language models into the heart of software engineering. They help developers generate, understand, and modernize code, accelerating innovation and improving overall efficiency.

A new era is emerging, one where AI becomes a genuine co-creator of software. How are you experiencing this second revolution in computing?

#ArtificialIntelligence #AI #SoftwareEngineering #Cloud #LLM #AWS #Innovation #Productivity #DeepTech #HistoryOfComputing
-
I shipped in 4 hours what used to take me 2 days. Not because I got smarter. Because I stopped fighting AI and started partnering with it.

Last month at Amazon, I was debugging a distributed-system race condition in the middle of the night. Old me: 6 hours of Stack Overflow rabbit holes. New me: 45 minutes with AI.

The truth is, AI won't replace developers. Developers using AI will replace those who don't.

Here's how I use it daily:
→ Coding: AI handles boilerplate. I focus on business logic. 3 hours → 45 min.
→ Debugging: Paste a stack trace, get ranked solutions. 60% time saved.
→ Design reviews: AI catches issues before my team sees them.
→ Docs: No more procrastinating. AI converts comments to clean docs.
→ Learning: Personalized tutorials based on what I already know.

But here's what AI can't do:
❌ Understand user problems
❌ Make decisions with business context
❌ Navigate team dynamics
❌ Own the outcomes

AI is a 10x multiplier. But 10x zero is still zero. Your judgment? That's what it multiplies.

Two years ago → skeptical
One year ago → experimenting
Today → can't work without it

The question isn't if AI will change development. It already has. The question: Are you multiplying your skills or waiting to become irrelevant?

What's one way you're using AI in your workflow? 👇

#SoftwareEngineering #AI #ArtificialIntelligence #MachineLearning #Amazon #Coding #Programming #TechCareers #DeveloperLife #SoftwareDevelopment #Tech #Innovation #Productivity #CareerGrowth #LearnToCode #CodeNewbie #DevCommunity #TechTips #FutureOfWork #AITools
-
💡 Hey, we got Hope? Turns out, yes — and it comes from Google Research! But this “Hope” might just decrease hope for new grad engineers trying to keep up with how fast ML is evolving 😅 Google just dropped Nested Learning, a completely new paradigm for continual learning — where models don’t just learn new things, they keep old knowledge intact while evolving intelligently over time. Their prototype architecture, aptly named Hope, shows promising results in long-context reasoning and overcoming catastrophic forgetting — a problem that’s haunted ML models for years. This approach introduces a continuum memory system (CMS) — modules that update at different rates, similar to human short-term and long-term memory. It’s a step closer to machines that learn like humans do — balancing stability and adaptability. Now the big question — what happens to startups like Mem0, MemGPT, or others building memory-augmented frameworks for LLMs? If Hope scales well, it could absorb many of those memory innovation layers directly into the model architecture itself, rather than relying on external retrieval or RAG-style memory stores. Exciting times — both hopeful and humbling. Blog link: 🔗 https://lnkd.in/g__PJhJc
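A conceptual toy of the multi-timescale memory idea described in the post, not Google's Nested Learning or Hope code: a fast module updates every step while a slow module consolidates only occasionally, so recent detail and older knowledge evolve at different rates. The learning rates and consolidation interval are arbitrary assumptions.

```python
# Conceptual toy, not Google's Nested Learning / Hope code. It only illustrates
# the "continuum memory" idea: modules that update at different rates, like
# fast (short-term) and slow (long-term) memory.
import numpy as np

rng = np.random.default_rng(0)
dim = 8

fast_memory = np.zeros(dim)   # updated every step, tracks the recent stream
slow_memory = np.zeros(dim)   # updated rarely, consolidates what the fast memory saw

FAST_LR, SLOW_LR = 0.5, 0.05  # hypothetical learning rates
CONSOLIDATE_EVERY = 10        # slow module updates once per 10 steps

for step in range(1, 101):
    x = rng.normal(size=dim)                     # incoming observation
    fast_memory += FAST_LR * (x - fast_memory)   # fast module: exponential moving average
    if step % CONSOLIDATE_EVERY == 0:
        # Slow module absorbs the fast memory's current summary, drifting slowly,
        # so older knowledge is not overwritten by every new observation.
        slow_memory += SLOW_LR * (fast_memory - slow_memory)

print("fast memory norm:", np.linalg.norm(fast_memory))
print("slow memory norm:", np.linalg.norm(slow_memory))
```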
Machine Learning Engineer | Member of UTD Amazon Nova AI Trust Competition Team (Team ASTRO) Finalist Team
Awesome! Hope we join you guys in Seattle for the Final again.