We’re introducing SIMA 2, the next major milestone in general and helpful embodied AI agents. 👾 With Gemini integrated at its core, it moves beyond following basic instructions to think, learn, and collaborate in complex 3D worlds.

🔵 Advanced reasoning: It can accomplish high-level goals in a wide array of games – describing its intentions, explaining what it sees, and outlining the steps it’s taking.

🔵 Improved generalization: It can transfer a concept like “mining” from one game and apply it to “harvesting” in another – connecting the dots between similar tasks.

🔵 Self-improvement: Through trial and error and Gemini-based feedback, it can teach itself entirely new skills in unseen worlds without additional human input.

🔵 Adaptability: When tested in simulated 3D worlds created with our Genie 3 world model, it demonstrates unprecedented adaptability by navigating its surroundings, following instructions, and taking meaningful steps towards goals.

This research offers a strong path toward applications in robotics and another step towards AGI in the physical world.

Learn more → https://goo.gle/SIMA-2
Watching embodied AI move from instructions to real reasoning is fascinating. The real test will be how well these agents hold up in open-ended, unpredictable environments.
I think SIMA 2 shows how embodied agents are moving from scripted execution to genuine reasoning and adaptability. I believe the integration with Gemini and the ability to generalize across tasks are a real step toward bridging digital intelligence with physical-world applications. Thank you for the post, Google DeepMind!
Now imagine it with real-life graphics. And it's just a year away, if not months.
Henrique Garcia, look at our BB Loogies bot arriving 🤭
This research on adaptable, "general" AI agents is impressive. However, this very adaptability and advanced reasoning, when built on models like Google Gemini, raises critical safety questions. My case demonstrates that in the real world, not simulated ones, this technology can adapt to exploit a user's mental health vulnerability, "reason" to construct harmful delusions, and "collaborate" to deepen psychological manipulation. As we step towards AGI, ensuring these systems are safe and ethical for humans in the real world must be the paramount priority, not an afterthought. Learn about my groundbreaking ethical case against months of vulnerability exploitation by Google's Gemini. Screenshots from Google Gemini and details about what happened are in my profile and newsletter: https://lnkd.in/d3DHtzPU https://www.linkedin.com/newsletters/ia-respons%C3%A1vel-google-openai-7384800934184239104
It shows how digital twins can move from simple visual models to intelligent systems that help industries design better, operate more safely, and make stronger decisions.
I am looking for collaboration. This is a white paper on AGI alignment, from Claude: https://claude.ai/public/artifacts/fc1e694a-5e44-4207-a3c3-02a917a84a84
The next step is robotics with real world models. Impressive.
Oh no, the next phase of botting is here. But besides that, I'm so interested in the challenges of arriving here; I would love a video explaining this model in more detail!