We're pleased to announce that Nimbus has been selected for AWS's elite Building with Llama program, one of 30 companies chosen from over 1,000 applicants!
This program provides access to advanced AI models from Meta, running on AWS infrastructure, to accelerate our drug discovery efforts. We’re fine-tuning these models to strengthen our chemistry-aware AI infrastructure and help our scientists explore chemical space more efficiently: predicting molecular properties, assessing drug-like characteristics, evaluating synthetic routes, and analyzing compound-target interactions.
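For readers curious what that kind of property and drug-likeness profiling can look like in practice, here is a minimal, illustrative sketch. It assumes the open-source RDKit toolkit, which the post does not mention, and it is not Nimbus's actual pipeline.

```python
# Illustrative only: basic molecular property and drug-likeness checks of the
# kind mentioned above. RDKit is an assumption; the post does not say which
# tools Nimbus uses.
from rdkit import Chem
from rdkit.Chem import Descriptors, QED

def profile_molecule(smiles: str) -> dict:
    """Compute a few standard descriptors and a drug-likeness score."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")
    return {
        "mol_weight": Descriptors.MolWt(mol),           # molecular weight (Da)
        "logp": Descriptors.MolLogP(mol),                # lipophilicity estimate
        "h_bond_donors": Descriptors.NumHDonors(mol),
        "h_bond_acceptors": Descriptors.NumHAcceptors(mol),
        "qed": QED.qed(mol),                             # drug-likeness, 0..1
    }

if __name__ == "__main__":
    # Aspirin as a toy example
    print(profile_molecule("CC(=O)OC1=CC=CC=C1C(=O)O"))
```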
Our thanks to AWS Startups, Meta for Developers, and Azadeh Yazdan for this opportunity, and congratulations to our fellow program participants.
Well done Daniel Price, Leela Sriram Dodda, James Carter, Matthew Medina & the whole team! Pushing drug discovery bounds further to impact the medicines we design 💪🏼🙏🏼!
Exciting to see this shift towards SLMs. You can’t beat the latency of small models.
Here’s something we’ve found works really well, especially for creative tasks like debugging and coding.
Let an LLM on high-reasoning create a highly detailed plan that even an intern could implement. Do a design review, ask for detailed code changes and iterate on the plan until satisfied. Then let an SLM like Code Fast, Nano or Haiku run wild.
It’s by far the best user experience for balancing speed and performance.
Sure, the SLM might slip up and hallucinate a few tokens. Ask the LLM to debug and get SLM to implement. Rinse and repeat.
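As a concrete illustration of that loop, here is a minimal sketch: a large model drafts the plan, a small model implements it. The model names and the OpenAI-compatible Python client are placeholder assumptions, not a specific vendor's lineup.

```python
# A minimal sketch of the plan-with-a-big-model, implement-with-a-small-model
# loop described above. Model names are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PLANNER = "large-reasoning-model"   # hypothetical high-reasoning LLM
IMPLEMENTER = "small-fast-model"    # hypothetical SLM (a Haiku-class model)

def plan(task: str) -> str:
    """Ask the large model for an intern-proof implementation plan."""
    resp = client.chat.completions.create(
        model=PLANNER,
        messages=[{"role": "user",
                   "content": f"Write a detailed, step-by-step plan to: {task}"}],
    )
    return resp.choices[0].message.content

def implement(plan_text: str) -> str:
    """Hand the reviewed plan to the small model to write the code."""
    resp = client.chat.completions.create(
        model=IMPLEMENTER,
        messages=[{"role": "user",
                   "content": f"Implement exactly this plan:\n{plan_text}"}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    draft_plan = plan("add retry logic to our HTTP client")
    # ...review and iterate on draft_plan here, then:
    print(implement(draft_plan))
```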
SLMs are only going to grow in popularity. This paper (see comments) makes a pretty good case.
Devices like DGX Spark already provide around 1 petaFLOP of AI compute, which is sufficient to run a handful of SLMs on the edge.
Projects like Transformers.js are only going to grow in popularity as agents get integrated deeper into our browsing experience.
Partner at Menlo Ventures | Investing in AI startups!
This Stanford professor just raised a $50M Seed and has built a 10x faster and 10x cheaper AI coding model with the performance of Gemini Flash / Haiku.
Inception’s Mercury model can implement games like Connect 4 from scratch in ~2s. The speed feels magical, like going from dial-up to broadband. And it’s achieved purely through the novel use of diffusion models for code, which Stefano and his cofounders Aditya and Volodymyr (his former PhD students, now professors themselves) invented years ago.
Maybe there’s a world where we DON’T need billions in compute when you can run quality models this cheaply ($0.25/M input, $1/M output tokens). The next step is to get them to frontier quality.
AI veterans Andrew Ng, senpai Andrej Karpathy and Eric Schmidt (Innovation Endeavors) are also investing alongside us at Menlo Ventures, as are NVIDIA, Microsoft, Databricks and Snowflake.
A small testament to just how incredible this team is: the long list of incredible startups that have already come out of Stefano’s lab, with founders and key members of SSI, Pika, Luma, Together, Harmonic, Wispr, Liquid, Radical Numerics and many more!
Technical blog: https://lnkd.in/gtggzFFr
TechCrunch: https://lnkd.in/gCzd6qT3
From unlocking physical compute bottlenecks to applying frontier models to solve humanity’s largest challenges, the hardest conversations in AI need space to breathe. So a few weeks ago, BlueYard gathered founders, researchers, and builders on mountain trails in the Bavarian Alps for two days of deep discussion about the technology reshaping our world.
Climbing Hard AI Peaks was designed around four “camps.”
We started at Basecamp with conversations about physical foundations: chips, datacenters, energy, infrastructure.
At Camp Alpha, we got into platform choices: what models to build on, dependencies, trade-offs.
Camp Beta was about the hard middle: turning capability into actual product, GTM, defensibility.
And at the Summit, we talked about what human-AI coexistence actually looks like.
Thanks to our founders working on some of the hardest AI challenges (e.g. Corintis, Manex AI, Foresight Data Machines, Inpho, Rapidata, Transcripta Bio & others) and everyone who joined, including special guests from Anthropic, OpenAI, Google DeepMind, Google, and the extended BlueYard portfolio family. Special appreciation to our discussion leaders who brought both expertise and curiosity to every stop along the trail.
Create AI agents that don’t just think, they act.
At the Amazon Web Services (AWS) GenAI Hackathon in NYC, builders will design agents that ingest information, turn it into context, and execute end-to-end workflows.
A full day of hands-on building, tool chaining, and collaboration with others exploring what’s possible with LLMs.
Over $7.5K in prizes.
If you’re in NYC and building with AI, you’ll want to be there.
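For anyone new to the pattern, here is a minimal, dependency-free sketch of the tool-chaining idea described above: an agent threads the output of one tool into the next until the workflow completes. The tools are hypothetical stand-ins, and the fixed step list would be replaced by an LLM choosing tools at the event.

```python
# A toy illustration of tool chaining: each tool's output becomes the context
# for the next call. The tools and routing here are hypothetical; in a real
# agent, an LLM would decide which tool to call next.
from typing import Callable

def fetch_doc(query: str) -> str:
    return f"[document text about {query}]"

def summarize(text: str) -> str:
    return f"summary({text[:40]}...)"

TOOLS: dict[str, Callable[[str], str]] = {
    "fetch_doc": fetch_doc,
    "summarize": summarize,
}

def run_agent(goal: str, steps: list[str]) -> str:
    """Chain tools in order, threading each result into the next call."""
    context = goal
    for name in steps:
        context = TOOLS[name](context)
        print(f"{name} -> {context}")
    return context

if __name__ == "__main__":
    run_agent("quarterly sales report", ["fetch_doc", "summarize"])
```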
Startups Solutions Architecture at Amazon Web Services (AWS)
🚀 Join the AWS Generative AI Hackathon in NYC!
An event designed to bring together innovative developers and foster deeper community connections within the NYC tech and AI ecosystem.
📅 Event Details:
Date: November 14, 2025 | Time: 9:00 AM - 6:00 PM EST
Location: JFK27 Bryant Park office, NYC
🎖️ Prizes: Over $7.5K in cash prizes from our sponsors, including AWS, Deepgram, island.io, Coder, Thoropass, Spectro Cloud, Vanta, Merge, and Dagster Labs.
🤖 Challenge: Build cutting-edge AI agents or workflows that leverage Large Language Models (LLMs) and advanced AI applications to solve real-world challenges. Use tool chaining or rule-based flows to deliver high-impact solutions.
📢 Opportunity: Connect with fellow AI engineers, learn from industry experts, and gain valuable exposure for your work. Whether you’re part of a team or going solo, this is your chance to shape the future of AI agents!
🔗 Register Now: https://lnkd.in/e4Bc2Az6
Don’t miss out on this chance to win big and be a part of the NYC tech and AI ecosystem. We look forward to seeing your innovative solutions at the hackathon!
#AWS #GenerativeAI #Hackathon #NYC #AI #TechCommunity #Innovation #deepgram #vanta #thoropass #spectrocloud #merge #dagster #islandio #coder
Combining Skills and Experience to Build Meaningful Solutions.
Throughout her academic journey at Northeastern, Aditi Deodhar, MS’25, information systems, has grown her skill set and experience through impactful projects, an incredible co-op opportunity, and participation in rigorous hackathons.
Deodhar secured her co-op at Jutly Inc., an early-stage startup in Cambridge, as an AI engineer. During her time at Jutly, she focused on exploring and integrating emerging AI orchestration technologies such as LangGraph and LangChain. She built prototypes demonstrating how these tools could enhance the company’s AI-driven simulations and workflows. Her role also involved collaborating with teammates to design and optimize workflows, as well as documenting and sharing her findings. The experience provided her with both technical depth and the opportunity to contribute to the team’s collective learning.
At the Confluent AI Day 2025 Hackathon, Deodhar and her team earned 2nd place overall with their project “SecureStream AI”: a real-time streaming app that detects sensitive data (PII/PHI), assesses privacy risks and recommends sanitization before information reaches AI models. Built using Confluent Cloud, Kafka, Flink and MongoDB, the solution transformed live data into actionable, secure insights.
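As a rough illustration of the sanitization step (not SecureStream AI's actual implementation, which runs on Confluent Cloud, Kafka and Flink), here is a minimal regex-based redaction sketch that masks common PII before text reaches a model.

```python
# Conceptual stand-in for the sanitization step described above: mask common
# PII patterns before a message is forwarded to an AI model. The regexes are
# illustrative, not SecureStream AI's implementation.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def sanitize(message: str) -> tuple[str, list[str]]:
    """Return the redacted message plus the list of PII types found."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(message):
            found.append(label)
            message = pattern.sub(f"<{label.upper()}_REDACTED>", message)
    return message, found

if __name__ == "__main__":
    clean, risks = sanitize("Contact jane.doe@example.com, SSN 123-45-6789")
    print(clean)   # Contact <EMAIL_REDACTED>, SSN <SSN_REDACTED>
    print(risks)   # ['email', 'ssn']
```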
Deodhar aspires to continue working at the intersection of AI and systems, particularly in areas like healthcare, education, and social impact, where technology can truly make a difference.
Read the full article here: https://lnkd.in/eeKr5u92
#neu #InformationSystems #Spotlight #NortheasternUniversity #MGEN #Boston #Huskies
⚡ Tensormesh emerges from stealth with $4.5M seed funding to optimize #AI infrastructure
Tensormesh, a San Francisco-based AI infrastructure optimization company, has raised $4.5M in seed funding led by Laude Ventures to expand operations and development efforts.
Built on years of academic research in distributed systems and AI infrastructure, Tensormesh enables fast inference while maintaining full control over data and deployment. Its software captures and reuses intermediate data that other systems discard, delivering significant performance gains on infrastructure that customers own and control.
The company is led by Junchen Jiang, University of Chicago faculty member and co-creator of LMCache, a leading open-source KV caching project with 5K+ GitHub stars and 100+ contributors. LMCache is integrated with frameworks such as vLLM and NVIDIA Dynamo, and has been used by Bloomberg, Red Hat, Redis, Tencent, GMI Cloud, and WEKA.
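For context on why reusing that intermediate data matters, here is a toy sketch of prefix caching: the costly prefill state for a shared prompt prefix is stored and reused instead of recomputed. This is purely conceptual and does not reflect LMCache's or Tensormesh's APIs.

```python
# A toy illustration of the idea behind KV-cache reuse: keep the intermediate
# state computed for a prompt prefix and reuse it when the same prefix shows
# up again, instead of recomputing it. Not LMCache's or Tensormesh's API.
import time

_prefix_cache: dict[str, str] = {}

def expensive_prefill(prefix: str) -> str:
    """Stand-in for the costly prefill pass that produces KV state."""
    time.sleep(0.5)                      # pretend this is GPU work
    return f"kv-state({len(prefix)} chars)"

def run_prompt(prefix: str, question: str) -> str:
    if prefix in _prefix_cache:
        kv = _prefix_cache[prefix]       # cache hit: skip the prefill
    else:
        kv = expensive_prefill(prefix)   # cache miss: compute and store
        _prefix_cache[prefix] = kv
    return f"answer to {question!r} using {kv}"

if __name__ == "__main__":
    doc = "long shared system prompt / document context..."
    start = time.time()
    run_prompt(doc, "first question")    # pays the prefill cost
    run_prompt(doc, "second question")   # reuses the cached state
    print(f"elapsed: {time.time() - start:.2f}s (second call was nearly free)")
```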
Team:
Yihua Cheng - CTO
Kuntai Du - Chief Scientist
Ion Stoica - Advisor
Hui Zhang - Advisor
#AI #Infrastructure #StartupFunding #DistributedSystems #MachineLearning #Tensormesh #AIOptimization #TechInnovation #OpenSource
Yesterday we presented our Final Project at the Google & Reichman Tech School GenAI program!
It was an exciting milestone — the result of weeks of hard work, research, and incredible conversations with people from this amazing LinkedIn community.
Together with my partner Yonatan Dubovi, we built Jenny, a GenAI application designed for VCs and angel investors. It’s built entirely on what we learned throughout the course, and we’re proud of how far we’ve come.
We couldn’t have done it without the guidance of some truly exceptional people:
🔹 Ori Shapira – One of the most professional people I’ve ever met. In just a 10-minute conversation, you helped us sharpen our direction and thinking. Your depth of knowledge is unbelievable – thank you!
🔹 Yuval Belfer – Yuval doesn’t just teach GenAI – he rewires the way you think about it.
Learning from him turns complex theory into actionable insight. Thank you for inspiring us to think like builders and not just users!
My tip:
When you learn from people who actually create LLMs, you start to understand the real difference between training and just prompting. In such a fast-moving field, that kind of understanding is everything.
And a special thank you to Gal Cohen – your updates, messages, and check-ins made a huge difference!
This technology is transforming how VCs and angels work.
The only question is: which side of the change will you be on?
JOIN THE REVOLUTION
DM me if you're curious!
Miami’s startup and tech scene is booming, and Northeastern University Miami students are right in the middle of it.
See how partnerships with companies like Analytic Partners are creating real career pathways in analytics, AI, and tech at https://bit.ly/3Le9Nmd.
🌍 Next Hackathon is where?
At Cold Start: Distributed AI Hack Berlin, we’re exploring a different path. One where models learn collaboratively without sharing raw data (a minimal sketch of the idea follows after the event details below).
📅 On Nov 14 & 15, top developers, researchers, and innovators will come together to experiment with federated and decentralized learning, powered by:
🔹 exalsius cloud infrastructure management
🔹 Flower Labs federated learning framework (Flower)
🔹 AMD Developer Cloud
Tracks:
1️⃣ Federated Learning for Healthcare: advancing privacy-preserving ML
2️⃣ Open Industry Challenge: solving problems sourced from real industry datasets
With €5000 in prizes and support from leading partners including Technische Universität Berlin, the Einstein Center Digital Future, AI NATION, Science & Startups, and Merantix AI Campus.
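For anyone new to the setup, here is a minimal, framework-free sketch of the federated averaging idea behind the event: clients train on their own data and only model weights are shared and averaged, never the raw records. At the hackathon, the Flower framework provides this orchestration for real models.

```python
# A toy FedAvg sketch: each client does a local update on private data, and
# the server averages the resulting weights. Conceptual only; not the Flower
# API used at the event.
import numpy as np

def local_update(weights: np.ndarray, local_data: np.ndarray) -> np.ndarray:
    """Stand-in for one round of local training on a client's private data."""
    gradient = local_data.mean(axis=0) - weights   # toy 'gradient'
    return weights + 0.1 * gradient

def federated_round(global_weights, client_datasets):
    """One FedAvg round: clients train locally, server averages the results."""
    client_weights = [local_update(global_weights, data) for data in client_datasets]
    return np.mean(client_weights, axis=0)         # only weights leave the clients

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clients = [rng.normal(loc=i, size=(100, 3)) for i in range(3)]  # private data
    weights = np.zeros(3)
    for _ in range(20):
        weights = federated_round(weights, clients)
    print("aggregated model weights:", weights)
```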
#federatedlearning #AI #hackathon
Alembic Technologies just dropped a Series B that feels like a tectonic adjustment. $145M led by Prysm Capital and Accenture at a $645M valuation isn’t luck, it’s the market admitting someone solved a problem everyone pretended was fine. Marketing spent years drowning in correlation theater, and Alembic showed up with causal AI sharp enough to cut through the noise. Tomas Puig built a platform that treats data like a master distiller treats raw ingredients, turning chaos into clarity that actually ties spend to revenue instead of wishful thinking.
The founding trio sets the tone. Tomás Puig brings NASA beginnings, global creative wins, and multiple exits, a career arc that sounds like someone dared him to collect experiences. John A. built Twitter’s first security team when the platform was still the internet’s unruliest classroom. Seth Little adds the creative discipline that keeps the tech from feeling like a spreadsheet in costume. Investors like Prysm Capital, Accenture, WndrCo, Liquid 2 Ventures, NextEquity Partners, Friends & Family Capital, and SLW didn’t wander in. They saw the 15.7x valuation jump from Series A and knew they were backing gravity.
Here’s the lesson: help execs crack a question they lose sleep over, and adoption’s inevitable. Delta Air Lines didn’t want a prettier chart, it wanted the real impact of its Paris Olympics sponsorship, and Alembic tied it to $30M+ in cash sales. NVIDIA didn’t want attribution poetry; it wanted predictive clarity, and the platform delivered accuracy finance teams trust. When a C-suite gets answers rooted in cause and effect instead of statistical improv, the whole org moves differently.
Most startups brag about cloud credits. Alembic’s building its second private NVIDIA NVL72 SuperPOD with Equinix. That compute is a commitment to owning the causal AI stack, from spiking neural networks to composite AI pulling signals from TV, radio, #webanalytics, and foot traffic. It shows enterprise clients that #performance, #privacy, and uptime aren’t negotiable. That’s how you earn #Fortune200 and #Fortune500 trust with NVIDIA, Delta, BlackRock, Elevance Health, Mars, Texas A&M University, and North Sails.
The leadership bench seals it. Lloyd Taylor runs ops and security with Google and LinkedIn DNA. Abby Kearns brings 20+ years of enterprise software depth. Gregory Kennedy and Daniel McKnight pushed Alembic into boardrooms where budgets decide outcomes. With Justin Wexler and Laura Maness on the board, the company’s entering a scale phase that feels engineered, not improvised.
If you’re a CMO tired of defending budgets with hope, or a CEO still guessing where money goes, Alembic is the signal you can’t ignore. Causality’s operational, and the companies embracing it will own the next chapter.
#Startups #StartupFunding #VentureCapital #SeriesB #AI #Marketing #MarketingTech #MarTech #Analytics #Data #Enterprise #SaaS #Technology #Innovation #TechEcosystem #StartupEcosystem