OpenAI for Startups

Technology, Information and Internet

AI for ambitious builders.

About us

Website
https://openai.com/startups/
Industry
Technology, Information and Internet

Updates

  • OpenAI for Startups reposted this

    I’m leading an OpenAI Build Hour: Agent Memory Patterns!
    📅 Dec 3 • 10–11 AM PST
    🔗 Register: https://lnkd.in/guSX5pd2
    AI agents don’t just reason; they remember. Join me to explore the memory patterns that create the “magic moment” where agents feel personal, persistent, and genuinely helpful. We’ll cover:
    - Why memory matters: stability, personalization, and long-running workflows
    - Short-term memory patterns: sessions, trimming, compaction, summarization
    - Long-term memory patterns: state objects, structured notes, memory-as-a-tool
    - Architectures: token-aware sessions, memory injection, guardrails, memory triggers
    - Live demo: building an end-to-end agent with dynamic short- and long-term memory
    - Best practices: avoiding context poisoning, bursts, noise, and conflict
    I’ll be joined by my amazing colleague Brian Fioca for live Q&A. Come build the next generation of agentic systems with us! 🚀

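    Two of the short-term patterns named above, trimming and compaction via summarization, can be sketched in a few lines. This is an illustrative sketch, not code from the Build Hour; the `Session` class and `summarize` stub are hypothetical, and a real system would replace `summarize` with a model call.

```python
# Sketch of short-term memory: keep recent turns verbatim, fold older
# turns into a running summary once the session exceeds a turn budget.
from dataclasses import dataclass, field


def summarize(summary: str, turns: list) -> str:
    # Placeholder: a real system would call a model to condense `turns`.
    joined = "; ".join(f"{role}: {text}" for role, text in turns)
    return (summary + " | " + joined).strip(" |")


@dataclass
class Session:
    max_turns: int = 6
    summary: str = ""
    turns: list = field(default_factory=list)  # (role, text) pairs

    def add(self, role: str, text: str) -> None:
        self.turns.append((role, text))
        if len(self.turns) > self.max_turns:
            # Compact the oldest half into the summary, keep the rest.
            cut = self.max_turns // 2
            old, self.turns = self.turns[:cut], self.turns[cut:]
            self.summary = summarize(self.summary, old)

    def context(self) -> list:
        """Messages for the model: running summary first, then recent turns."""
        prefix = [("system", f"Conversation so far: {self.summary}")] if self.summary else []
        return prefix + self.turns
```

    The design choice to illustrate: the context sent to the model stays bounded no matter how long the conversation runs, which is what makes long-running workflows stable.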
  • GPT-5.1-Codex-Max, our most advanced agentic coding model to date, is now available in Codex. API availability is coming soon. It’s faster, more capable, and more token-efficient, with native long-running task support via a new built-in capability called compaction and a new extra-high reasoning effort option. The first Codex model trained in Windows environments, it now replaces GPT-5.1-Codex as the default model in Codex surfaces.
    Major speed + token-efficiency gains:
    ⚡️ Matches GPT-5.1-Codex performance on SWE-bench Verified with ~30% fewer thinking tokens
    ⚡️ Real-world coding tasks show faster execution, fewer tool calls, and lower costs
    Long-running autonomy:
    ⏱️ Compaction enables multi-hour reasoning without hitting context limits
    ⏱️ Codex automatically manages context windows to sustain progress
    More details at link in comments.

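    The compaction idea above can be shown with a conceptual sketch: when the working context nears a token budget, fold the earlier steps into a condensed summary so the task can keep running. This is an illustration of the general technique, not GPT-5.1-Codex-Max’s internal implementation; the budget, token counter, and `compact` helper are all stand-ins.

```python
# Conceptual compaction loop: history grows step by step, and is
# collapsed whenever it exceeds a (hypothetical) token budget.
BUDGET = 200  # stand-in token budget


def tokens(text: str) -> int:
    return len(text.split())  # crude stand-in for a real tokenizer


def compact(history: list[str]) -> list[str]:
    # Keep the latest step verbatim; fold everything earlier into one line.
    # A real system would summarize with a model instead.
    summary = f"[compacted {len(history) - 1} earlier steps]"
    return [summary, history[-1]]


def run_steps(steps: list[str]) -> list[str]:
    history: list[str] = []
    for step in steps:
        history.append(step)
        if sum(tokens(h) for h in history) > BUDGET:
            history = compact(history)
    return history
```

    The point of the sketch: total context stays near the budget however many steps run, which is what lets an agent sustain multi-hour tasks without hitting context limits.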
  • “In 2025, everyone can build fast. That’s the exciting part, and the scary part.” When we caught up with Ryan Carson, Christel Buchanan, Mihir Garimella, and Aakash Adesara at DevDay, we asked them what a real moat looks like when speed is the baseline. Here were their key insights:
    1️⃣ Taste matters. You can use the same ingredients but get very different outcomes.
    2️⃣ The shape of your product matters as much as the product itself: how does your solution fit into your users’ workflows?
    3️⃣ Go deep in one vertical. Solving one domain well beats going broad too early.
    4️⃣ Keep customers at the center, using each release to deliver more value to real users who already care.
    We want to know: what’s your moat in 2025?

  • An updated GPT-5.1 build guide is now available, designed by Hillary Bush & Prashant M. to be a practical reference for teams integrating GPT-5.1 into real workloads. The guide covers:
    🟦 How to migrate to the Responses API while keeping orchestration lightweight
    🟦 Ways to structure agent workflows so tool use, state, and error paths are easier to trace
    🟦 A simple loop for prompt tuning and validating changes against evals
    🟦 How reasoning-effort and verbosity settings influence consistency and latency in practice
    🟦 Operational habits that help maintain predictable costs as usage grows
    What are you finding as you start testing 5.1 in your stack?
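    For teams starting the Responses API migration mentioned above, the request shape looks roughly like the sketch below. It is built as a plain dict so it runs without a network call; the tool definition (`list_merged_prs`) is a hypothetical example, not from the guide, and with the official SDK the dict would be sent via `client.responses.create(**request)`.

```python
# Minimal Responses API request shape: structured input, one function
# tool, and an explicit reasoning-effort setting to tune latency.
request = {
    "model": "gpt-5.1",
    "input": [
        {"role": "system", "content": "You are a release-notes assistant."},
        {"role": "user", "content": "Summarize the last three merged PRs."},
    ],
    "tools": [
        {
            "type": "function",
            "name": "list_merged_prs",  # hypothetical tool for illustration
            "description": "Return recently merged pull requests.",
            "parameters": {
                "type": "object",
                "properties": {"limit": {"type": "integer"}},
                "required": ["limit"],
            },
        }
    ],
    "reasoning": {"effort": "low"},  # trade consistency vs. latency per the guide
}
```

    Keeping the request a plain, inspectable structure like this is one way to keep orchestration lightweight: state and tool definitions stay visible in one place rather than buried in framework layers.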

  • Up and to the right hits different when it’s the Gartner Magic Quadrant.

    Giancarlo “GC” Lionetti, CCO @ OpenAI; former CRO @ Zapier, CMO @ Confluent, VP Growth @ Dropbox, Atlassian alum:

    OpenAI was named an Emerging Leader in the 2025 Gartner® Innovation Guide for Generative AI Model Providers. It’s a nice milestone – and another sign of something we’re seeing every day: AI becoming core infrastructure for how work gets done. More than 1M businesses now use OpenAI, and they’re moving from experiments to deployment faster than ever – shipping agents, multi-modal workflows, and intelligent apps that drive real ROI. Grateful for the recognition, but honestly even more energized by what our customers are building. Thanks to everyone pushing the boundaries with us. More thoughts here: https://lnkd.in/gHe_Sue2

  • "GPT-5.1 is the first model I've tried that truly feels both instant and meticulous. It's turned the Augment Code agent into both a sprinter and a marathoner." Same agent, same prompt, same environment: two very different outcomes. We loved seeing 5.1 put to the test. What's your experience been?

    Augment Code:

    GPT-5.1 is now live in Augment Code. It's our strongest model yet for complex reasoning tasks, such as identifying and fixing bugs or making multi-file edits. Rolling out to users now. We’re excited for you to try it!

  • A quick look at GPT-5.1 inside Warp, direct from Zach Lloyd, founder and CEO. In this workflow, GPT-5.1 picks up a specific PR comment, unifies the parsing logic for multiple Warp link types, generates the diffs, and validates the changes locally.
    ⚡ Understands PR context quickly
    🧭 Stays steerable while following constraints and preserving intent
    🛠️ Delivers reliable multi-file diffs for real refactors
    🔁 Supports an iterative, stable loop to build, test, and verify
    Warp is making GPT-5.1 the default for new users. Curious what you’d throw at it first: debugging, refactors, or full-flow agent loops?

  • GPT-5.1 is now in the API. It’s faster, more steerable, and better at coding, and it ships with practical new tools. If you’re building apps or agents where intelligence, speed, and cost matter, GPT-5.1 should feel like a meaningful upgrade.
    ✨ Adaptive reasoning that uses fewer tokens on simple steps and spends more time when the task is hard
    ⏱️ “No reasoning” mode, now the default reasoning_effort setting, so the model responds faster on simple tasks
    🗂️ Extended prompt caching, which keeps prompts active for up to 24 hours, reducing cost and latency in long-running interactions
    🧭 A more steerable coding personality that produces cleaner diffs, shows clearer intent, and does less overthinking
    ⚡ Snappier interactive loops for quick edits, shell-style interactions, and iterative refinement
    🧩 More predictable agent behavior that maintains task focus and handles long tool-call sequences consistently
    We’re also shipping two new tools: apply_patch for reliable, freeform code edits, and a shell tool for executing commands in controlled loops. Blog and resources in the first comment ↓

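    The features above translate into a request along these lines, again built as a plain dict so it runs without a network call. Treat the field names as assumptions to verify against the API reference, in particular `prompt_cache_retention` for extended caching and the built-in `apply_patch` and `shell` tool types.

```python
# Sketch of a GPT-5.1 request using the announced features:
# no-reasoning mode, extended prompt caching, and the two new tools.
request = {
    "model": "gpt-5.1",
    "input": "Rename `getUser` to `fetchUser` across the repo.",
    "reasoning": {"effort": "none"},  # fastest path for simple tasks
    "prompt_cache_retention": "24h",  # extended prompt caching (assumed field name)
    "tools": [
        {"type": "apply_patch"},  # freeform code edits
        {"type": "shell"},        # command execution in a controlled loop
    ],
}
```

    Note the trade-off the post describes: `"effort": "none"` is the speed-first default, while higher efforts buy more deliberation on hard tasks at the cost of latency and tokens.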
  • Join our next Build Hour: Agent RFT, led by Théophile Sautory (Applied AI) and Will Hang (Fine-Tuning), to see how teams are training reasoning models to use tools, handle context, and learn through real-time feedback. You’ll learn:
    🧠 The key differences between base RFT and Agent RFT
    🔧 When to use Agent RFT, and why it requires the Responses API
    ⚙️ How to set up tasks with datasets, tools, and graders
    You’ll leave with practical examples, ready-to-use resources, and a clearer path to frontier agent performance.
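    The graders mentioned above are the heart of RFT: each maps a model sample to a reward, typically in [0, 1], that the fine-tuning loop optimizes. The token-overlap grader below is a hypothetical illustration of that contract, not OpenAI’s grader schema.

```python
# Hypothetical RFT-style grader: exact match scores 1.0, partial
# credit for token overlap with the reference answer.
def grade(sample: str, reference: str) -> float:
    got = set(sample.lower().split())
    want = set(reference.lower().split())
    if not want:
        return 0.0
    return len(got & want) / len(want)
```

    In practice grader design dominates results: a grader that rewards the wrong proxy will be optimized toward that proxy, so it is worth validating graders on known-good and known-bad samples before training.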

