Statsig

Software Development

Bellevue, Washington · 37,653 followers

Accelerate product growth with integrated feature flags, experimentation and analytics.

About us

Statsig is the all-in-one platform for product growth, helping businesses use data to ship faster and build better products. More than 3,000 companies, including OpenAI, Microsoft, and Notion, use Statsig to automate experiments, manage feature rollouts, and drill into insights with product analytics. Founded in 2021 by ex-Facebook engineers, Statsig offers 30+ SDKs and enterprise-grade tools for engineering, product, and data teams. Deploy in your own data warehouse or use the fully hosted solution for seamless scalability and control.

Website
http://www.statsig.com
Industry
Software Development
Company size
51-200 employees
Headquarters
Bellevue, Washington
Type
Privately Held
Founded
2021
Specialties
Experimentation, Data Analytics, Feature Flags, A/B Testing, Product Analytics, Session Replay, and Web Analytics


Updates

  • You've seen the logo, heard the name, used the product. But behind it all are the people who built it, the culture that shaped it, the story you don't often see: the people who show up, care about the craft, and build something that lasts. We captured some of it here, a glimpse behind the scenes. This is our story. Thank you to Vijaye Raji, Margaret-Ann Seger, Marcos Arribas, Timothy Chan, Tore H., Minhye Kim, Cat Lee, Jiakan Wang, and vineeth madhusudanan for sharing your perspectives on camera, and to GeekWire for helping us bring it to life. And to the entire Statsig team: thank you for the story.

  • Statsig reposted this

    Gergely Orosz

    Deepdives on software engineering, tech careers and industry trends. Writing The Pragmatic Engineer, the #1 technology newsletter on Substack. Author of The Software Engineer's Guidebook.

    I am so excited to announce The Pragmatic Summit, in partnership with Statsig. 11 Feb, SF. One day. ~400 software engineers, engineering leaders, and founders. The topic: how is AI reshaping software engineering, dev workflows, and the modern engineering stack? Seats are limited, and tickets are priced at $499, covering the venue, meals, and production; we're not aiming to make any profit from this event. Details on the first few speakers, and how to apply: https://lnkd.in/eYTCCX5N I hope to see many of you here!

  • We're excited to share something we've been quietly working on for a while now. Statsig has been partnering with Gergely Orosz to help bring his first-ever in-person event to life: The Pragmatic Summit, happening February 11, 2026 in San Francisco.
    We've been longtime fans of Gergely's work. The candor, clarity, and raw honesty in The Pragmatic Engineer has always resonated with us, especially his commitment to showing how real engineering gets done. This summit is an extension of that ethos: a day designed for engineering leaders who want conversations about how AI is reshaping software engineering, dev workflows, and the modern engineering stack.
    We'll be sharing more details on speakers, sessions, and the agenda in the coming weeks. But for now:
    📍 Save the date: February 11, 2026
    📝 Applications to attend are now open!
    💻 Learn more and apply here: https://lnkd.in/gFd3nYm9
    We're thrilled to be part of bringing this community together and to see Gergely's conversations move from online to in-person.

  • It's 2025. The world is moving faster with AI. But speed without precision leads to chaos. This month, we doubled down on strengthening our core tooling to power faster test → validate → learn → iterate loops. Today, we're excited to launch a set of new features in our platform that help teams move even faster, with confidence!
    1. Metric impacts for dynamic configs: Attach monitoring metrics directly to config changes to understand their impact instantly 📊 (available for Cloud & Warehouse Native customers)
    2. Lower-environment testing: View metric results when testing in lower environments to catch issues before they reach production 🚀 (available for Cloud customers)
    3. Sequential testing for feature gates: Get earlier signal from your feature rollouts and make decisions faster 🚅 (available for Cloud & Warehouse Native customers)
    4. Metric alerting: Create topline alerts for sharp changes to key metrics from your metrics catalog ‼️ (available for Cloud & Warehouse Native customers)
    We believe that measuring the success of what you're shipping matters now more than ever, and we're excited to keep delivering more powerful tools to help you move at the speed of AI 💨 Watch our announcement video with Margaret-Ann Seger, plus demos from Jairo Garciga, Kaz Haruna, Shubham Singhal, and Laurel Chan.

  • Pro-tip: Use conversion drivers to identify why users are dropping off 🧠 In this demo, Kamila M. shows how clicking any step of your funnel uncovers the statistically significant factors behind conversion. One click, and you know exactly which segments perform, which don’t, and where potential customers slip away.

  • Statsig reposted this

    Skye Scofield

    Head of Marketing & Operations @ Statsig

    The current generation of coding agents has finally unlocked "English as a programming language," enabling anyone to go from plain text to a pretty functional PR.
    This has been huge for me personally. I had never coded before joining Statsig, but with coding agents, I've been able to build conversion pages, create growth experiments, and even land some console code (sorry, Marcos). Agents have also mattered for our experienced engineers, slashing the time from idea → PR for small fixes and simple changes. It's never been easier to land good code!
    But there's a catch: even a well-engineered version of a "good idea" isn't guaranteed to move your metrics in the right direction. As agents take on more and more coding, the blocker for how fast teams can move won't be "can we ship it?" anymore... the real question will be "should we ship it?"
    Faster decision-making and iteration cycles will come down to how quickly you can get signal from the product changes you're making. This comes down to having rigor about how you measure success, and having the right tools to track your progress. As I've started landing more and more code, Statsig has been invaluable for this: from setting up conversion funnels to implementing web analytics to gating AI-generated PRs behind a Statsig feature flag (just in case I made a mistake).
    This is another reason why I'm so excited about where Statsig is going. As AI ships more and more code, having great tools to measure the success of what you're shipping will matter more than ever.
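    The flag-gating pattern described above can be sketched in a few lines. This is an illustrative stand-in, not the Statsig SDK: `check_gate`, the gate name, and the percentage-based bucketing are all invented for the sketch; the real SDKs evaluate gates against rules defined in the console.

```python
import hashlib

def check_gate(user_id: str, gate_name: str, pass_percentage: float) -> bool:
    """Deterministically bucket a user into a gate (illustrative stand-in,
    not the Statsig SDK). The same user always gets the same answer."""
    digest = hashlib.sha256(f"{gate_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000          # stable bucket in 0..9999
    return bucket < pass_percentage * 100     # 10.0% -> buckets 0..999

def render_checkout(user_id: str) -> str:
    # Ship the AI-generated path behind a gate so it can be ramped up
    # gradually, or rolled back instantly if it turns out to be a mistake.
    if check_gate(user_id, "new_checkout_flow", pass_percentage=10.0):
        return "new checkout"      # gated, AI-generated change
    return "legacy checkout"       # safe fallback path
```

    Because bucketing is a pure function of the user and gate name, a user who sees the new path keeps seeing it as the rollout ramps from 10% toward 100%.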

  • Pro Tip: Use Experiment Templates 🌟 Templates let you define and enforce a standardized framework for experiment creation - you can specify hypothesis requirements, metrics to measure, and the stats methodologies to use. Then team members can reuse these templates so everyone moves faster while still following org-wide standards 🚀

  • Statsig reposted this

    Emily Hallet

    Head of Product Marketing at Statsig

    I’m about a month into my time at Statsig, and I’ve spent most of it learning: the product, our customers, the shifting AI landscape, and how the best teams are organizing to win.
    One theme keeps coming up: speed alone is no longer a competitive advantage. Learning is. The pace of development is dizzying, and every leader I talk to is focused on the same challenge: how to move quickly with insight, not just velocity. The teams that learn fastest will be the ones that stay ahead.
    You can see this shift everywhere:
    • Conversations about hiring strategies circle back to curiosity and growth mindsets
    • Teams are reorganizing around faster cycles and stronger context-sharing
    • Leaders at companies like Brex and Figma are investing in tooling that attaches a learning loop to every release
    Some of the strongest teams I’ve worked with treated learning as seriously as shipping, whether it was early Uber Eats, where every week centered on what we were testing next, or Meta, where “Understand” roadmaps had the same rigor as product roadmaps.
    That’s why my favorite conversations here have been with customers who’ve told us that Statsig didn’t just help them run experiments: it helped them build a learning culture that defaults to testing, not guessing. Excited to help more companies build the systems and culture to do exactly that.

  • Statsig reposted this

    In the age of AI, shipping fast is table stakes; shipping with experimental evidence wins. My PM career over the last decade has been in big data, and I've seen how roadmaps can be steered by both "strong intuition" and hard data. Since joining Statsig, I've been continuously evolving my understanding of product management and the software development lifecycle (SDLC). Learning alongside others within Statsig, I have been using AI agents (both homegrown and external) with our feature development platform to build faster, de-risk launches, and double down on what truly moves metrics.
    As builders, we know how fundamental Atlassian products are to the SDLC and collaboration. Only recently did I grasp the scale of experimentation driving how they build. Atlassian's leadership champions a culture of data and experimentation where teams iterate rapidly across features and AI agents, applying rigor that transforms intuition into data-driven results.
    The feature development philosophy I now share is:
    - Feature flags (gates) = safe rollout + control
    - Experiments = causal evidence of impact
    - Analytics = measure outcomes & explain why
    You need all three: gate to ship safely, experiment to learn what works, then analyze to understand why and where to invest. This holds true when building with AI or developing new agents:
    1. Gate every meaningful change that affects customers
    2. Define evaluation grading rubrics and guardrail metrics up front
    3. Evaluate outputs offline, then promote to experiments for online data
    4. Scale what wins, terminate what doesn't; analyze, iterate, and repeat
    I'm curious: what are you building now? Comment below to trade Statsig customer stories and best practices. #productmanagement #experimentation #data #AI #featureflags #atlassian #statsig

    • How Atlassian has scaled experiments with Statsig over the last two years
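    The gate → experiment → analyze loop in the list above can be sketched end-to-end. A minimal self-contained sketch, assuming a 50/50 hash split and an in-memory event log as stand-ins for the experimentation platform and analytics pipeline (all names here are invented for illustration):

```python
import hashlib
from collections import defaultdict

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministic 50/50 split: hash user + experiment into control/test."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "test" if int(digest, 16) % 2 else "control"

class MetricsLog:
    """Tiny in-memory stand-in for an analytics pipeline."""
    def __init__(self) -> None:
        self.events = defaultdict(int)   # (variant, event) -> count

    def log(self, variant: str, event: str) -> None:
        self.events[(variant, event)] += 1

    def conversion_rate(self, variant: str) -> float:
        exposed = self.events[(variant, "exposure")]
        converted = self.events[(variant, "conversion")]
        return converted / exposed if exposed else 0.0

# The loop: expose users via the experiment, log outcomes, then analyze
# conversion per variant to decide whether to scale or terminate.
metrics = MetricsLog()
for user in ["u1", "u2", "u3", "u4"]:
    variant = assign_variant(user, "checkout_copy_test")
    metrics.log(variant, "exposure")
    # ... render the variant, then log(variant, "conversion") if they convert
```

    Comparing `conversion_rate("test")` against `conversion_rate("control")` is the "causal evidence of impact" step; a real platform adds significance testing on top of these counts.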


Funding

Statsig: 3 total rounds

Last round: Series C, US$ 100.0M

See more info on Crunchbase