DevSat survey: Measuring developer experience beyond system metrics

November 18, 2025 // 5 min read

System metrics show what is slow, but a deep-dive developer survey reveals why, turning hidden friction into actionable engineering insights.

The fastest way to slow down innovation is hidden friction in your engineering workflows. At GitHub, we know system data tells us only part of the story. To build an environment where developers can thrive, we also need to understand how developers experience their day-to-day work.

That’s why we run the Developer Satisfaction (DevSat) survey, which reveals perception-based signals and pain points that our tools can’t yet see. These signals let us address friction in engineering workflows before it slows down innovation.

Why developer experience needs both system and human signals

System metrics — like deployment frequency or change failure rate — are essential. But they don’t always reveal why developers feel slowed down or stressed. And to continuously improve our developer experience, we need a clear picture of where engineers hit bottlenecks so we can prioritize the improvements with the highest impact.

To balance the picture, our Engineering System Success Playbook (ESSP) emphasizes two complementary data types:

  • System-derived data: e.g. build times, release frequency

  • Perception-derived data: e.g. developer tooling satisfaction, ease of debugging

Both are necessary to understand engineering productivity at scale. With this in mind, we recently switched our Developer Satisfaction (DevSat) survey tooling, which allowed us to change how we distribute the survey: we now send it as a direct message (DM) embedded in our engineers’ daily workflows rather than as an email. The result: a 95% participation rate, giving us statistically significant input from almost every engineer at GitHub. That level of engagement gives us confidence that the results don’t just reflect isolated opinions, but reliable insight into where friction exists across our engineering organization.
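
To make this pairing concrete, here is a minimal sketch (in Python, with illustrative field names rather than our internal schema) of how a system-derived signal and a perception-derived signal for the same workflow area can be read side by side:

```python
from dataclasses import dataclass

@dataclass
class WorkflowSignal:
    """Pairs system-derived and perception-derived signals for one workflow area.

    Field names are illustrative, not an internal GitHub schema.
    """
    area: str                   # e.g. "build and test", "release"
    median_build_min: float     # system-derived: median CI build time in minutes
    change_failure_rate: float  # system-derived: fraction of changes causing failures
    satisfaction: float         # perception-derived: DevSat score, 0-100
    top_pain_point: str         # perception-derived: most-cited free-text theme

signals = [
    WorkflowSignal("build and test", 12.5, 0.04, 61.0, "flaky integration tests"),
    WorkflowSignal("release", 30.0, 0.02, 84.0, "manual approval steps"),
]

# A fast, reliable pipeline with a low satisfaction score is exactly the kind
# of mismatch that system metrics alone would miss.
for s in signals:
    if s.change_failure_rate < 0.05 and s.satisfaction < 70:
        print(f"{s.area}: telemetry looks healthy, but developers report "
              f"'{s.top_pain_point}'; worth a deeper look")
```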

“It’s actually when quality, velocity, and developer happiness are working in unison that organizations see their best results.” (ESSP)

How we run DevSat

To establish a baseline, we run our Developer Satisfaction (DevSat) survey quarterly. It is intentionally lightweight, with just a few rotating questions to keep the duration between five and 10 minutes. This, combined with embedding the survey into engineers’ workflows, helped us achieve a 95% participation rate, giving us one of the clearest, most representative pictures of developer experience across the company.

The survey covers end-to-end developer workflows, including:

  • Build and test experience
  • Ease of release
  • Development environment setup
  • Debugging workflows
  • Balance of deep work vs. meeting-heavy days

Because engineers work across very different domains — from AI research to database infrastructure — we also ask questions about their focus areas. This helps us uncover where friction happens across tools, teams, and handoffs.

Turning data into insights

Now that we’ve gathered both system and human signals through our 2025 DevSat survey, the next step is analyzing this information to understand where our biggest pain points lie. By looking at perception data alongside system metrics, we can make informed decisions about where to focus our investments to improve developer experience. Before deciding on specific actions, we use the following approaches to turn raw responses into actionable insights (a small sketch of this kind of analysis follows the list).

  • Industry benchmarking: We expected meeting load to be a major issue for a remote, global company. But our most recent survey showed GitHub engineers reported 19% fewer meeting-heavy days than the industry benchmark, freeing us to focus on bigger challenges.

  • Differentiating systemic patterns from team variations: In DevSat, we saw that platform teams maintaining older tooling carry a higher Keep the Lights On (KTLO) burden than feature teams, which often spend around 66.6% of their time on new capabilities, compared to an industry P90 of 65.5%.

  • Crisis scenarios: Occasionally, the survey uncovers extreme outliers, such as teams performing significantly below or above industry benchmarks (e.g., -10 or +22 points). These situations give us a chance to dig deeper and understand the underlying causes. Sometimes, lower scores reflect the complexity of the systems a team maintains, while higher scores can highlight strong practices—like effective incident handling or cross-team collaboration—which often rely on clear playbooks, supportive peers, and easy access to the right information. By investigating these outliers, we can make informed decisions about where to prioritize investments and improvements across the organization.
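
As an illustration of how this kind of benchmarking and outlier flagging can work, here is a small Python sketch. The benchmark values, thresholds, and function names are hypothetical and simplified, not our actual analysis pipeline:

```python
# Hypothetical sketch: compare survey results against industry benchmarks and
# flag topics that are far enough off to warrant a follow-up conversation.
# All numbers and thresholds below are illustrative.

INDUSTRY_BENCHMARKS = {
    "meeting_heavy_days": 42.0,        # % of days reported as meeting-heavy
    "time_on_new_capabilities": 65.5,  # industry P90 from the example above
}

OUTLIER_THRESHOLD = 10.0  # points away from benchmark worth digging into

def benchmark_deltas(scores: dict[str, float]) -> dict[str, float]:
    """Signed difference from the industry benchmark, per topic."""
    return {
        topic: round(value - INDUSTRY_BENCHMARKS[topic], 1)
        for topic, value in scores.items()
        if topic in INDUSTRY_BENCHMARKS
    }

def flag_outliers(deltas: dict[str, float]) -> list[str]:
    """Topics that deviate enough from the benchmark to investigate further."""
    return [topic for topic, delta in deltas.items() if abs(delta) >= OUTLIER_THRESHOLD]

team_scores = {"meeting_heavy_days": 34.0, "time_on_new_capabilities": 66.6}
deltas = benchmark_deltas(team_scores)
print(deltas)                 # {'meeting_heavy_days': -8.0, 'time_on_new_capabilities': 1.1}
print(flag_outliers(deltas))  # [] (nothing extreme enough to investigate here)
```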

Beyond benchmarks: Contextualizing impact

Not all friction comes from everyday workflows — sometimes, major company investments reshape the developer experience in unexpected ways. Running DevSat quarterly helps us track how these shifts affect engineers, and whether we’re moving in the right direction, so we can distinguish between the unavoidable “cost of doing business” and the areas where strategic investment lets us rebalance workloads toward innovation.

For example, while building GHEC with data residency, we needed to support a customer experience that spans multiple regions, without slowing down our ability to ship. That meant re-architecting both our tooling and our developer workflows.

To solve this, we built a Developer Portal where engineers could manage feature flags across all regions through a single UI, rather than juggling multiple user experiences per region. This design ensured compliance with data residency guidelines while keeping the developer workflow simple and fast. The result was a CSAT score of 80–83 for those tools, showing that a careful UX investment turned what could have been extra overhead into a positive developer experience.
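
The portal itself is internal, but the core idea (one request that fans a flag change out to every region’s data-resident deployment, so developers never juggle per-region consoles) can be sketched roughly as follows. All endpoints, names, and payloads here are invented for illustration, not GitHub’s internal API:

```python
# Hypothetical sketch of a single-pane feature-flag update that fans out to
# per-region, data-resident flag stores. Endpoints and field names are invented.
import requests

REGION_ENDPOINTS = {
    "us": "https://flags.us.example.internal/api/flags",
    "eu": "https://flags.eu.example.internal/api/flags",
}

def set_flag_everywhere(flag: str, enabled: bool, actor: str) -> dict[str, bool]:
    """Apply one flag change in every region and report per-region success.

    Each region keeps its own flag store (data residency), but the developer
    makes a single call through the portal.
    """
    results = {}
    for region, url in REGION_ENDPOINTS.items():
        resp = requests.put(
            f"{url}/{flag}",
            json={"enabled": enabled, "changed_by": actor},
            timeout=5,
        )
        results[region] = resp.ok
    return results

# Example: set_flag_everywhere("new_merge_queue_ui", True, "hubber@github.com")
```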

Avoiding anti-patterns

One of the most important principles in using DevSat is avoiding direct score comparisons across teams, because it can create the wrong incentives and lead to so-called anti-patterns (see the ESSP for common anti-patterns). For example, an AI R&D team’s release process is completely different from a database infrastructure team’s, and both are valid. Instead of raw comparisons, we use DevSat to:

  • Identify systemic challenges that affect multiple teams or the organization as a whole.

  • Spot outliers within a team or workflow that signal localized friction.

  • Enable context-driven conversations to surface root causes and determine where interventions will have real impact.

By pairing these insights with lightweight experiments and MVPs, we can check whether interventions actually move the needle on key outcomes.

Lessons learned

Running DevSat quarterly has reinforced a few guiding principles:

| Insight | Why it matters |
| --- | --- |
| Use both system and human-derived data | A holistic picture of organizational health to derive the most impactful actions |
| Embed surveys in existing workflows to drive higher participation rates | Increases signal quality and nuanced understanding |
| Benchmark for context | Helps set priorities beyond internal norms |
| Avoid cross-team score comparisons | Preserves meaningful work variance |
| Revisit metrics regularly | Keeps you aligned with business shifts and AI acceleration |

Following the ESSP to redesign how we measure success allowed us to identify not only where friction exists, but also where targeted interventions will have the greatest impact. DevSat now continues to serve as a baseline for continuous improvement, helping us invest in automation, optimize workflows, and enable engineers to spend more time delivering value rather than managing overhead.

Looking forward

Developer experience is never static. By running DevSat alongside system telemetry and using the ESSP framework, we’ve built a new baseline for continuous DX improvement.

Next, we’ll:

  • Track how our investments shift metrics over time, as we automate more work
  • Ensure new tools and regions scale sustainably without adding friction
  • Keep rebalancing workloads toward innovation as the pace of AI acceleration continues

The combination of system data and human insight gives us a clearer picture of engineering health — and helps us focus on improvements that matter most to developers.


👉 Want to learn more about the Engineering System Success Playbook? You can read the GitHub ESSP here.
