Developer Productivity Metrics

Explore top LinkedIn content from expert professionals.

  • View profile for Lenny Rachitsky

    Deeply researched product, growth, and career advice

    315,323 followers

    How to compare your eng team's velocity to industry benchmarks (and increase it):

    Step 1: Send your eng team this 4-question survey to get a baseline on key metrics: https://lnkd.in/gQGfApx4 You can use any surveying tool to do this—Google Forms, Microsoft Forms, Typeform, etc.—just make sure you can view the responses in a spreadsheet in order to calculate averages. Important: responses must be anonymous to preserve trust, and this survey is designed for people who write code as part of their job.

    Step 2: Calculate how you're doing.
    - For Speed, Quality, and Impact, find the average value of each question's responses.
    - For Effectiveness, calculate the percent of favorable responses (also called a Top 2 Box score) across all Effectiveness responses. See the example in the template above. (A scripting sketch of this calculation follows this post.)

    Step 3: Track velocity improvements over time. Once you've got a baseline, you can start to regularly re-run this survey to track your progress. Use a quarterly cadence to begin with. Benchmarking data, both internal and external, will help contextualize your results. Remember, speed is only relative to your competition. Below are external benchmarks for the key metrics. You can also download full benchmarking data, including segments on company size, sector, and even benchmarks for mobile engineers, here: https://lnkd.in/gBJzCdTg Look at 75th percentile values for comparison initially. Being a top-quartile performer is a solid goal for any development team.

    Step 4: Decide which area to improve first. Look at your data and, using benchmarking data as a reference point, pick the metric you believe will make the biggest impact on velocity. To decide what to work on, drill down to the data at a team level, and also look at qualitative data from the engineers themselves.

    Step 5: Link efficiency improvements to core business impact metrics. Instead of presenting CI and release improvement projects as "tech debt repayment" or "workflow improvements" without clear goals and outcomes, link efficiency projects directly back to core business impact metrics. Ongoing research (https://lnkd.in/grHQNtSA) continues to show a correlation between developer experience and efficiency, looking at data from 40,000 developers across 800 organizations. Improving the Effectiveness score (DXI) by one point translates to saving 13 minutes per week per developer, equivalent to about 10 hours annually. With this org's 150 engineers, improving the score by one point results in about 33 hours saved per week.

    For so much more, don't miss the full post: https://lnkd.in/grrpfwrK
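
    If you'd rather script Step 2 than build spreadsheet formulas, here is a minimal sketch in Python. It assumes the responses were exported as a CSV; the column names and Likert labels are hypothetical stand-ins for whatever the template actually produces.

    ```python
    import csv

    # Hypothetical CSV export: one row per anonymous respondent.
    # speed/quality/impact hold numeric answers; effectiveness_* columns
    # hold Likert answers whose top two choices count as favorable.
    FAVORABLE = {"agree", "strongly agree"}  # Top 2 Box

    with open("survey_responses.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    def average(column):
        values = [float(r[column]) for r in rows if r[column]]
        return sum(values) / len(values)

    # Speed, Quality, Impact: average of each question's responses.
    for col in ("speed", "quality", "impact"):
        print(f"{col}: {average(col):.2f}")

    # Effectiveness: percent favorable across ALL Effectiveness responses.
    eff_cols = [c for c in rows[0] if c.startswith("effectiveness_")]
    answers = [r[c].strip().lower() for r in rows for c in eff_cols if r[c]]
    top2box = 100 * sum(a in FAVORABLE for a in answers) / len(answers)
    print(f"Effectiveness (Top 2 Box): {top2box:.0f}% favorable")
    ```

    As a sanity check on the Step 5 arithmetic: 150 engineers × 13 minutes is 1,950 minutes, roughly 32.5 hours, which matches the post's "about 33 hours saved per week."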

  • View profile for Mark O'Neill

    VP Distinguished Analyst and Chief of Research

    11,238 followers

    Has Amazon cracked the code on developer productivity with its cost to serve software (CTS-SW) metric?

    Amazon applied its well-known "working backwards" methodology to developer productivity. "Working backwards" in this case means starting with the outcome: concrete returns for the business. This is measured by looking at the rate of customer-facing changes delivered by developers, i.e. "what the team deems valuable enough to review, merge, deploy, and support for customers", in the words of the blog post by Jim Haughwout: https://lnkd.in/eqvW5wbi This metric is different from other measures of developer productivity, which look only at velocity or time saved. Instead, "CTS-SW directly links investments in the developer experience to those outcomes by assessing how frequently we deliver new or better experiences. Some organizations fall into the anti-pattern of calculating minutes saved to measure value, but that approach isn't customer-centered and doesn't prove value creation."

    This aligns with Gartner's own research on developer productivity. In our 2024 Software Engineering survey, we asked what productivity metric organizations are using to measure their developers. We also asked about a basket of ten success metrics, including software usability, retention of top performers, and meeting security standards. This allowed us to find out which productivity metric was most associated with success. What we found was that *rate of customer-facing changes* is the metric most associated with success. Some other productivity metrics were actually *negatively associated* with success. But *rate of customer-facing changes* is what organizations should focus on. Sadly, our survey found that few organizations (just 22%) use this metric. I presented this data at our #GartnerApps summit [and the next summit is coming up in September: https://lnkd.in/ey2kpc2 ]

    Every metric gets gamed. So I always recommend "gaming the gaming". A developer might game the CTS-SW metric by focusing more on customer-facing changes. But... this is actually a good thing. You're gaming the gaming. We will be watching closely how this metric gets adopted alongside DORA, SPACE, and other metrics in the industry.
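
    Amazon hasn't published an exact formula, so the sketch below is illustrative only: it computes the rate of customer-facing changes from a hypothetical deployment log, plus a cost-per-change ratio in the spirit of "cost to serve." All field names and figures are invented.

    ```python
    from datetime import date

    # Hypothetical log: (ship_date, customer_facing) per shipped change.
    # "Customer-facing" = the team deemed it valuable enough to review,
    # merge, deploy, and support for customers (the blog post's phrasing).
    changes = [
        (date(2025, 6, 2), True),
        (date(2025, 6, 3), False),  # internal refactor, excluded
        (date(2025, 6, 9), True),
        (date(2025, 6, 12), True),
    ]
    weeks_observed, team_size = 2, 5

    shipped = sum(1 for _, cf in changes if cf)
    rate = shipped / (weeks_observed * team_size)
    print(f"customer-facing changes per developer-week: {rate:.2f}")

    # Illustrative "cost to serve" ratio (NOT Amazon's definition):
    quarterly_eng_cost = 1_200_000  # fully loaded team cost, USD
    quarterly_changes = 48
    print(f"cost per customer-facing change: ${quarterly_eng_cost / quarterly_changes:,.0f}")
    ```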

  • View profile for Kashif M.

    VP of Technology | CTO | GenAI • Cloud • SaaS • FinOps • M&A | Board & C-Suite Advisor

    4,084 followers

    🛠️ Measuring Developer Productivity: It's Complex but Crucial! 🚀

    Measuring software developer productivity is one of the toughest challenges, and it requires more than just traditional metrics. I remember when my organization was buried in metrics like lines of code, velocity points, and code reviews. I quickly realized these didn't provide the full picture.

    📉 Lines of code, velocity points, and code reviews? They offer a snapshot but not the complete story. More code doesn't mean better code, and velocity points can be misleading. A holistic focus is essential: as companies become more software-centric, it's vital to measure productivity accurately to deploy talent effectively.

    🔍 System Level: Deployment frequency and customer satisfaction show how well the system performs. A 25% increase in deployment frequency often correlates with faster feature delivery and higher customer satisfaction.

    👥 Team Level: Collaboration metrics like code-review timing and team velocity matter. Reducing code review time by 20% led to faster releases and better teamwork. (A sketch of this calculation follows the post.)

    🧑💻 Individual Level: Personal performance, well-being, and satisfaction are key. Happy developers are productive developers. Tracking well-being resulted in a 30% productivity boost.

    Adopting this holistic approach transformed our organization. I didn't just track output but also collaboration and individual well-being. The result? A 40% boost in team efficiency and a notable rise in product quality! 🌟

    🚪 The takeaway? Measuring developer productivity is complex, but by focusing on system, team, and individual levels, we can create an environment where everyone thrives.

    Curious about how to implement these insights in your team? Drop a comment or connect with me! Let's discuss how we can drive productivity together. 🤝 #SoftwareDevelopment #Productivity #TechLeadership #TeamEfficiency #DeveloperMetrics
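
    A minimal sketch of the team-level review-timing metric mentioned above, assuming opened/first-review timestamps have already been pulled for recent PRs from your Git host (the sample data is invented):

    ```python
    from datetime import datetime
    from statistics import median

    # Hypothetical PR data: (opened_at, first_review_at) pairs.
    prs = [
        (datetime(2025, 6, 2, 9, 0), datetime(2025, 6, 2, 15, 30)),
        (datetime(2025, 6, 3, 11, 0), datetime(2025, 6, 4, 10, 0)),
        (datetime(2025, 6, 5, 14, 0), datetime(2025, 6, 5, 16, 45)),
    ]

    turnarounds = [review - opened for opened, review in prs]
    print(f"median time to first review: {median(turnarounds)}")
    # Track this week over week; a falling median is the kind of
    # "20% reduction in review time" the post describes.
    ```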

  • View profile for Nilesh Thakker

    President | Global Product Development & Transformation Leader | Building AI-First Products and High-Impact Teams for Fortune 500 & PE-backed Companies | LinkedIn Top Voice

    21,035 followers

    Step-by-Step Guide to Measuring & Enhancing GCC Productivity - Define it, measure it, improve it, and scale it.

    Most companies set up Global Capability Centers (GCCs) for efficiency, speed, and innovation—but few have a clear playbook to measure and improve productivity. Here's a 7-step framework to get you started:

    1. Define Productivity for Your GCC. Productivity means different things across industries. Is it faster delivery, cost reduction, innovation, or business impact? Pro tip: Avoid vanity metrics. Focus on outcomes aligned with enterprise goals. Example: A retail GCC might define productivity as "software features that boost e-commerce conversion by 10%."

    2. Select the Right Metrics. Use frameworks like DORA and SPACE. A mix of speed, quality, and satisfaction metrics works best. Core metrics to consider:
    • Deployment Frequency
    • Lead Time for Change
    • Change Failure Rate
    • Time to Restore Service
    • Developer Satisfaction
    • Business Impact Metrics
    Tip: Tools like GitHub, Jira, and OpsLevel can automate data collection. (A sketch of the four DORA calculations follows this post.)

    3. Establish a Baseline. Track metrics over 2–3 months. Don't rush to judge performance—account for ramp-up time. Benchmark against industry standards (e.g., DORA elite performers deploy daily with <1% failure).

    4. Identify & Fix Roadblocks. Use data + developer feedback. Common issues include slow CI/CD, knowledge silos, and low morale. Fixes:
    • Automate pipelines
    • Create shared documentation
    • Protect developer "focus time"

    5. Leverage Technology & AI. Tools like GitHub Copilot, generative AI for testing, and cloud platforms can cut dev time and boost quality. Example: Using AI in code reviews can reduce cycles by 20%.

    6. Foster a Culture of Continuous Improvement. This isn't a one-time initiative. Review metrics monthly. Celebrate wins. Encourage experimentation. Involve devs in decision-making. Align incentives with outcomes.

    7. Scale Across All Locations. Standardize what works. Share best practices. Adapt for local strengths. Example: Replicate a high-performing CI/CD pipeline across locations for consistent deployment frequency.

    Bottom line: Productivity is not just about output. It's about value.

    Zinnov Dipanwita Ghosh Namita Adavi ieswariya k Karthik Padmanabhan Amita Goyal Amaresh N. Sagar Kulkarni Hani Mukhey Komal Shah Rohit Nair Mohammed Faraz Khan
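
    For steps 2 and 3, here is a minimal sketch of the four DORA metrics computed from a deployment log. The record layout is hypothetical; a real pipeline would pull these fields from CI/CD and incident tooling.

    ```python
    from datetime import datetime, timedelta

    # Hypothetical records: (deployed_at, commit_at, caused_failure, restored_at).
    deploys = [
        (datetime(2025, 6, 2, 10), datetime(2025, 6, 1, 16), False, None),
        (datetime(2025, 6, 3, 11), datetime(2025, 6, 2, 9), True,
         datetime(2025, 6, 3, 12, 30)),
        (datetime(2025, 6, 4, 15), datetime(2025, 6, 4, 9), False, None),
    ]
    days_observed = 3

    # 1. Deployment Frequency
    print(f"deploys/day: {len(deploys) / days_observed:.2f}")

    # 2. Lead Time for Change (commit -> production)
    lead_times = [d - c for d, c, _, _ in deploys]
    print(f"avg lead time: {sum(lead_times, timedelta()) / len(lead_times)}")

    # 3. Change Failure Rate
    failed = sum(1 for _, _, failure, _ in deploys if failure)
    print(f"change failure rate: {100 * failed / len(deploys):.0f}%")

    # 4. Time to Restore Service
    restores = [r - d for d, _, failure, r in deploys if failure and r]
    if restores:
        print(f"avg time to restore: {sum(restores, timedelta()) / len(restores)}")
    ```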

  • View profile for Sam McAfee

    Helping the next generation of tech leaders at the intersection of product, engineering, and mindfulness

    14,523 followers

    Intelligent CEOs and CFOs should resist McKinsey's recent offer of a magical framework to measure engineer productivity. It sounds reasonable on the surface, but it will lead to worse outcomes for companies, not better ones. [There is an excellent write-up of this issue by Kent Beck and Gergely Orosz, which you can find at The Pragmatic Engineer.]

    Put simply, productivity is the measure of output that can be produced for a given set of inputs. If you're making steel, you can calculate the expenditures on tools, equipment, plant, and raw materials, as well as labor costs, compare that to a given amount of finished goods produced and sold, and measure the productivity of steel workers in Ohio compared with those in Argentina or Japan. These figures are comparable because the making of steel is a well-defined, deterministic process. Making software is not.

    In software development, there isn't the same kind of correlation between the efforts, activities, and outputs of engineers and the value produced by software in the market. Engineers simply writing more code, or even shipping features more quickly, does not automatically lead to value for the customer or money in the bank for the business.

    What does matter in producing value for the business in software development is clear objectives, a high degree of alignment across teams, and a lot of flexibility and autonomy on the ground. Measures that supposedly track productivity will in fact hinder each of these, and lead to negative economic outcomes.

    The thinking behind such measures is the legacy of Frederick Winslow Taylor and the bygone era of industrial production. McKinsey originated in those years and was a big champion of Taylor's work across many industries. From what we've seen lately, they haven't changed at all. If you are the CEO or CFO of a growth-stage tech company, follow their advice at your own peril.

    --

    If you do want to probe your organization to see how you can improve outcomes instead of outputs, give our diagnostic tool a try. You can find it here (https://lnkd.in/gMZZyzkm)

  • View profile for Abi Noda

    Co-Founder, CEO at DX, Developer Intelligence Platform

    27,056 followers

    I see leaders getting stuck with "my developers are telling me X but my metrics are telling me Y". Your developers are always right.

    Jeff Bezos recently shared, "when the anecdotes and the data disagree, the anecdotes are usually right. There's something wrong with the way you are measuring." In my experience, this is always true when it comes to developer productivity.

    I met with an organization that was looking into the impact of build times. Some of their developers said: "What are you talking about? I didn't do a build." Turns out that their build speed metrics included robotic background builds that had no impact on developers.

    Don't focus on metrics without also consulting your developers. If the data and stories don't line up, your developers are likely right.
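
    The build-time anecdote points at a concrete hygiene step: segment automated builds out of the metric before reporting it. A minimal sketch, with invented telemetry fields:

    ```python
    from statistics import mean

    # Hypothetical build telemetry: (duration_sec, triggered_by).
    builds = [
        (420, "developer"),
        (35, "scheduled-bot"),  # robotic background build
        (390, "developer"),
        (31, "scheduled-bot"),
        (510, "developer"),
    ]

    naive = mean(d for d, _ in builds)
    human = mean(d for d, who in builds if who == "developer")
    print(f"naive average build time:     {naive:.0f}s")
    print(f"developer-facing builds only: {human:.0f}s")
    # The naive number is skewed by builds no human waited on, which is
    # exactly the data/anecdote mismatch the post describes.
    ```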

  • View profile for Yegor Denisov-Blanch

    Stanford | Research: Software Engineering Productivity

    7,450 followers

    The best-performing software engineering teams measure both output and outcomes. Measuring only one often means underperforming in the other.

    While debates persist about which is more important, our research shows that measuring both is critical. Otherwise, you risk landing in Quadrant 2 (building the wrong things quickly) or Quadrant 3 (building the right things slowly and eventually getting outperformed by a competitor). As an organization grows and matures, this becomes even more critical. You can't rely on intuition, politics, or relationships—you need to stop "winging it" and start making data-driven decisions.

    How do you measure outcomes? Outcomes are the business results that come from building the right things. These can be measured using product feature prioritization frameworks.

    How do you measure output? Measuring output is challenging because traditional methods don't accurately measure it:
    1. Lines of Code: Encourages verbose or redundant code.
    2. Number of Commits/PRs: Leads to artificially small commits or pull requests.
    3. Story Points: Subjective and not comparable across teams; may inflate task estimates.
    4. Surveys: Great for understanding team satisfaction but not for measuring output or productivity.
    5. DORA Metrics: Measure DevOps performance, not productivity. Deployment sizes vary within and across teams, and these metrics can be easily gamed when used as productivity measures. Measuring how often you're deploying is meaningless from a productivity perspective unless you're also measuring _what_ is being deployed.

    We propose a different way of measuring software engineering output. Using an algorithmic model developed from research conducted at Stanford, we quantitatively assess software engineering productivity by evaluating the impact of commits on the software's functionality (i.e., we measure output delivered). We connect to Git and quantify the impact of the source code in every commit. The algorithmic model generates a language-agnostic metric for evaluating and benchmarking individual developers, teams, and entire organizations.

    We're publishing several research papers on this, with the first pre-print released in September. Please leave a comment if you'd like to read it. Interested in leveraging this for your organization? Message me to learn more.

    #softwareengineering #softwaredevelopment #devops
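
    The model itself is unpublished, so the sketch below is only a stand-in that shows the plumbing of "connect to Git and score every commit." It weights commits by changed source lines, a measure the post explicitly criticizes, so treat it as scaffolding to swap a real functional-impact score into, not as the method described.

    ```python
    import subprocess

    # Naive stand-in, NOT the Stanford model: score each commit by lines
    # changed outside vendored/generated paths.
    IGNORE = ("vendor/", "generated/", "package-lock.json")

    log = subprocess.run(
        ["git", "log", "--numstat", "--pretty=%H"],
        capture_output=True, text=True, check=True,
    ).stdout

    scores, current = {}, None
    for line in log.splitlines():
        if not line.strip():
            continue
        parts = line.split("\t")
        if len(parts) == 3:  # numstat row: added, deleted, path
            added, deleted, path = parts
            if added == "-" or path.startswith(IGNORE):
                continue  # binary file or ignored path
            scores[current] = scores.get(current, 0) + int(added) + int(deleted)
        else:  # a commit hash line
            current = line.strip()

    # Five highest-scoring commits under this (deliberately crude) proxy.
    for commit, score in sorted(scores.items(), key=lambda kv: -kv[1])[:5]:
        print(commit[:10], score)
    ```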

  • View profile for John Brewton

    Operating Strategist 📝Writer @ Operating by John Brewton 🤓Founder @ 6A East Partners ❤️🙏🏼 Husband & Father

    31,613 followers

    Stop Measuring Hours. Start Tracking Results.

    Google recently stated: "We measure results, not hours." Yet so many companies still cling to outdated metrics that reward time over output, leading to inefficiency, burnout, and disengagement. If you're still tracking time in chairs instead of value created, here's why it's time to rethink your approach:

    ⛔ The Problem with Measuring Hours:
    ↳ More time ≠ More productivity: Sitting at a desk longer doesn't mean better work gets done.
    ↳ Encourages inefficiency: Employees stretch tasks to fill time rather than optimizing for impact.
    ↳ Fosters burnout, not performance: Long hours for the sake of it = fatigue, poor decision-making, and turnover.
    ↳ Creates a culture of presenteeism: People stay late to "be seen" rather than produce meaningful work.
    ↳ Discourages smart automation: Employees may avoid efficiency tools if working faster isn't rewarded.
    ↳ Fails to recognize different work styles: Some thrive in bursts, others in structured blocks; hours ignore this.
    ↳ Penalizes high performers: Someone who delivers top results in half the time shouldn't be punished for efficiency.
    ↳ Shifts focus away from real impact: What matters isn't how long something took, but what was achieved.

    ✅ The Power of Measuring Results:
    ↳ Drives true performance: Success is based on impact, not just attendance.
    ↳ Encourages ownership & accountability: Employees focus on what actually matters.
    ↳ Boosts engagement & morale: People work smarter, not just longer, leading to happier teams.
    ↳ Recognizes and rewards efficiency: Top performers get credit for results, not hours logged.
    ↳ Fosters creativity and problem-solving: Employees focus on finding the best solutions, not just "doing time."
    ↳ Supports flexible and remote work: When results matter, employees can work when and how they perform best.
    ↳ Increases agility and adaptability: Teams focus on outcomes, making them quicker to pivot and innovate.
    ↳ Creates a high-trust, high-performance culture: People are measured on what they contribute, not how long they sit at a desk.

    💡 The Bottom Line
    Your company's or team's success won't be defined by hours logged; it will be defined by the quality of the work delivered and the value created. The best leaders and companies in the world know this. Do you?

    Drop a comment below to share how you measure the results of the success you're trying to achieve. 👇
    _______
    ➕ Follow me, John Brewton, for content that helps.
    ♻️ Repost to your networks, colleagues, and friends if you think this would help them.
    🔗 Subscribe to The Failure Blog, where we learn more from our failure to enable our success, via the link in my profile.

  • 𝗥𝗮𝗱𝗶𝗰𝗮𝗹 𝘁𝗵𝗼𝘂𝗴𝗵𝘁 𝗼𝗳 𝘁𝗵𝗲 𝘄𝗲𝗲𝗸: OKRs are a popular tool for driving results, but the side effects can be fatal to your product. How can you measure and drive progress more effectively?

    Let's first look at the 𝗸𝗻𝗼𝘄𝗻 𝗮𝗱𝘃𝗲𝗿𝘀𝗲 𝗲𝗳𝗳𝗲𝗰𝘁𝘀:

    𝗚𝗼𝗼𝗱𝗵𝗮𝗿𝘁'𝘀 𝗹𝗮𝘄: When a measure becomes a target, it ceases to be a good measure. Charles Goodhart, a British economist, wrote about this in 1975 based on his observations in economics and finance. In 1976, David Campbell, a psychologist and social scientist, expanded on this with Campbell's law.

    𝗖𝗮𝗺𝗽𝗯𝗲𝗹𝗹'𝘀 𝗹𝗮𝘄: The more a given metric is used to 𝗲𝘃𝗮𝗹𝘂𝗮𝘁𝗲 𝗽𝗲𝗿𝗳𝗼𝗿𝗺𝗮𝗻𝗰𝗲, the more it's likely to be 𝗴𝗮𝗺𝗲𝗱 and the 𝗹𝗲𝘀𝘀 𝗿𝗲𝗹𝗶𝗮𝗯𝗹𝗲 it becomes as a measure of success.

    𝗧𝗵𝗲 𝗮𝗱𝘃𝗲𝗿𝘀𝗲 𝗲𝗳𝗳𝗲𝗰𝘁𝘀 𝗼𝗳 𝗢𝗞𝗥𝘀 𝗵𝗮𝘃𝗲 𝗯𝗲𝗲𝗻 𝗸𝗻𝗼𝘄𝗻 𝗳𝗼𝗿 𝗱𝗲𝗰𝗮𝗱𝗲𝘀 and we've seen Goodhart's law and Campbell's law play out: when a teacher is measured by test scores, it leads to teaching to the test or even faking results, but it doesn't improve learning in a classroom. Other examples include law enforcement, where the incentive sometimes is to increase arrest rates without improving security, and healthcare, where doctors are sometimes incentivized to maximize their success-rate metric by refusing difficult cases.

    Setting targets for product metrics through OKRs or using metrics to evaluate individual performance only guarantees that those metrics will be optimized (or gamed). The OKR approach tempts us to bolster numbers on a failed experiment rather than let go of that metric and try a different experiment altogether. To build successful products, we need to learn what's working in our product and what we need to do better.

    𝗦𝗼 𝘄𝗵𝗮𝘁 𝘄𝗼𝘂𝗹𝗱 𝗺𝗮𝗸𝗲 𝗢𝗞𝗥𝘀 𝗺𝗼𝗿𝗲 𝗲𝗳𝗳𝗲𝗰𝘁𝗶𝘃𝗲? There's a special place in hell for people who name laws after themselves, but here's 𝗗𝘂𝘁𝘁'𝘀 𝗹𝗮𝘄 𝗱𝗲𝗿𝗶𝘃𝗲𝗱 𝗳𝗿𝗼𝗺 𝗥𝗮𝗱𝗶𝗰𝗮𝗹 𝗣𝗿𝗼𝗱𝘂𝗰𝘁 𝗧𝗵𝗶𝗻𝗸𝗶𝗻𝗴. 😆

    𝗗𝘂𝘁𝘁'𝘀 𝗟𝗮𝘄: Metrics can 𝗼𝗻𝗹𝘆 be effective in improving the product if they're used towards 𝗰𝗼𝗹𝗹𝗮𝗯𝗼𝗿𝗮𝘁𝗶𝘃𝗲 𝗹𝗲𝗮𝗿𝗻𝗶𝗻𝗴. Instead of setting targets or gauging individual performance through metrics, write hypotheses for every element of your Radical Vision and Strategy that you want to test, and measure to test those hypotheses. Then you can have regular discussions to share metrics and discuss what's working and where you want to course-correct.

    Here's the link to the RPT book (Chapter 6 delves into this topic): https://lnkd.in/gZxpH2GM

    Share your experiences below with targets and OKRs! #product #radicalproductthinking
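
    One way to make "measure to test those hypotheses" concrete is a simple two-proportion test on an experiment tied to a hypothesis rather than a target. The counts here are invented, and 1.96 is the usual 5% significance threshold.

    ```python
    from math import sqrt

    # Hypothesis: the redesigned onboarding (B) improves week-1 retention
    # over the current flow (A). Invented counts for illustration.
    retained_a, total_a = 168, 400
    retained_b, total_b = 203, 400

    p_a, p_b = retained_a / total_a, retained_b / total_b
    pooled = (retained_a + retained_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_b - p_a) / se

    print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}")
    # |z| > 1.96: discuss the result as a team, then keep, revise, or
    # drop the hypothesis; no individual targets required.
    ```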

  • View profile for John Cutler

    Head of Product @Dotwork ex-{Company Name}

    128,355 followers

    Critique this (real) team's experiment. Good? Bad? Caveats? Gotchas? Contexts where it will not work? Read on:

    Overview
    The team has observed that devs often encounter friction during their work—tooling, debt, environment, etc. These issues (while manageable) tend to slow down progress and are often recurring. Historically, recording, prioritizing, and getting approval to address these areas of friction involves too much overhead, which 1) makes the team less productive, and 2) results in the issues remaining unresolved. For various reasons, team members don't currently feel empowered to address these issues as part of their normal work.

    Purpose
    Empower devs to address friction points as they encounter them, w/o needing to get permission, provided the issue can be resolved in 3d or less. Hypothesis: by immediately tackling these problems, the team will improve overall productivity and make work more enjoyable. Reinforce the practice of addressing friction as part of the developers' workflow, helping to build muscle memory and normalize "fix as you go."

    Key Guidelines
    1. When a dev encounters friction, assess whether the issue is likely to recur and affect others. If they believe it can be resolved in 3d or less, they create a "friction workdown" ticket in Jira (use the right tags). No permission needed.
    2. Put current work in "paused" status, mark the new ticket as "in progress," and notify the team via the #friction Slack channel with a link to the ticket.
    3. If the dev finds that the issue will take longer than 3d to resolve, they stop, document what they've learned, and pause the ticket. This allows the team to revisit the issue later and consider more comprehensive solutions. This is OK!
    4. After every 10 friction workdown tickets are completed, the team holds a review session to discuss the decisions made and the impact of the work. Promote transparency and alignment on the value of the issues addressed.
    5. Expires after 3mos. If the team sees evidence of improved efficiency and productivity, they may choose to continue; otherwise, it will be discontinued (default to discontinue, to avoid Zombie Process).
    6. IMPORTANT: The team will not be asked to cut corners elsewhere (or work harder) to make arbitrary deadlines due to this work. This is considered real work.

    Expected Outcomes
    Reduce overhead associated with addressing recurring friction points, empowering developers to act when issues are most salient (and they are motivated). Impact will be measured through existing DX survey, lead time, and cycle time metrics, etc.

    Signs of Concern (Monitor for these and dampen)
    1. Consistently underestimating the time required to address friction issues, leading to frequent pauses and unfinished work.
    2. Feedback indicating that the friction points being addressed are not significantly benefiting the team as a whole.

    Limitations
    Not intended to impact more complex, systemic issues or challenges that extend beyond the team's scope of influence.
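
    If a team wanted to automate guideline 4's review trigger, here is a minimal sketch against Jira's REST search endpoint. The site URL, credentials, and label are assumptions to be replaced with whatever tags the team actually uses.

    ```python
    import requests

    JIRA = "https://your-site.atlassian.net"  # assumed site
    JQL = 'labels = "friction-workdown" AND status = Done'  # assumed label

    resp = requests.get(
        f"{JIRA}/rest/api/2/search",
        params={"jql": JQL, "maxResults": 0},  # we only need the total
        auth=("bot@example.com", "API_TOKEN"),  # placeholder credentials
        timeout=10,
    )
    resp.raise_for_status()
    done = resp.json()["total"]

    if done and done % 10 == 0:
        print(f"{done} friction tickets completed: schedule a review session")
    ```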
