Measuring Your AI's True ROI: The Hourly Effective Rate Approach

Credit goes to Harald Røine for inspiring today's post.

Here's the problem with most AI ROI calculators: they use outdated metrics while missing the simplest, most powerful measurement right in front of them. Hourly Effective Rate (HER) is the metric every GTM leader should be using, but almost none are.

What is HER and why does it matter? The formula is simple:

Revenue ÷ Total Hours Worked = HER

This single metric reveals what your team's time is actually worth. When you implement AI effectively, this number should increase dramatically.

A real-world example:

BEFORE AI:
- 10 hours spent on a task (manual work, admin, review)
- $1,000 revenue generated
- HER = $100/hour

AFTER AI:
- 4 hours spent on the same task (AI-assisted work with automated review)
- Same $1,000 revenue generated
- HER = $250/hour (a 150% increase)

This isn't theoretical. I've seen this exact pattern across sales teams drafting follow-ups, marketing teams creating content, CS teams handling tickets, and CROs building reports.

Why traditional ROI calculations fall short

Most companies calculate AI ROI by looking purely at cost reduction, which misses the strategic opportunity. When you implement AI effectively, you're not just saving money; you're fundamentally changing what your team can accomplish with their finite hours.

The real ROI formula should be:

((New HER − Old HER) × Task Volume × Time Period) ÷ AI Cost

For example: a team sees a $150/hour HER improvement on 10 one-hour tasks per month. Over a year, that's $18,000 in value creation. If the AI costs $5,000, the return is 3.6× the cost, a 260% ROI on the spend.

Three immediate applications for GTM leaders:

1. Team Reallocation: A higher HER means you can redirect hours to high-value activities. What could your sales team accomplish with 60% more time for relationship building?
2. Capacity Planning: Understanding your team's true HER helps you make better hiring decisions. You might discover you need different roles than you thought.
3. AI Investment Prioritization: Apply the HER calculation to every workflow to identify where AI will deliver the highest returns.

This isn't about headcount reduction. It's about maximizing the value-creation potential of your existing team. The companies winning with AI aren't just working faster; they're fundamentally increasing their capacity to deliver value.

Don't fall into the trap of measuring AI success through vague "efficiency gains." Calculate your team's HER before and after implementation, and you'll have the hard numbers you need to prove AI's strategic value.

The GTM teams that understand and optimize for Hourly Effective Rate will outperform those focused solely on traditional efficiency metrics. They'll handle more deals, deliver higher-quality work, and focus their human talent where it matters most.

Stop guessing at AI's value. Start measuring it with HER.
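The HER and ROI formulas above can be sketched in a few lines. This is a minimal illustration using the post's own example figures; the function names are mine, not part of any framework.

```python
# Sketch of the post's HER and ROI formulas, using its example numbers.

def hourly_effective_rate(revenue: float, hours_worked: float) -> float:
    """HER = Revenue / Total Hours Worked."""
    return revenue / hours_worked

def ai_return_multiple(old_her: float, new_her: float, tasks_per_month: float,
                       months: int, ai_cost: float) -> float:
    """((New HER - Old HER) x Task Volume x Time Period) / AI Cost."""
    value_created = (new_her - old_her) * tasks_per_month * months
    return value_created / ai_cost

before = hourly_effective_rate(1_000, 10)   # $100/hour
after = hourly_effective_rate(1_000, 4)     # $250/hour
print(ai_return_multiple(before, after, tasks_per_month=10,
                         months=12, ai_cost=5_000))
# 3.6 -> $18,000 of value on a $5,000 cost
```

Note the time-period term assumes each task represents one hour of the HER delta, matching the arithmetic in the example.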
How to Measure ROI on Technology Investments
Summary
Evaluating the return on investment (ROI) for technology, particularly AI, involves measuring how well it improves business outcomes like revenue, time saved, or productivity. It’s crucial to go beyond cost savings by considering metrics that reflect strategic value and operational impact.
- Start with benchmarks: Assess your current processes by measuring metrics like time spent, revenue generated, or error rates before implementing the technology.
- Track changes over time: Calculate new metrics after deploying the technology, such as increased productivity, revenue per hour, or cost reductions, to validate its impact.
- Use specific ROI formulas: Employ frameworks like the Hourly Effective Rate (HER) or control group tests to identify measurable improvements and justify your technology investment.
-
Vendors say, "AI coding tools are writing 50% of Google's code." I say, "Autocomplete or IntelliSense was writing about 25% of Google's code, and AI made it twice as effective."

When it comes to measuring AI's ROI, real-world benchmarks are critical. Always compare the current state to the future state to calculate value, instead of looking only at the future state. Most companies are overjoyed to see AI coding tools write 30% of their code, but when they realize that vanilla IDEs with basic autocomplete could do 25%, the ROI looks less impressive. A 5-point lift rarely justifies the increased licensing and token costs. That's the reality I've found with about half of the AI tools I pilot with clients: they work, but the improvement over the current state isn't worth their price.

I have used the same method to measure ROI for almost a decade:
1️⃣ Benchmark the current process performance using value outcomes.
2️⃣ Propose a change to the current process that introduces technology/new technology into the workflow.
3️⃣ Quantify the expected change in outcomes and value delivered with the new process/workflow.
4️⃣ Make the update and measure actual outcomes. If there's a difference between expected and actual, find the root cause and fix it if possible.

Measuring AI ROI is simple with the right framework. It also makes it easier to help business leaders make better decisions about technology purchases, customer-facing features, and internal productivity initiatives.

I would rather see a benchmark like the percentage of code generated from text prompts vs. the percentage of code recommended by autocomplete. That benchmarks the reengineered process against the old one. AI process reengineering (AI tools augmenting people performing an optimized workflow) is where I see the greatest ROI. Shoehorning AI tools into the current process typically delivers a fraction of the potential ROI.
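The "compare current state to future state" point can be made concrete: value the incremental lift over the existing baseline, not the raw headline number. A hedged sketch, where `value_per_point` (dollar value of one percentage point of lift) and the cost figure are invented for illustration:

```python
# Value the lift over the baseline, not the headline number.
# The post's example: autocomplete already wrote 25%, the AI tool writes 30%.

def incremental_lift(baseline_pct: float, new_pct: float) -> float:
    """Percentage-point improvement over the current process."""
    return new_pct - baseline_pct

def lift_justifies_cost(lift_pct: float, value_per_point: float,
                        added_cost: float) -> bool:
    """Does the incremental value cover the extra licensing/token cost?"""
    return lift_pct * value_per_point >= added_cost

lift = incremental_lift(25.0, 30.0)  # 5 percentage points, not 30
# value_per_point and added_cost are hypothetical placeholder figures:
print(lift_justifies_cost(lift, value_per_point=500.0, added_cost=10_000.0))
# False -> the 5-point lift doesn't cover the added cost in this scenario
```

Comparing 30% against zero (the "future state only" view) would make the same tool look far better than it is.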
-
How to Measure AI ROI: A Step-by-Step Guide That Actually Works

Most companies waste millions on AI without knowing if it works. Looking to maximize your AI investments? Here's your roadmap to success:

Step 1: Define Clear Success Metrics
• Revenue impact
• Cost savings
• Time saved
• Customer satisfaction scores
• Employee productivity gains

Step 2: Implement the AI Decision Scorecard
• Compliance checks
• Quality assessment
• Employee experience
• Business impact measurement

Step 3: Set Baseline Measurements
• Current performance metrics
• Cost of operations
• Time per task
• Error rates
• Customer feedback

Step 4: Track Progress
• Weekly data collection
• Monthly progress reviews
• Quarterly ROI calculations
• Stakeholder feedback
• Performance adjustments

Step 5: Scale What Works
• Document successful use cases
• Share wins across teams
• Replicate winning patterns
• Train more users
• Expand implementation

The Truth: Only 22% of companies measure AI ROI effectively. Don't be part of that statistic.

Remember: if you can't measure it, you can't improve it. Ready to transform your AI investments into real results? Share your biggest AI measurement challenge below 👇
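Steps 3 and 4 above (baseline, then track) can be sketched as a simple before/after delta report. The metric names and figures below are illustrative placeholders, not prescribed by any framework:

```python
# Minimal sketch of baseline-vs-current tracking: record metrics before
# rollout, then compute percentage change per metric after rollout.

baseline = {"time_per_task_min": 30.0, "error_rate": 0.08, "cost_per_task": 12.0}
current  = {"time_per_task_min": 18.0, "error_rate": 0.05, "cost_per_task": 7.5}

def progress_report(before: dict, after: dict) -> dict:
    """Percent change per metric (negative = improvement for time/cost/errors)."""
    return {k: round((after[k] - before[k]) / before[k] * 100, 1) for k in before}

print(progress_report(baseline, current))
# {'time_per_task_min': -40.0, 'error_rate': -37.5, 'cost_per_task': -37.5}
```

Running this weekly or monthly against the same baseline is one lightweight way to feed the quarterly ROI calculation the post calls for.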
-
Measuring ROI for AI projects isn't easy, but there's a path to get there. I mentioned in my previous post that I'd write about it, so here it is. More to come as the field matures. This isn't comprehensive, and as usual it's a personal post.

Let's start with why this is hard. First, vendors are advertising aggressive productivity gains (25%, anyone?), and it isn't clear they'll pan out when implemented in your organization. Second, placing a new type of software in your org has a number of ramifications, so distilling the productivity gains from all of that isn't easy. Third, CFOs want hard numbers, but this is more art than science, so judgments seem more apt than hard math (build trust with your finance leaders). Today the community does not have a standard methodology for calculating ROI. This reminds me of the hard benchmarks we used to have, such as transactions per second for databases. So everyone is doing it somewhat differently, and it's hard to compare apples to apples.

These are the ROI methods I'm seeing, and they vary in maturity.

First is A/B measurement of productivity across different teams (for AI in software development, for example, you can measure lines of code written). These vary between high-uncertainty setups (only 20 people in each A/B batch, on artificial tasks run by the vendor) and highly reliable ones (run in your organization with thousands of people, on the tasks they do day to day). Clearly you want the latter: at scale and on naturalistic tasks.

Second is retrospective perceived-time-savings surveys. These are useful for general productivity tools that are used across the org and have a diffuse impact on productivity. You ask your users how much time they're saving and get an answer back. This gives you some view of productivity in naturalistic environments across a wide swath of people. Yes, they're less reliable than A/B testing, but sometimes this is all you have.

Third, you can measure before and after. This is a version of the A/B testing above, except you're measuring time to complete a task, quality, and NPS from customers before and after the software was introduced. The key aspects are to measure it in your organization (not what the vendor says "on the can"), to capture these numbers at some scale (as opposed to only on the pilot), and to track them over time (to avoid the Hawthorne effect, where novelty shows up as productivity).

Be mindful that in different circumstances you'll be able to apply only one of these methods, and that you'll land on better numbers as you grow in scale and measure more naturalistic tasks (instead of artificial ones). I'd create an analytics team outside of the AI organization to keep the ROI more truth-oriented than advocate; this is common practice in big tech firms.

Do you find this useful? Is there something big that I missed about measuring ROI? What do you think? #airoi #aistrategy #cfo
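The A/B and before/after comparisons described above reduce to comparing a treatment group's metrics against a control's. A hedged sketch with invented sample data (a real study needs the scale and naturalistic tasks the post insists on):

```python
# Compare task-completion times: AI-assisted group vs. control group.
# The minute figures below are invented purely for illustration.
from statistics import mean

ai_group      = [41, 38, 44, 36, 40, 39, 42, 37]   # minutes per task
control_group = [52, 49, 55, 47, 51, 50, 53, 48]

def percent_faster(treatment: list, control: list) -> float:
    """Relative reduction in mean completion time vs. the control group."""
    return (mean(control) - mean(treatment)) / mean(control) * 100

print(round(percent_faster(ai_group, control_group), 1))
# 21.7 -> the AI-assisted group completed tasks ~22% faster
```

With samples this small the difference could easily be noise, which is exactly the post's point about preferring thousands of participants over batches of 20, and tracking over time to rule out the Hawthorne effect.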
-
I just called out the billion-dollar AI lie in my recent Financial Times Agenda interview. Here's the uncomfortable truth about AI most companies won't admit: most AI projects fail not because of the tech, but because of the lack of a data strategy. As an analytics executive turned AI consultant, here's what I tell F500 boards:

1. The invisible silos are hurting your ROI. The head of AI and the head of data are fighting turf battles. If your AI team and data team aren't in lockstep, you're leaving money on the table.

2. Vendor contracts deserve a closer look. Don't blindly trust your AI vendor, ever. You need to negotiate data-protection clauses and train your team on what data should never be fed into the algorithm.

3. Measure what matters to show ROI. Ditch the hype and focus on metrics that drive business value. Move beyond "number of AI users" or "number of active users."

How do you know if your AI is making an impact? I shared a couple of practical ideas in the article:

(1) The Control Group Test: compare a team using AI with a team using traditional methods. May the best team win (and show you the ROI!).
(2) "No-AI" days (like no-meeting Wednesdays) to measure real productivity gains.
(3) Don't forget to do energy audits. That ChatGPT query? It burns 100x more carbon than a Google search!

What's the biggest challenge you're facing with AI? Share in the comments!

Ps. Share this with your CFO...they'll owe you a coffee. Data With Serena™️