Your sales forecast is a lie. Last month I analyzed 50+ CRM instances and found the average forecast accuracy was just 46%. When I asked sales leaders why deals slipped, the answer was always the same: "The close date was unrealistic."

The problem isn't your CRM. It's how it's being used. Many sales teams check boxes and fill in required fields for their leaders, knowing it's not 100% of what's actually going on.

Here's the simplest CRM hack that has improved forecast accuracy by 40%+ for my clients: Stop using "close date" and start using "customer-voiced impact date."

This tiny shift changes everything. When a rep enters a close date, they're guessing when they think a deal will close. When they enter a customer-voiced impact date, they're documenting when the prospect said they'll make a decision. The difference is massive.

Here's how to implement this today:

1️⃣ Create a custom field called "Customer Decision Date." This is the date by which the buyer has committed to making a decision.

2️⃣ Require documented evidence for any date: "The CFO confirmed they need to decide by June 30th because..."

3️⃣ Track it alongside the rep's forecast date. This creates healthy tension between what the rep hopes and what the customer says.

4️⃣ Make it visible in pipeline reviews: "The customer said they're deciding March 15th, but you're forecasting February 28th. Why?"

Top sales teams keep these dates separate and review the gap. If there's no customer decision date with evidence, the deal doesn't belong in your forecast.
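If you want to operationalize that gap review, here is a minimal sketch in Python. It assumes a CSV export of the pipeline with hypothetical column names ("deal_id", "rep_close_date", "customer_decision_date", "decision_evidence"); adjust these to whatever your CRM actually exports.

```python
# Minimal sketch: audit a pipeline export for gaps between the rep's
# forecast date and the customer-voiced decision date.
# Column names below are hypothetical; map them to your CRM's export.
import pandas as pd

deals = pd.read_csv(
    "pipeline_export.csv",
    parse_dates=["rep_close_date", "customer_decision_date"],
)

# Deals with no evidence-backed customer decision date don't belong in the forecast.
no_evidence = deals[
    deals["customer_decision_date"].isna()
    | deals["decision_evidence"].fillna("").eq("")
]

# For the rest, surface the gap in days between what the rep hopes
# and what the customer said (positive = rep's date is earlier, i.e. optimistic).
committed = deals.dropna(subset=["customer_decision_date", "rep_close_date"]).copy()
committed["gap_days"] = (
    committed["customer_decision_date"] - committed["rep_close_date"]
).dt.days

print(f"{len(no_evidence)} deals lack a customer decision date with evidence")
print(
    committed.sort_values("gap_days", ascending=False)[
        ["deal_id", "rep_close_date", "customer_decision_date", "gap_days"]
    ]
)
```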
How To Handle Uncertainty In Sales Forecasts
Explore top LinkedIn content from expert professionals.
Summary
Sales forecasting is a critical business process, but uncertainty can make it challenging to predict outcomes. Understanding how to manage this uncertainty can lead to better decision-making and more reliable projections that align with business goals.
- Focus on customer-driven dates: Replace internal "close dates" with dates confirmed by the customer to reduce guesswork and improve forecast accuracy.
- Incorporate sensitivity analysis: Test how changes to key variables, like price or demand, affect forecasts to identify potential risks and prepare for different scenarios.
- Use ranges, not single numbers: Present forecasts as a range to account for uncertainty, offering a clearer picture of potential outcomes for decision-making.
“I ran an experiment showing positive lift but didn’t see the results in the bottom line.”

I think we’ve all had this experience: We set up a nice, clean A/B test to check the value of a feature or a creative. We get the results back: 5% lift, statistically significant. Nice! Champagne bottle pops, etc., etc. Since we got the win, we bake the 5% lift into our forecast for next quarter, when the feature will roll out to the entire customer base, and we sit back to watch the money roll in.

But then, shockingly, we do not actually see that lift. When we look at our overall metrics we may see a very slight lift around when the feature got rolled out, but then it goes back down, and it seems like it could just be noise anyway. Since we had baked our 5% lift into our forecast, and we definitely don’t have the 5% lift, we’re in trouble. What happened?

The big issue here is that we didn’t consider uncertainty. When interpreting the results of our A/B test, we said “It’s a 5% lift, statistically significant,” which implies something like “It’s definitely a 5% lift.” Unfortunately, this is not the right interpretation. The right interpretation is: “There was a statistically significant positive (i.e., >0) lift, with a mean estimate of 5%, but the experiment is consistent with a lift ranging from 0.001% to 9.5%.” Because of well-known biases associated with this type of null-hypothesis testing, it’s most likely that the actual result was some very small positive lift, but our test just didn’t have enough statistical power to narrow the uncertainty bounds very much.

So, what does this mean? When you’re doing any type of experimentation, you need to be looking at the uncertainty intervals from the test. You should never just report out the mean estimate from the test and say that’s “statistically significant.” Instead, you should always report out the range of results that are compatible with the experiment. When actually interpreting those results in a business context, you generally want to be conservative and assume the actual results will come in on the low end of the estimate from the test, or, if it’s mission-critical, design a test with more statistical power to confirm the result.

If you just look at the mean results from your test, you are highly likely to be led astray! You should always look first at the range of the uncertainty interval and only check the mean last.

To learn more about Recast, you can check us out here: https://lnkd.in/e7BKrBf4
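To make the "report the interval, not just the mean" point concrete, here is a minimal Python sketch. The conversion counts are made up, and the delta-method normal approximation is my own illustrative choice, not Recast's methodology; it turns two conversion counts into a relative-lift estimate with a 95% interval.

```python
# Minimal sketch: report the uncertainty interval on relative lift,
# not just the point estimate. Counts below are hypothetical.
import numpy as np
from scipy import stats

conv_a, n_a = 10_000, 200_000   # control: conversions, visitors (made up)
conv_b, n_b = 10_500, 200_000   # treatment: conversions, visitors (made up)

p_a, p_b = conv_a / n_a, conv_b / n_b
lift = p_b / p_a - 1                      # mean estimate of relative lift

# Delta-method standard error of log(p_b / p_a), then a 95% interval on the ratio.
se_log_ratio = np.sqrt((1 - p_a) / conv_a + (1 - p_b) / conv_b)
z = stats.norm.ppf(0.975)
lo = np.exp(np.log(p_b / p_a) - z * se_log_ratio) - 1
hi = np.exp(np.log(p_b / p_a) + z * se_log_ratio) - 1

print(f"Mean lift estimate: {lift:.1%}")
print(f"95% interval on lift: {lo:.1%} to {hi:.1%}")
# When baking the lift into a forecast, plan around the low end (lo), not the mean.
```

With these counts the mean estimate is 5%, but the interval spans roughly 2% to 8%; the conservative number to bake into a forecast is the low end, not the 5%.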
STOP making these 5 forecasting mistakes:

❌ #1 Only forecasting top-down
Accuracy comes from eliminating bias. To get there, prepare both a top-down and a bottom-up forecast. The bottom-up looks at each project individually. The top-down starts with an end result in mind and works backward to arrive at individual business drivers. Combining both types reduces bias because top-downs tend to be aggressive, and bottom-ups are often too risk-averse since buffers are built in at every layer.

❌ #2 No accountability
If Finance runs the forecast independently, without business leaders feeling accountable for delivering it, it doesn’t add much value. And most likely, accuracy suffers as a result. The best forecasts are fully aligned with the business. For instance, the assumptions going into a revenue forecast should be co-developed with the sales or marketing teams, and the department heads need to be held accountable for delivering it. As a result, you get more than better accuracy: someone will take action when performance is off.

❌ #3 Assumption stacking
The more uncertainty in your business, the fewer assumptions you should include in your forecast. If you stack multiple variables on top of each other, their margins of error multiply. Additionally, it’s much easier to analyze your business drivers if you isolate the variables.

❌ #4 Skipping sensitivity analysis
It’s our job to quantify the risk of a forecast. That’s even more important when there is a lot of uncertainty. The easiest way to do that is by changing individual inputs and noting how much impact that has on the forecast. For example, if a 5% change in the price-sensitivity assumption moves the revenue forecast by 25%, that’s a major risk you’ll need to call out.

❌ #5 Showing only point estimates
Sometimes, analysts mistakenly assume ranges make it look like they aren’t confident in their forecast. However, a well-measured range is critical for two reasons: One, it shows the order of magnitude of uncertainty (i.e., risk) in the forecast. That means your CFO knows what’s a conservative estimate to communicate to investors. And two, it enables scenario planning. For example, it allows leaders to plan contingency measures ahead of time if results come in at the lower end of the range.

In sum, to level up your forecasts:
1️⃣ Remove bias by comparing top-down vs. bottom-up
2️⃣ Create accountability by aligning the forecast with the business
3️⃣ Higher uncertainty requires fewer assumptions
4️⃣ Estimate the risk by running a sensitivity analysis
5️⃣ Provide ranges instead of point estimates
(See the code sketch after this post for #4 and #5.)

-----
🛫 If you’d like to learn more from me: Subscribe to my weekly newsletter!
Join 20,000+ Finance & Accounting professionals and get:
➢ 3 FP&A ideas from me
➢ 2 insights from others, and
➢ 1 infographic in your inbox
...every Tuesday.
👉 Subscribe (free) at: https://lnkd.in/dredP3d5
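For #4 and #5, here is a minimal Python sketch with made-up numbers. The toy revenue model, the +/-5% shocks, and the Monte Carlo range are illustrative assumptions, not the author's workflow; they simply show one way to flex inputs one at a time and to present a range instead of a single point.

```python
# Minimal sketch of #4 (sensitivity analysis) and #5 (ranges, not points).
# All figures are hypothetical.
import numpy as np

def revenue(price, volume, churn_rate):
    """Toy driver-based revenue model: retained volume times price."""
    return price * volume * (1 - churn_rate)

base = {"price": 100.0, "volume": 12_000, "churn_rate": 0.08}
base_revenue = revenue(**base)

# Sensitivity: flex each driver by +/-5% and report the impact on revenue.
for driver in base:
    for shock in (-0.05, 0.05):
        scenario = dict(base)
        scenario[driver] *= 1 + shock
        impact = revenue(**scenario) / base_revenue - 1
        print(f"{driver} {shock:+.0%} -> revenue {impact:+.1%}")

# Range instead of a point estimate: sample drivers from rough uncertainty
# bands and report the 10th-90th percentile of outcomes.
rng = np.random.default_rng(0)
sims = revenue(
    price=rng.normal(100, 3, 10_000),
    volume=rng.normal(12_000, 800, 10_000),
    churn_rate=rng.uniform(0.05, 0.11, 10_000),
)
p10, p90 = np.percentile(sims, [10, 90])
print(f"Base case: {base_revenue:,.0f}; 10th-90th percentile: {p10:,.0f} to {p90:,.0f}")
```

The driver whose 5% shock moves revenue the most is the risk to call out, and the percentile band is the range to hand leadership instead of a single number.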