Forecasting without magic: a four-week rolling method
For most of last year, our quarterly forecast was wrong by an average of 22%. Sometimes high, sometimes low — never close. Here’s the simpler method that got us under 8%, and what changed in the team’s behavior to make it work.
The problem with stage-weighted forecasts
The default forecasting method on most B2B teams is some version of: take every deal in the pipeline, multiply its value by the probability assigned to its stage, sum the result. We did this for years. It produced a number every Friday. The number was almost always wrong.
The issue isn’t the math — it’s that the inputs lie. A “Stage 4 — Negotiation” deal at 70% probability often has a real probability of either 95% or 10%, and almost never 70%. Averaging these together produces a forecast that’s wrong on every single deal but right on the aggregate, except when it isn’t — which is whenever a small number of large deals concentrate at one end of the distribution. Which is most quarters.
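A toy calculation makes the failure mode concrete. The figures below are invented for illustration: a stage-weighted sum that happens to land near reality while the “real” probabilities balance out, then collapses the moment one large deal sits at the wrong end of the distribution.

```python
# Toy illustration of the stage-weighted failure mode.
# All figures are invented; "real_p" stands in for the probability a
# well-informed observer would assign, which the CRM stage hides.

deals = [
    # (value in $, stage probability, "real" probability)
    (50_000, 0.70, 0.95),
    (50_000, 0.70, 0.95),
    (50_000, 0.70, 0.10),
    (50_000, 0.70, 0.10),
]

stage_weighted = sum(v * stage_p for v, stage_p, _ in deals)
realistic = sum(v * real_p for v, _, real_p in deals)
print(stage_weighted, realistic)  # 140000 vs 105000: close-ish, by luck

# Now let one large deal sit at the wrong end of the distribution.
deals.append((500_000, 0.70, 0.10))
stage_weighted = sum(v * stage_p for v, stage_p, _ in deals)
realistic = sum(v * real_p for v, _, real_p in deals)
print(stage_weighted, realistic)  # 490000 vs 155000: the forecast collapses
```

The aggregate only works while the optimistic and doomed deals cancel out; one big deal breaks the cancellation.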
The four-week rolling method
The method that worked for us has three rules:
- Forecast only what you expect to close in the next four weeks.
- For each deal, the rep writes one sentence explaining why it will close.
- The sentence is reviewed by a peer rep, not by the manager.
That’s the whole method. Stage probabilities are not used. Historical conversion rates are not used. The forecast is the sum of the deal values, with each deal’s confidence assessed individually by a sentence in plain English.
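In code terms, the whole thing is a list of records and a sum. A minimal sketch, with field names and deals invented for illustration (this is the shape of the sheet, not a prescribed tool):

```python
from dataclasses import dataclass

# Minimal sketch of the forecast sheet; field names and deals are
# invented for illustration.
@dataclass
class Deal:
    name: str
    value: float       # deal value in $
    sentence: str      # one plain-English sentence: why this will close
    reviewed_by: str   # the peer rep who reviewed the sentence

forecast = [
    Deal("Acme renewal", 40_000,
         "Procurement signed off Tuesday, legal redlines came back clean.",
         reviewed_by="Dana"),
    Deal("Globex expansion", 25_000,
         "Security review passed Monday, signature scheduled for Thursday.",
         reviewed_by="Sam"),
]

# No stage weights, no historical conversion rates: the forecast is the sum.
committed = sum(d.value for d in forecast)
print(f"Four-week forecast: ${committed:,.0f}")  # $65,000
```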
Why each rule matters
Four weeks, not a quarter
A 13-week forecast asks the rep to predict things they cannot predict. A four-week forecast asks them to predict things they have already had concrete conversations about. The accuracy difference between these is enormous. We track both numbers, and the four-week number is consistently within 10% while the quarter number drifts.
The sentence
The sentence is the part of the method that does most of the work. A good sentence sounds like:
“Procurement signed off Tuesday, legal redlines came back clean, target signature date is Friday.”
A bad sentence sounds like:
“Champion is excited and we’re aligned on next steps.”
The first sentence describes specific events that have already happened. The second describes feelings. The first kind of deal closes about 80% of the time on our team; the second kind closes about 25% of the time. Forcing reps to write the sentence makes the difference visible without anyone having to call out anyone’s deals.
Peer review, not manager review
This was the part we resisted the longest, and it turned out to be the biggest unlock. When managers review forecast deals, the conversation has a power dynamic baked in: reps defend their numbers, managers push back. The result is theatre. When peer reps review each other’s sentences, the conversation is honest, because nobody’s comp depends on convincing the other person.
We do peer reviews in pairs every Wednesday, 30 minutes blocked on the calendar. Each rep reads the other’s deals and asks one question per deal: “What would have to happen for this not to close?” If the answer is “nothing, really, it’s a done deal,” we keep the deal in the forecast. If the answer is “the buyer’s VP hasn’t actually approved budget,” we move it out.
What we measure
Every Monday morning we report three numbers:
- Committed. Sum of all deals in the four-week forecast.
- Best case. Committed plus all deals where the sentence is “close, but one specific thing has to happen first.”
- Worst case. Just the deals where the sentence describes events that have already happened.
At quarter close, we compare actual revenue to all three. The pattern over four quarters: actuals consistently land between worst case and committed, closer to committed. Best case is almost never hit, which is fine — that’s what makes it best case.
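Computed from the same sheet, the three numbers are three filters over one list. A sketch, assuming each deal carries a label that peer review assigns (the label names are ours, and they are set by humans, not parsed from the sentences):

```python
# Monday numbers, sketched under our own labels:
#   "happened"  - sentence describes events that already happened
#   "expected"  - in the four-week forecast, but the sentence is
#                 forward-looking
#   "one_thing" - close, but one specific thing has to happen first;
#                 kept out of committed, counted only in best case

deals = [
    # (value in $, label)
    (40_000, "happened"),
    (25_000, "happened"),
    (30_000, "expected"),
    (60_000, "one_thing"),
]

committed = sum(v for v, s in deals if s in ("happened", "expected"))
best_case = committed + sum(v for v, s in deals if s == "one_thing")
worst_case = sum(v for v, s in deals if s == "happened")

print(f"Committed:  ${committed:,.0f}")   # $95,000
print(f"Best case:  ${best_case:,.0f}")   # $155,000
print(f"Worst case: ${worst_case:,.0f}")  # $65,000
```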
What we stopped doing
We stopped using probability percentages on individual deals. They added no information and gave reps an easy way to hedge. We stopped doing “commit / upside / pipeline” categorization in the CRM — everyone’s definition was different.
We stopped doing the Friday all-hands forecast meeting. The information was already in the sheet by Wednesday and the meeting was just reading it out loud.
What we still get wrong
This method is bad at predicting unusually large deals. When a single $500k deal could swing the quarter, no method based on rep judgment is reliable. For those, we just track the deal in a separate row at the top of the forecast sheet, and the manager owns it personally. The rest of the pipeline forecast doesn’t pretend to know.
We’re also bad at predicting deals where the buying committee changes mid-process. There’s no fix for this except to accept the variance and hold a buffer.
Try this if
- Your forecast misses by >15% in either direction more than once a year.
- Your reps complain that “the CRM probabilities don’t mean anything.”
- Your forecast meeting feels like theatre.
Don’t try this if you have an enterprise pipeline where deal cycles are longer than 90 days — the four-week window doesn’t apply. For those, the equivalent method is a rolling-quarter forecast with the same sentence rule, but the calibration is different and we haven’t finished writing that one up yet.