Velocity feels precise, but it mostly measures how a team scores its own work. Cycle time connects directly to customer-facing outcomes: when something started, when it finished, and how long it really took. If you have ever missed a delivery date despite hitting your velocity target, the gap between those two things is the reason. You will walk away from this article with a clear way to replace velocity-driven planning with cycle-time forecasting you can run in 30 minutes.
So Which One Should You Actually Track?
Velocity measures output in points, but points do not map to calendar time, lead time, or delivery risk. The number goes up when teams score higher, when they split stories finer, or when they extend sprint length. None of those changes move a feature to users any faster.
Consider two teams that both average 40 points per sprint. Team A ships weekly. Team B ships monthly. Same velocity, but over 8 weeks Team A has shipped 8 times and Team B has shipped twice. The velocity number cannot tell the difference. That is the core problem.
When leaders use velocity to promise dates ("at 40 points per sprint we'll finish by Q3"), they are treating a scoring metric as a delivery signal. It is not. At best, velocity is a useful internal capacity signal for planning how much work a team might take on in a given sprint. The moment it leaves the team as an external commitment, it starts lying. For a deeper look at why this happens and what metrics actually belong in leadership conversations, the guide to Agile metrics that actually matter covers the full picture.
Cycle Time Gives You A Date-Driven Signal
Cycle time is a time measure. That makes it directly usable for delivery forecasting in a way that story points never will be. When you know that your team's 85th percentile cycle time is 12 days, you know that 85% of similar items finish in 12 days or less. That is a statement you can take to a stakeholder and defend.
The most common mistake teams make is planning off the average. If the average cycle time is 8 days, a team will commit to delivery dates built around 8-day assumptions. But averages hide the tail: an item that runs 25 days gets folded into that 8-day figure and vanishes from the plan. The 85th percentile captures that tail and gives you a planning guardrail that accounts for real-world variability: blocked dependencies, scope creep, review cycles, and all the other things that stretch work beyond the median.
The practical shift is simple: stop committing to external dates based on average anything. Use percentiles. Choose 70% for internal planning targets where a miss is manageable, and 85% for commitments that trigger downstream dependencies, customer deliveries, or contractual milestones.
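To make the average-versus-percentile gap concrete, here is a small sketch in Python. The cycle-time sample is hypothetical, invented for illustration; the percentile function uses the simple nearest-rank method, one of several reasonable definitions.

```python
import statistics

def percentile(values, pct):
    """Nearest-rank percentile: smallest sample value such that at
    least pct percent of the sample is at or below it."""
    ordered = sorted(values)
    rank = max(1, -(-len(ordered) * pct // 100))  # ceil(n * pct / 100)
    return ordered[rank - 1]

# Hypothetical sample: cycle times, in days, of recently completed items.
cycle_times = [3, 4, 4, 5, 5, 6, 6, 7, 8, 8, 9, 10, 12, 14, 25]

print("mean:", round(statistics.mean(cycle_times), 1))  # 8.4
print("p70 :", percentile(cycle_times, 70))             # 9
print("p85 :", percentile(cycle_times, 85))             # 12
```

Note how the single 25-day outlier barely moves the mean, while the 85th percentile lands at 12 days, the number you can actually defend in a commitment conversation.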
The Three Charts That Outpredict Story Points
You do not need a new tool stack to do this. Three views, built from your existing issue tracker data, expose predictability, risk, and hidden queues better than any burn chart or velocity report.
1. Cycle Time Scatterplot. Plot each completed item against the date it finished and the number of days it took. Over 60 days, a healthy team shows a tight horizontal band, say 3 to 5 days with occasional outliers. A team with a systemic problem shows a widening band: 3 to 5 days on the left, 3 to 25 days on the right, even if velocity has looked "stable" the entire time. The scatterplot shows volatility that aggregate metrics hide.
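The data behind the scatterplot is minimal: for each completed item, the finish date and the elapsed days. A sketch of the prep step, using hypothetical start and finish dates in place of a real tracker export:

```python
from datetime import date

# Hypothetical tracker export: (started, finished) per completed item.
items = [
    (date(2024, 3, 1), date(2024, 3, 5)),
    (date(2024, 3, 2), date(2024, 3, 6)),
    (date(2024, 3, 4), date(2024, 3, 29)),  # the outlier an average hides
]

# Each scatterplot point: (finish date on the x-axis, elapsed days on the y-axis).
points = [(done, (done - start).days) for start, done in items]
for done, days in points:
    print(done.isoformat(), days)
```

Feed these pairs to any plotting tool; the chart itself matters less than the habit of looking at per-item elapsed time rather than an aggregate.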
2. Aging WIP. For every item currently in progress, track how many days it has been active. If your median cycle time is 6 days, any item older than 12 days (2x the median) is an escalation signal: it is already an outlier and will only get worse if left unaddressed. Aging WIP is where blocked items and scope sprawl show up before they become sprint carryover. For more on WIP limits and how they interact with cycle time, the strategies for reducing cycle time in software teams go into the implementation details.
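The 2x-median escalation rule is a one-liner to automate. A sketch with hypothetical item names and dates:

```python
from datetime import date

def aging_alerts(in_progress, median_cycle_days, today):
    """Flag in-progress items older than 2x the median cycle time."""
    threshold = 2 * median_cycle_days
    return [
        (name, (today - started).days)
        for name, started in in_progress
        if (today - started).days > threshold
    ]

# Hypothetical in-progress items with their start dates.
wip = [("AUTH-12", date(2024, 3, 1)),
       ("PAY-7", date(2024, 3, 18)),
       ("UI-3", date(2024, 3, 20))]

alerts = aging_alerts(wip, median_cycle_days=6, today=date(2024, 3, 22))
print(alerts)  # AUTH-12: 21 days active, well past the 12-day guardrail
```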
3. Throughput Run Chart. Count the number of items completed per week and plot it over time. A stable team shows a consistent band, say 8 to 10 items per week for 6 weeks. A "big sprint" that was point-heavy might show only 4 items completed that week even though the team felt busy. Throughput reveals delivery pace. Combined with cycle time percentiles, it gives you the inputs for a real forecast.
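Building the run chart is a matter of bucketing completion dates by week and counting. A sketch, again with invented completion dates, using ISO week numbers as the buckets:

```python
from collections import Counter
from datetime import date

# Hypothetical completion dates pulled from the tracker.
completions = [date(2024, 3, d) for d in
               (1, 1, 2, 4, 5, 5, 6, 7, 8, 11, 12, 12, 13, 14, 15, 15)]

# Bucket by (ISO year, ISO week) and count items finished per week.
per_week = Counter(d.isocalendar()[:2] for d in completions)
for (year, week), count in sorted(per_week.items()):
    print(f"{year}-W{week:02d}: {count} items")
```

A flat band of counts is the healthy signal; a week that drops well below the band is the replan conversation trigger discussed below.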
The routine is a 30-minute weekly review using all three charts. Look at the scatterplot for new outliers, check aging WIP for items that need intervention, and read throughput for any dip that warrants a replan conversation. That habit is worth more than any retrospective action item about improving velocity.
A 30-Minute Forecast Method Leaders Can Trust
If you have cycle time history, you can forecast delivery dates with fewer assumptions than story points require. Here is the method in five steps.
1. Define Work Item Types. Pick one or two item classes, for example, "small change" versus "feature", and do not mix them in one forecast. Cycle time distributions are only comparable within the same type of work. Mixing them creates noise that makes percentiles unreliable.
2. Pull A Clean Sample. Use the last 30 to 60 completed items per class. If you only have 15, say so explicitly and widen your risk buffer. Thin samples produce wide confidence intervals and should be communicated that way.
3. Choose A Risk Percentile. Use the 70th percentile for internal targets where a miss is recoverable. Use the 85th percentile for external commitments. If the 70th is 9 days and the 85th is 12 days, you now have a concrete conversation: "we can commit to 9 days with moderate confidence or 12 days with high confidence, which risk level does this stakeholder need?"
4. Forecast By Count, Not Points. If throughput is 9 items per week and you need to deliver 72 items, you have roughly 8 weeks of capacity. Apply cycle time percentiles to estimate when the last items finish. This is more honest than summing story points and dividing by velocity: it uses your actual delivery history. For teams that want to make this calculation fast and visual, the ROI and forecasting tools on the tools page give a starting point.
5. Set A Replan Trigger. Do not wait for the date to slip. Reforecast automatically when WIP exceeds a defined limit, for example, more than two items per person, or when throughput drops 30% week-over-week from the baseline. A replan trigger turns a surprise into a managed conversation.
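Steps 4 and 5 together reduce to two small functions. This is an illustrative sketch, not a full forecasting tool; the parameter defaults mirror the thresholds suggested above.

```python
import math

def forecast_weeks(remaining_items, weekly_throughput):
    """Whole weeks needed to drain the remaining work at observed throughput."""
    return math.ceil(remaining_items / weekly_throughput)

def needs_replan(wip_count, team_size, throughput_now, throughput_baseline,
                 wip_per_person=2, drop_threshold=0.30):
    """Reforecast when WIP exceeds the limit or throughput drops
    30% week-over-week from the baseline."""
    over_wip = wip_count > wip_per_person * team_size
    big_drop = throughput_now < (1 - drop_threshold) * throughput_baseline
    return over_wip or big_drop

print(forecast_weeks(72, 9))  # 8 weeks, matching the step-4 example
print(needs_replan(wip_count=13, team_size=5,
                   throughput_now=9, throughput_baseline=9))  # True: over WIP
```

Running the trigger check in the 30-minute weekly review means a slip shows up as a calm reforecast, not a surprise at the deadline.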
Most teams forecast from estimates and hope execution matches. What works is forecasting from how the system actually performs, then managing WIP actively to keep it that way.
When Velocity Still Helps, And How To Keep It From Lying
Velocity is not useless. Inside a stable team with consistent story sizing, it can support short-term capacity conversations. "Last sprint we took on 35 points and finished 33; let's plan to 33 again." That is a reasonable internal signal. The problem starts when it crosses the team boundary and becomes a delivery promise.
A common pattern: a team "improves" from 30 to 45 points over 4 sprints by splitting stories more finely. Cycle time stays flat at 10 to 12 days. Release frequency does not change. Stakeholders celebrate rising velocity while delivery pace has not moved at all. This is estimation theater, and it is directly connected to the kind of backlog dysfunction described in why your backlog is lying to you.
Three guardrails keep velocity from becoming actively misleading.
1. Keep Points Private. Do not compare teams on velocity. If you need a cross-team comparison, use throughput per week instead: it is objective and comparable. Team A delivering 9 items per week and Team B delivering 6 items per week is a real measurement. Team A at 42 points and Team B at 38 points is not.
2. Track Split Rate. If the average number of stories per sprint increases by 50% and velocity rises, treat that as a scoring change until cycle time improves to confirm the gain is real. Real improvement shows up in both metrics. Scoring changes show up in velocity alone.
3. Tie Planning To Cycle Time. Require every external date commitment to include a cited percentile cycle time and a throughput window. "Based on an 85th percentile cycle time of 11 days and a throughput of 9 items per week over the last 6 weeks, we're committing to this delivery." That is a defensible forecast. A velocity-to-date calculation is not.
If you want to set up a cycle-time scatterplot, an aging WIP view, and a 30-minute weekly review cadence for your team, that is exactly the kind of practical implementation I run in an engagement. The percentile method above can be applied to data you already have; you do not need to wait for a new tool or a transformation program to start forecasting better. Explore the services page to see how this fits into a broader engagement, or Book a Strategy Session and I will look at your team's actual data in the first conversation.