Case Study · Kanban · Cycle Time · Monte Carlo Forecasting

How a Team Cut Cycle Time by 42%: A Kanban Case Study

Maykel Gomez · December 18, 2025 · 8 min read

A newly formed software team at an energy technology company had volatile delivery, high WIP, and standups that generated updates but never moved work. The team was capable. The system around them was not. Over a six-month engagement, I helped reduce average cycle time from 40.72 days to 18.76 days, cut the 85th percentile from 62 days to 36 days (a 42% reduction), and raise throughput from 13.33 to 21.25 items per month, a 59% gain in delivery pace. This article walks through what changed, week by week, and how it showed up in the data. More results from similar engagements are in the case studies section.


Starting Point: A Forming Team With No Fair Baseline

This was a team that had been together for only a few months. Early-stage teams produce noisy data: work items are often inconsistently sized, the board workflow is still being negotiated, and cycle time reflects organizational friction as much as team performance. That is normal. It does not mean you wait for stability before measuring.

The baseline at the start of the engagement told a clear story: average cycle time 40.72 days, median 32 days, 85th percentile 62 days, 98th percentile 105 days, throughput 13.33 items per month. On the scatterplot the distribution was wide: some items finished in under 10 days, others ran past 80. That kind of spread is not a capacity problem or a talent problem. It is a system problem. Wide cycle time distributions signal unpredictable handoffs, uncontrolled WIP, and work entering the system before it is ready.

Most teams wait for "stability" before they start measuring, under the assumption that early data will mislead them. What works is measuring now, using percentiles to handle the variability, and treating the baseline as a starting line rather than a verdict. The guide to Agile metrics that matter covers why percentiles are the right lens for noisy early-stage data.


The First Fix: Make The Standup Move Work To The Right

The standup I observed in week one ran 20 minutes with nine people and produced zero decisions. Everyone updated in turn, the board was not referenced, and items that had been sitting for 25 or 30 days were never named. The meeting was optimized for information sharing, not for removing friction from the work.

I changed one rule: every item discussed in the standup must either move one column to the right before the meeting ends, get a named next step assigned to a specific person, or get help assigned from someone who can unblock it. That rule alone changed the tone within the first three days. Stuck items became visible. People stopped reporting on their work and started making decisions about it.

To make aging visible, we tracked one number daily: the count of items older than twice the median cycle time. With a median of 32 days, that threshold was 64 days. It started at 4 items. Within two weeks it was down to 1. When the median improved to 20 days, we tightened the threshold to 40 days and the scrutiny continued.

Most standups are structured to reinforce individual silos: each person reports on their work in isolation. What works is a flow-first format that surfaces stuck work before it becomes sprint carryover or a missed date. The standup article covers the full script, but the version we ran here applied the same logic in the first two weeks.


The Second Fix: Lower WIP Without Starting With Hard Limits

At baseline, average WIP was running above 20 items. For a team delivering 13 items per month, that means the average item competed with 19 others for attention on any given day. Little's Law makes the outcome predictable: high WIP inflates cycle time directly.

Imposing a hard WIP limit on a team that has never operated with one tends to generate compliance theater: the numbers on the board say 14, but the real WIP is still 20 because people have items in their heads that never made it onto the board. I started differently.

Every time someone finished an item, I asked a single question: "Who can I help finish something today?" That question redirected attention from starting new work to finishing existing work. Over four to six weeks it built the habit. People started looking for help opportunities rather than pulling new items automatically.

We ran a weekly WIP snapshot, counting everything actively in progress, and tracked it over time. The shift was visible by week five. Average WIP dropped from above 20 toward 14. Throughput responded: 13.33 items per month at baseline grew to 21.25 by the later checkpoint, a 59% increase. Cycle time followed. For the specific tactics that accelerate WIP reduction (soft limits, aging WIP reviews, explicit pull policies), the cycle time reduction guide covers each in detail.


Make Policies Explicit So Work Stops Getting Stuck By Surprise

The biggest source of hidden cycle time in this engagement was not development work. It was handoffs without rules. Work moved into the "in review" column and stayed there indefinitely because there was no policy for how long review should take, no escalation trigger, and no explicit definition of what "done" meant for each column.

We wrote explicit policies for each board column and attached them to the board as a linked document. The structure was consistent across columns:

  1. Entry criteria. What must be true before an item enters this column? For development: design finalized, API spec confirmed, dependencies acknowledged. For review: code complete and self-reviewed, test cases written.
  2. Exit criteria. What must be true to move to the next column? For review: at least one approval from a named reviewer, no open blocking comments.
  3. Blocked rule. Any item in a blocked state for more than 24 hours gets named at the next standup and gets a resolution owner assigned that day.
  4. Review SLA. Code review: 48 hours maximum from submission. Design review: 72 hours maximum. Both tracked and visible on the board.

The policies did not require a new tool. They lived in a shared doc. What they did was eliminate the most common source of queue time: work sitting in a column with no clarity on what needed to happen next. Within three weeks of implementing column policies, average time spent in the review queue dropped by roughly 40%.


Use Flow Metrics And Monte Carlo To Reset Forecasts Leaders Can Trust

By the end of the improvement window, the numbers had moved substantially. Average cycle time: 18.76 days (down from 40.72). Median: 15 days (down from 32). 85th percentile: 36 days (down from 62). Throughput: 21.25 items per month (up from 13.33).

The metric that mattered most to leadership was not any of those numbers directly; it was the forecast. We ran a Monte Carlo simulation at baseline using throughput and cycle time data. The simulation put delivery of the remaining backlog on a multi-year horizon. That number had been driving escalating stakeholder conversations for months.

After 90 days of system changes, we reran the simulation with updated data. The new forecast moved to within the same year. The team had not grown. No new tools were purchased. The work itself had not changed. What changed was how reliably and quickly work moved through the system.

Most teams report improvement with qualitative language: "we feel faster," "the team is more focused." What works is showing percentile improvements alongside a forecast shift tied to actual delivery dates. That is the evidence that converts a leadership audience from skeptical to supportive. The cycle time reduction guide includes the framework for running this kind of data conversation.


Not every engagement goes this smoothly, but when a team actually commits to WIP limits, the math tends to cooperate.

See both case studies and their results → Case Studies

The standup change, the WIP habit, the column policies, and the Monte Carlo forecast are all repeatable. They do not require a specific methodology, a new platform, or a transformation program. They require a system view and the discipline to measure the right things. Explore the services page to see how this diagnostic and workshop approach is structured, or Book a Strategy Session and I will start with your data, the same way I started with this team.


Apply These Ideas

Want to apply these ideas to your team?

Book a Strategy Session for a focused conversation about your team’s next steps.
