Kanban · Flow Metrics · Executive Reporting

The Service Delivery Review: The Kanban Cadence Not Many Talk About

Maykel Gomez · January 28, 2026 · 8 min read

Most organizations I work with can tell you what happened in the last sprint. Fewer can explain what happened to the delivery system over the last month. That gap is exactly where the Service Delivery Review belongs.

A Service Delivery Review is one of the Kanban Method's cadences, and it remains among the least practiced. It is a periodic review, usually monthly, where teams and leaders inspect delivery system health using flow data: cycle time trends, throughput, demand versus capability, aging work, and reliability signals. It is not a status meeting, and it is not a sprint review with new labels. It is a system review.

When teams skip this cadence, coaching stays invisible. Improvements happen in pockets, but leadership does not see the pattern, does not understand the constraint, and cannot make structural decisions with confidence.

What a Service Delivery Review Is (and Is Not)

A sprint review asks, "What did we ship?" A Service Delivery Review asks, "How is the system that ships work performing?"

That distinction sounds small, but it changes the conversation:

  • Sprint review: output and stakeholder feedback on completed work.
  • Service Delivery Review: predictability, flow health, demand patterns, and risk trends over time.

If your team is shipping, but cycle time keeps rising, that is not a sprint review issue. If half the capacity is going to expedited work, that is not a sprint review issue. If aging work is creeping up every month, that is not a sprint review issue. Those are system issues, and this cadence is designed to surface them.

Why Most Teams Skip It

In practice, I see three reasons:

  1. The data pipeline is weak. Teams have Jira or Azure DevOps data, but no reliable views for cycle time distributions, throughput trends, or aging items.
  2. The format is unclear. Leaders know they need "better reporting" but do not have a repeatable structure for a 30-45 minute system review.
  3. It gets conflated with delivery demos. When this meeting turns into status updates, it loses value quickly.

The result is predictable. Teams keep improving locally, leaders keep asking for confidence they do not have, and everyone feels misaligned even though the work is real.

What Goes Into a Strong Service Delivery Review

I use a straightforward format that stays consistent month to month.

1. "Why are we here?" framing (2 minutes)

Start with a single sentence: this is a transparency, inspection, and adaptation cadence for the delivery system. Not a blame session. Not a sprint recap. The purpose is to see whether the system is getting healthier and where leadership decisions are needed.

2. System visualization (5 minutes)

Show where this team or program sits in the larger value chain:

  • Upstream request sources
  • Workflow boundaries
  • Handoffs and external dependencies
  • Where expedited demand enters

Without this context, metrics look abstract. With it, leaders can connect data to operational reality.

3. Cycle time distribution and percentile view (8 minutes)

Use scatter plots and percentile bands, not averages alone. Review:

  • 50th percentile for typical delivery behavior
  • 85th percentile for planning commitments
  • Outlier patterns by workflow stage or work type

This is where you move from "we feel slower" to "we know where and how predictability degraded."
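To make the percentile view concrete, here is a minimal sketch of the math using hypothetical item dates and only the Python standard library (the "+1" day count is one common Kanban convention, counting the start day; your tooling may differ):

```python
from datetime import date
from statistics import quantiles

# Hypothetical completed items: (started, completed) dates.
items = [
    (date(2026, 1, 1), date(2026, 1, 4)),
    (date(2026, 1, 2), date(2026, 1, 10)),
    (date(2026, 1, 3), date(2026, 1, 6)),
    (date(2026, 1, 5), date(2026, 1, 21)),
    (date(2026, 1, 6), date(2026, 1, 9)),
]

# Cycle time in calendar days, counting the start day.
cycle_times = [(done - start).days + 1 for start, done in items]

# Percentile cut points; index 49 is the 50th, index 84 the 85th.
pct = quantiles(cycle_times, n=100, method="inclusive")
p50, p85 = pct[49], pct[84]
print(f"50th percentile: {p50} days, 85th percentile: {p85} days")
```

The same two numbers drive the conversation: the 50th percentile describes typical behavior, the 85th is the value you can responsibly commit to.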

4. Throughput trend and delivery rate (5 minutes)

Show throughput by week or month and compare trend direction against cycle time. Throughput alone does not fully show system health, but paired with cycle time it reveals whether you are improving flow or just pushing volume.
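Computing the throughput series is straightforward once you have completion dates; a sketch with hypothetical data:

```python
from collections import Counter
from datetime import date

# Hypothetical completion dates pulled from the tracker.
completed = [
    date(2026, 1, 5), date(2026, 1, 7), date(2026, 1, 8),    # ISO week 2
    date(2026, 1, 13), date(2026, 1, 15),                    # ISO week 3
    date(2026, 1, 20), date(2026, 1, 21), date(2026, 1, 23), # ISO week 4
]

# Throughput = items finished per ISO week.
throughput = Counter(d.isocalendar()[1] for d in completed)
for week in sorted(throughput):
    print(f"Week {week}: {throughput[week]} items")
```

Plot this series next to the cycle time percentiles: rising throughput with flat or falling cycle time is improving flow; rising throughput with rising cycle time is just volume.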

5. Demand vs. capability breakdown (8 minutes)

Split incoming demand into planned versus unplanned/expedited work. Then inspect ratio movement over time. This is one of the most useful leadership levers because it exposes whether teams are using capacity for forward value or reactive recovery.
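A sketch of the ratio calculation, assuming each item carries a hypothetical planned/unplanned tag at intake (in Jira this might be a label or issue type):

```python
from collections import Counter

# Hypothetical items tagged at intake.
items = [
    {"id": "A-1", "month": "2026-01", "demand": "planned"},
    {"id": "A-2", "month": "2026-01", "demand": "unplanned"},
    {"id": "A-3", "month": "2026-01", "demand": "planned"},
    {"id": "A-4", "month": "2026-02", "demand": "unplanned"},
    {"id": "A-5", "month": "2026-02", "demand": "unplanned"},
    {"id": "A-6", "month": "2026-02", "demand": "planned"},
]

# Count demand type per month, then report the unplanned share.
by_month = Counter((i["month"], i["demand"]) for i in items)
for month in sorted({i["month"] for i in items}):
    planned = by_month[(month, "planned")]
    unplanned = by_month[(month, "unplanned")]
    share = unplanned / (planned + unplanned)
    print(f"{month}: {share:.0%} unplanned")
```

Month-over-month movement in that share is the leadership signal: a climbing unplanned percentage means capacity is draining into reactive recovery.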

6. Aging work and current risk (5 minutes)

Highlight items exceeding aging thresholds. This gives an immediate risk view and clear follow-up actions before risk becomes slippage.
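Flagging aged items can be sketched like this, assuming a hypothetical threshold derived from the team's 85th-percentile cycle time:

```python
from datetime import date

TODAY = date(2026, 1, 28)
AGING_THRESHOLD_DAYS = 12  # hypothetical: the team's 85th-percentile cycle time

# Hypothetical in-progress items with their start dates.
wip = [
    {"id": "A-7", "started": date(2026, 1, 2)},
    {"id": "A-8", "started": date(2026, 1, 20)},
    {"id": "A-9", "started": date(2026, 1, 10)},
]

# Flag items whose age already exceeds the threshold: they are
# statistically unlikely to finish within the usual commitment.
at_risk = [
    (item["id"], (TODAY - item["started"]).days)
    for item in wip
    if (TODAY - item["started"]).days > AGING_THRESHOLD_DAYS
]
for item_id, age in at_risk:
    print(f"{item_id}: in progress {age} days")
```

Each flagged item gets a named follow-up action in the review, not just a mention.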

7. Decisions and experiments (5 minutes)

End with explicit outcomes:

  • Decision requests for leadership
  • Team-owned experiments for next month
  • Owners and review date

No decisions means no real review.

Where Coaching Becomes Visible to Leadership

This cadence is where coaching translates into strategic clarity. You can redesign standups, tighten WIP policies, and improve replenishment decisions, but if leadership cannot see the system signal, those improvements look anecdotal.

In one engagement, monthly Service Delivery Reviews were presented to SVP-level technology leadership and became the primary mechanism for making delivery improvement visible across the organization. We were not presenting "activities completed." We were presenting system behavior: cycle time movement, demand shifts, and forecast confidence.

That is the difference between local improvement and enterprise trust.

The Minimum Data You Need to Start

Teams often wait for a perfect dashboard. You do not need one.

At minimum, gather:

  • Completed date and started date for cycle time
  • Weekly throughput count
  • Work item type tags for planned vs. unplanned
  • Current WIP and aging in progress

If you have that, you can run a useful Service Delivery Review in a slide deck or a basic Power BI report. If you are starting from zero, begin with two months of historical data and improve report quality as you go.
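One way to hold that minimum dataset is a single record type per work item. This dataclass is a hypothetical sketch, not a prescribed schema; the field names are illustrative:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class WorkItem:
    """The minimum fields needed to run a Service Delivery Review."""
    item_id: str
    demand_type: str            # e.g. "planned" or "unplanned"
    started: Optional[date]     # None if not yet started
    completed: Optional[date]   # None if still in progress

    @property
    def cycle_time_days(self) -> Optional[int]:
        # Defined only for completed items; counts the start day.
        if self.started and self.completed:
            return (self.completed - self.started).days + 1
        return None

item = WorkItem("A-1", "planned", date(2026, 1, 5), date(2026, 1, 9))
print(item.cycle_time_days)  # 5
```

Everything in the review format above (percentiles, throughput, demand ratio, aging) can be derived from a list of these records.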

Who Should Attend

Keep attendance intentional:

  • Team leads or delivery managers
  • Product/program leadership
  • Engineering leadership representative
  • Coach/facilitator
  • Optional: operations or platform representative when dependencies are significant

The goal is not broad attendance. The goal is decision-making attendance.

How Long It Should Take

For most organizations, 30-45 minutes monthly is enough. If you need 90 minutes, the format is usually bloated. If you need 15 minutes, you are likely skipping the demand and decision layers that matter.

Consistency beats duration. A clear 40-minute monthly cadence creates more improvement than an occasional two-hour deep dive.

Practical Starting Template

If you need a first-run template, start with this sequence:

  1. Purpose and context (2 min)
  2. Cycle time + percentile movement (8 min)
  3. Throughput + delivery rate (5 min)
  4. Demand vs. capability ratio (8 min)
  5. Aging risk (5 min)
  6. Decision requests and experiments (10 min)

Repeat this structure for three months before changing it. The repeatability is what allows trend conversations and executive confidence to build.

Final Thought

Teams do not improve predictability because they "work harder." They improve when the system is made visible, reviewed with discipline, and adjusted through explicit decisions.

That is what the Service Delivery Review gives you.

If you want the broader context first, start with the Kanban complete guide. If you want to see outcome patterns from real engagements, review the case studies. If you want this cadence installed and operationalized with your leadership team, the Executive Advisory service is where I usually run it first.


Apply These Ideas

Want to apply these ideas to your team? Book a Strategy Session for a focused conversation about your team’s next steps.
