An Improved Agile Completion Rate Metric

As mentioned in my last post, we’re changing our focus from productivity to predictability until we can actually predict how long releases are going to take.  I still believe that we need a single true metric for productivity, but until we have some predictability, our productivity numbers are too shaky to provide any guidance to teams as they look to improve.  I’m looking for a Contextual Temporary Metric, rather than the One True Metric for Central 1 Product Development.

At Central 1, Jira provides us with a Control Chart, which maps the cycle time for issues.  This is a powerful chart, and it provides many tools for manipulating the data to gain insights.  However, it makes the base assumption that all issues are the same size: one large story can severely skew the rolling average cycle time going forward.


A brief search gives some good ideas at the sprint level.

  • From Leading Agile, Number of Stories Delivered / Number of Stories Committed, and Number of Points Delivered / Number of Points Committed.
  • From Dzone, recent velocity / average velocity or recent throughput / average throughput.
  • From Velocity Partners, Number of stories delivered with zero bugs.

None of these considers predictability at the Release or Version level.  We already have pretty stable velocities in our teams, and when there are hiccoughs in velocity, it is for a known reason, like vacations.  So, I started looking at delivery versus commitment, which is at the crux of predictability.  If a team can’t predict what they can deliver in the next two weeks, there is little hope that they can predict what they will deliver in a few months.

As I started to compile the data for stories and points in each sprint — something that is more difficult than I would like with Jira — I began to see that the teams go through large swings in the number of stories they might attempt, but the number of points stays relatively stable.  Meanwhile, stories are central to productivity, whereas predictability should include all work that the team undertakes, even if that work doesn’t move the product forward.

I therefore focused on the points delivered versus points committed, a number I call the Completion Rate.  The chart below illustrates the outcomes for three teams in the time since April.
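To make the calculation concrete, here is a minimal sketch of the Completion Rate as described above (the function name and signature are my own; the post doesn’t specify a data model):

```python
def completion_rate(points_committed: float, points_delivered: float) -> float:
    """Completion Rate: fraction of committed points delivered in a sprint."""
    if points_committed == 0:
        # No commitment made; treat the rate as zero rather than divide by zero.
        return 0.0
    return points_delivered / points_committed

# Example: a team commits 40 points and delivers 34 of them.
print(completion_rate(40, 34))  # 0.85
```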

[Chart: point completion rate (raw), by sprint, for three teams]

It is easy to foresee that a team could inflate this metric by rushing through their work and leaking a lot of defects into the backlog.  A little further analysis shows that for some teams, like Team 12, this is indeed the case.  Teams 9 and 13, on the other hand, leak relatively few defects into their backlogs, as shown by comparing the light and dark solid lines in the chart below.

[Chart: point completion rate, with and without leaked defects]

When a team marks a story (or defect) complete while simultaneously finding a lot of defects, it becomes difficult to predict how long it will take to stabilize the release.  The outstanding effort for the release keeps growing, so the team must predict not only their velocity in addressing the defects, but also the rate at which new defects will be found (hopefully the former is higher than the latter!).

I’m calling the value represented by the dark lines in the chart above the Done-Done Completion Rate:

Done-Done Completion Rate = (points completed – points leaked) / points committed

For the purpose of the analysis above, I used the actual estimated points for defects that were raised during the sprint.  In practice, however, those estimates probably don’t exist yet at the sprint retrospective, when the team wants to review how it did.  In that case, I would substitute the average number of points per defect for any defects that haven’t been estimated.
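A sketch of the Done-Done Completion Rate with the fallback just described; all names here are illustrative, and unestimated defects are represented as `None`:

```python
def done_done_completion_rate(points_committed: float,
                              points_completed: float,
                              leaked_defect_points: list) -> float:
    """Done-Done Completion Rate = (points completed - points leaked) / points committed.

    leaked_defect_points holds the estimates for defects raised during the
    sprint; None marks a defect that hasn't been estimated yet.
    """
    estimated = [p for p in leaked_defect_points if p is not None]
    # Fall back to the average estimated defect size for unestimated defects;
    # if no defect has an estimate yet, assume zero points each.
    avg = sum(estimated) / len(estimated) if estimated else 0.0
    points_leaked = sum(p if p is not None else avg
                        for p in leaked_defect_points)
    if points_committed == 0:
        return 0.0
    return (points_completed - points_leaked) / points_committed

# 40 points committed, 34 completed; defects of 2 and 3 points leaked,
# plus one unestimated defect (counted at the average of 2.5 points).
print(done_done_completion_rate(40, 34, [2, 3, None]))  # 0.6625
```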

What this metric doesn’t capture is the story that isn’t actually complete, but that the team marks complete while creating a new story for the remaining work.  I don’t know whether a metric pulled from Jira could accurately detect this situation without penalizing agility; we want teams to be able to create new stories.

In the absence of a Release-level predictability metric, the Done-Done Completion Rate could help a team to see if they are moving in a direction that would enable them to predict their releases.



