Posts Tagged ‘productivity metric’

Done-Done Completion Rate reveals interesting stories for agile teams

September 22, 2015

I wrote about the Done-Done Completion Rate a couple of weeks ago. Since then, I’ve plugged in data from several of my teams, which revealed some interesting stories that I’d like to share today.

First, let’s look at Team 12.  They have a completion rate (points completed / points committed) that tends to vary between about 60% and 80% (light blue line).  It could be better, but 80% is not bad, considering you want to be reaching a little bit in each sprint.


However, Team 12 tends to leak quite a few bugs with each sprint.  They have produced estimates for many of these defects, so the team is aware that their backlog is growing almost as quickly as they are burning it.  As far as they’re concerned, they have an effective completion rate ((points completed – points added) / points committed) that is lower (medium blue line).

There is, unfortunately, a hidden trap in the unestimated defects.  Allowing for these, and using an average number of points per defect, they even made negative progress for one sprint back in May.  The good news is that they appear to be improving: since July they have kept their sprints much cleaner, and we can expect them to have a better handle on when they will complete their current milestone.
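The three flavours of completion rate described above can be sketched in a few lines of Python. The sprint figures and the three-point defect average below are invented for illustration, not taken from Team 12’s actual data:

```python
def completion_rate(completed, committed):
    """Basic Done-Done Completion Rate: points completed / points committed."""
    return completed / committed

def effective_completion_rate(completed, committed, points_added):
    """Discount estimated defect points added back to the backlog."""
    return (completed - points_added) / committed

def adjusted_completion_rate(completed, committed, points_added,
                             unestimated_defects, avg_points_per_defect):
    """Also charge unestimated defects at an average number of points each."""
    hidden = unestimated_defects * avg_points_per_defect
    return (completed - points_added - hidden) / committed

# Hypothetical sprint: 25 of 30 committed points done, 8 points of
# estimated defects leaked, plus 4 unestimated defects at an assumed
# average of 3 points each.
print(round(completion_rate(25, 30), 2))                   # 0.83
print(round(effective_completion_rate(25, 30, 8), 2))      # 0.57
print(round(adjusted_completion_rate(25, 30, 8, 4, 3), 2)) # 0.17
```

With enough unestimated defects the adjusted rate goes negative, which is exactly the “negative progress” sprint described above.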

These unestimated defects can be the undoing of a team.  Consider, for example, Team 7, which has a number of new members and may still be forming.  About every third sprint, this team seems to fail to meet their commitments, and indeed loses substantial ground.  As a manager, it’s important to dig into the root cause of this.


Finally, here is a team that keeps their sprints very clean.  Through extensive automated testing, Team 5 allows almost no defects to escape development.  When one does, they estimate the defect right away.  Notice how their completion rate (light purple) is actually lower than Team 12’s (blue, above), but once we allow for defects, this team is completing more of their work.  The result is that this team can count on their velocity to predict their milestones, provided the scope doesn’t change.  Of the three teams, this one is the most predictable.



How Productive was our Sprint? A Proposal

August 11, 2015

My search for a good productivity metric continues.  As mentioned, Isaac Montgomery suggests a productivity metric that relies on releases: release, say, 60% of the value of the product, divide that value by the cost to acquire it, and you have a productivity metric.
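As a minimal sketch of this release-level metric (the dollar figures here are hypothetical, not from Montgomery or from Central 1):

```python
def release_productivity(value_released, cost):
    """Montgomery-style productivity: value delivered per unit cost."""
    return value_released / cost

# Suppose a release delivers 60% of a $500,000 product
# at a delivery cost of $120,000.
value_released = 0.60 * 500_000          # $300,000 of value
print(release_productivity(value_released, 120_000))  # 2.5
```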

This metric has a few nice features:

  • It doesn’t incur much overhead.  We already know the value of our projects and initiatives at Central 1, or we could come up with something with relatively little cost.
  • It encourages breaking projects into milestones and assigning value to those milestones.  Milestones matter at a macro reporting level: when we speak to our customers, it’s nice to be able to point to concrete artifacts that we have completed.
  • It is easy to normalize and compare across teams, or at least across time.  The individual teams would not be involved in assessing the overall value of their initiatives, and by centralizing we stand a hope of equalizing the value assigned across teams.  The alternative that I’m familiar with, value points, relies on teams or individual product owners assigning the same value points to the same story.

On the other hand, Montgomery’s metric doesn’t provide the rapid feedback that you would want if you were making small adjustments to your process.  In order to determine if you were more productive, you would need to pass several milestones, and that could easily mean a six-month lag between the time when the change is made and the effects are known.  It would be far better if this lag were only a few sprints.

What if we combine my story productivity metric with Montgomery’s metric?  It would work like this: during release planning, we divide the value of the project into releases or milestones, per Montgomery.  At this point we have a story map, and we could say that if one of the releases were worth $100, say, and it had ten stories in it, then each of those stories is worth $10.  Go nuts!  The challenge is that, in my experience, the number of stories in a release grows as we get into it.  Those first few stories were good conceptually but missed a lot of the nuances, and responding to those nuances is what agile is all about.

To allow for this, we could assign a part of the residual value to the new stories that are added after the first sprint.  In the example, if we completed one story in the first sprint (10%), 90% of the value is left, spread across nine stories.  If we then add a new story, each of the ten outstanding stories is worth 9% (90% / 10).  Eventually we stop adding stories, and the team completes the remaining ones so we can close out the release.
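The worked example above can be sketched as a small redistribution step (the story counts and the single added story are from the example; any further schedule of added stories would be an assumption):

```python
def per_story_value(residual_value, outstanding_stories):
    """Split the remaining release value equally over outstanding stories."""
    return residual_value / outstanding_stories

release_value = 1.0                 # 100% of the release's value
outstanding = 10                    # stories on the initial story map
v = per_story_value(release_value, outstanding)
print(round(v, 2))                  # 0.1 per story

# Sprint 1: one story completed (10%), leaving 90% and 9 stories.
residual = release_value - v        # 0.9
outstanding -= 1                    # 9

# A new story is discovered, so redistribute over 10 outstanding stories.
outstanding += 1
print(round(per_story_value(residual, outstanding), 2))  # 0.09
```

Repeating this step each time a story is added is what drives the per-story value, and hence the measured productivity, down over the life of the release.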


Based on this narrative, I would expect productivity to follow a 1/x type of curve over time, eventually stabilizing for the release.  I shall be interested to see how it pans out with some actual numbers from our teams.