What’s in a name?

April 6, 2022

Fantasy novel magicians and software developers both know how important names are. Once you know someone’s True Name, you gain tremendous power over them and can make them do anything you want – spin straw into gold or sing “O Canada” backwards. Software developers are so invested in names that we spend 84% of our time thinking of great labels for variables, functions, classes, components, tables, collections and foil balls (the other 16% is spent changing the state of bugs in Jira). The name tells the reader what to expect out of the entity just conjured.

The importance of the name grows with the scope and longevity of the name. Thus a variable might have as simple a name as i (which had better be an iterator or the ghost of Dennis Ritchie will haunt you forever) while the name of a component will be more descriptive, like NationalAnthemReverser. A descriptive name not only helps developers predict how the component behaves now, but also how it should change in the future.

Despite the industry having finally realized the relationship between team structure and architecture, we still struggle with team names. I once had 17 feature teams reporting into me that were descriptively named 0 through 16; they were each in turn part of larger teams that were named after colours. I wished briefly that we had chosen butch cats for the larger teams, but team names were quickly encoded in surprising places, and may as well have been written in stone by the time I took to cat fancy. We had a huge chart to describe the components and features that each team was responsible for, and even so, there were always tasks that fell on boundaries or outside anyone’s accountability. It seemed the more we tried to define the teams, the bigger the gaps between them became.

With such poor team names, it was impossible to reason about the teams, and they were a baffling mystery to each new layer of senior management that arrived. Frequently, we found it difficult to decide where a boundary task should go. The scope of responsibility for a team was a cohesive list of components or features, but there was no guide indicating why a new component or feature should be added to one list rather than another.

My preference today is to name teams after business goals. That usually aligns with the product management structure, and makes it clear what should be important to the team. Tasks still land on the boundary all the time, but there is a clear signpost to provide direction off the boundary. With goal-aligned team names, other parts of the organization can imagine what a team might do, and the teams become much easier to reason about.

It’s still not perfect, but unless they’re hockey stars, anything is better than calling your team “Panthers.”


Photo Credit: Jack Dorsey on Flickr.

9 Ps for more Meaningful Meetings

January 1, 2022

You’ve seen this happen before. You’re in the weekly meeting and Manpreet mentions an issue, and Peter says, “we should have a meeting about that,” and like a fool you offer to organize it because the issue seems to be in your wheelhouse, and Dan, Edmund, Patty, Jason, Jessica and Kelly are interested in the issue for various reasons and all pipe in that they would like to join the meeting, and before you know it, you’re booking an hour-long meeting with the full cast from The Dark Knight, and the first time you can get all 239 of them together is sometime next November, and when that day finally does roll around everyone who hasn’t left the company squeezes into the conference room to puzzle over the cryptic one-line subject for the meeting, and finally Ivana remembers what it was all about but notices that Jessica is not there and really should be, and so you chat amiably about the issue until you all disperse to attend your next meeting, which you know will be equally ineffective because your calendar is full of such things. Is it any wonder that when we have a day full of meetings, we feel like we get no work done?

Some of the more diligent among you will have noticed that the meeting should have had an agenda, but while necessary, an agenda is not sufficient to ensure you aren’t wasting your time. Productive meetings start with Purpose.

Purpose

Without a clear purpose, is it any wonder meetings circle around and around without coming to any conclusions? How can you, the convener of the meeting, keep a meeting on track if you don’t know what you were shooting for in the first place? The Purpose of the meeting is the change that you want to achieve over the course of the meeting. A good way to know if you have a good Purpose is to construct a phrase that starts with “At the end of this meeting…”

  • At the end of this meeting we will have decided what to do about X.
  • At the end of this meeting we will have a new design for Y.
  • At the end of this meeting everyone will know about Z.

You’ll know you’re on the right track if it feels reasonable to have invested the meeting’s time in achieving the Purpose. For me, “At the end of this meeting, we will have discussed the issue, but will not have reached any conclusions” is a poor investment of time; worse, because it is so vague, “discussion” is exactly the type of meeting that invites everyone to pile in. On the other hand, many stakeholders will self-select out of a meeting that will “decide the path forward” because they only need to be informed, and even more of them will run for the hills if the purpose is to “design the solution.”

Only once you know the Purpose, can you identify the Process by which you might achieve it.

Process and Preparation

Old-fashioned types would call this an Agenda, but that is really just a list of things to do, and besides, alliteration is awesome! The word “Process”, on the other hand, denotes a sequence of steps we will follow to achieve the Purpose of the meeting.

What are the steps that you see for achieving the Purpose while also satisfying your company’s needs for inclusion (or not) in decision-making? There should be a clear progression from one step to the next, and the last step should be clearly related to the Purpose. You might even suggest a timing for the steps so you know if the meeting is running as planned or not.

Note that while it’s a poor Purpose, “discussion” is a perfectly reasonable step in the Process. For example, you might come to a decision by identifying the options, discussing the pros and cons of each, and deciding based on that discussion.

Oftentimes it is easy to identify a step that could benefit from some Preparation before the meeting. For example, someone could assemble some data to inform the discussion and decision-making process. You should check with them before committing them to prepare specific resources for the meeting, but identify them and the Preparation they have agreed to in the meeting invite.

Once you know the Purpose and Process, it is often trivial to identify the People who should attend. Without knowing the Purpose and Process, you have no business asking for their time.

People

If the attendees for a meeting aren’t immediately obvious, look back over that Process and consider who you anticipate will contribute to each of the steps. Is there someone with specific background that would help with context-setting? Is there skill required in designing the solution? Who is ultimately accountable for the decision or the outcome?

Note that this list could still be a long lineup of stakeholders who have a legitimate reason to be in the meeting. If so, it is good to reflect back on the Process and consider whether it is viable with that many people stretched over all those geographies. If not, you will need a new Process to achieve the Purpose; you may even want to split the meeting into multiple meetings.

Place, Period, Provisions

Once you’ve identified the People and the Preparations they might have to make, you’re ready to focus on the more prosaic aspects. The number and locations of the People will identify the options for Places where the meeting can happen; their calendars and their ability to Prepare will constrict the time Period over which the meeting will occur. Finally, if the only available Period is over a meal time, you should make clear if you will be providing any food or Provisions.

Preamble and ‘Pologies

All of the above goes into a meeting request. When I see it laid out like that, I quite often realize that some attendees will need just a little background or motivation for why we need to achieve the stated purpose. I set that aside in a Preamble.

Finally, if I’m double-booking someone, or asking them to get up early in the morning or stay late at night, I like to apologize, even if I’ve confirmed with them offline. Sadly, “Apology” doesn’t start with a ‘P’, but after eight Ps, there’s no stopping this train! I make no apology for the abbreviation!


A note on names: if you recognize yours, chances are I really was thinking of you. I hope you are more flattered than offended.

Photo credit: Rikard Wallin on Flickr

The Attaboy Game

January 7, 2018

This week, one of my managers found a sheet of gold stars as he was organizing his desk. He reminded me of the Attaboy Game that we played nine years ago.

When I first joined Central 1, the annual employee engagement survey had just come out. One of the findings was that we needed to do better at recognizing good behaviour or performance.

As a simple and fun change, we introduced a contest among my managers. I gave each manager some sheets of gold stars. When they spotted someone doing something good, they would present them with the gold star and tell them why. That person would then have to pass the star on to me and explain why they had received it. I stuck the stars to a scoreboard and at the end of the game awarded a coffee card or something to the winning manager.

All my managers gamely participated, and real competition emerged between a couple of them. In the end we were all winners.

Improved Agile Release Burndown Metric Reveals More Stories about Teams

September 28, 2015

The Done-Done Completion Rate tells us some interesting information about how our agile teams are doing on a sprint-by-sprint basis.  However, it doesn’t help us to understand whether they will actually hit their next milestone.  The trouble is, the current version that they’re working on may still not have all its scope defined.  That’s all agile and fine and everything, but the reality is, sometimes we want to know if we’re going to hit a date or not.

As mentioned previously, here at Central 1, we use JIRA to manage our agile backlogs.  This tool comes with a handy release burndown chart, which shows how a team is progressing against their release goal.  For example, the chart below illustrates a team who started their version before adding any stories to it.  However, once started, they burned a portion of the release with each sprint, eventually leading to a successful release.  In sprint 27, they added a little scope, effectively moving the goalpost.

[Chart: JIRA release burndown for one team’s version]

The trouble with this chart is that it supposes that the team is planning their releases (versions in JIRA).  What about teams that have multiple concurrent releases, or those that aren’t really using releases at all?  Are the teams leaking defects into the backlog?  Are they expanding the scope of the current release?

In order to answer these questions, we need to include the unversioned backlog.  I’m considering a metric that I have given the catchy moniker, “Net Burndown for the Most Active Committed Release.”  This starts out with the chart above for the release upon which the team is making the most progress.  Any change in any other releases is ignored, so a team could be planning the next release without affecting their net burndown.  However, if they leak stories, defects or tasks into the backlog without associating them with a release, those items are assumed to be in the current release and included in the net burndown.  Sometimes that’s bad, and sometimes it’s good.  Finally, any unestimated items are given the average number of points for items of that type.
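To make the bookkeeping concrete, here is a minimal sketch of how the metric could be computed from a flat export of issues. The field names (type, points, release, created_sprint, resolved_sprint) and the issue-dict shape are hypothetical stand-ins rather than JIRA’s actual API; the point is the logic: find the committed release the team burned the most points from, value unestimated items at the average estimate for their type, and subtract anything leaked into the unversioned backlog.

```python
from collections import defaultdict
from statistics import mean

# Minimal sketch of "Net Burndown for the Most Active Committed Release".
# Each issue is assumed to look like:
#   {"type": "Bug", "points": 3, "release": "R2.1",
#    "created_sprint": 25, "resolved_sprint": 27}
# These field names are illustrative, not JIRA's API.

def average_points_by_type(issues):
    """Average estimate per issue type, used to value unestimated items."""
    buckets = defaultdict(list)
    for issue in issues:
        if issue.get("points") is not None:
            buckets[issue["type"]].append(issue["points"])
    return {t: mean(pts) for t, pts in buckets.items()}

def value(issue, averages):
    """Estimated points, or the average for the issue's type if unestimated."""
    points = issue.get("points")
    return points if points is not None else averages.get(issue["type"], 0)

def net_burndown(issues, sprint):
    """Points burned from the most active committed release this sprint,
    minus points leaked into the unversioned backlog (which are assumed
    to belong to that release). Changes to other releases are ignored."""
    averages = average_points_by_type(issues)

    # Find the release the team made the most progress on this sprint.
    progress = defaultdict(float)
    for issue in issues:
        if issue.get("release") and issue.get("resolved_sprint") == sprint:
            progress[issue["release"]] += value(issue, averages)
    if not progress:
        return 0.0
    release = max(progress, key=progress.get)

    burned = progress[release]
    leaked = sum(value(i, averages) for i in issues
                 if not i.get("release") and i.get("created_sprint") == sprint)
    return burned - leaked
```

The average-points fallback mirrors the treatment of unestimated defects in the Done-Done Completion Rate, so sprints with sloppy estimation still register in the chart.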

Here is how the chart looks for one team.  This team is somewhat disciplined about using versions, but as discussed before, they leak quite a few defects, and may or may not assign them to a version.  In the chart, the blue area represents the same information as the JIRA release burndown chart.  The red area adds in the backlog outside the release, and you can see that it tends to dominate the blue for this team.  Finally, the green area represents all the unestimated issues (mostly bugs).

[Chart: net burndown for Team 12]

In some cases, like August 12th, the negative progress in the backlog (about -50 points) dominates what appears to have been pretty good progress in the release (about 50 points).  The unestimated issues (about -30 points) leaked onto the backlog make the story even bleaker, and we can see that the team is not making substantial progress toward their release after all.

Contrast this team with a second team, who take a more disciplined approach overall.  The team made some kind of sweeping change to their versions in June, which cuts off the chart.  However, since then, we can see that the team leaves very few unestimated issues, and tends to assign issues to releases, rather than working on the backlog directly.  They’re not perfect, however, and struggle occasionally to maintain their velocity; comparing with their Done-Done Completion chart, we can see that this was actually a struggle, as opposed to a planned dip for vacations.  They also seem to be letting go of the versions a little in more recent sprints.

[Chart: net burndown for Team 5]

Done-Done Completion Rate reveals interesting stories for agile teams

September 22, 2015

I wrote about the Done-Done Completion Rate a couple of weeks ago. Since then, I’ve plugged in some data from some of my teams, and revealed some interesting stories, which I would like to share today.

First, let’s look at Team 12.  They have a completion rate (points completed / points committed) that tends to vary between about 60% and 80% (light blue line).  It could be better, but 80% is not bad, considering you want to be reaching a little bit in each sprint.

[Chart: completion rates for Team 12]

However, Team 12 tends to leak quite a few bugs with each sprint.  They have produced estimates for many of these defects, and so, the team is aware that their backlog is growing almost as quickly as they are burning it.  As far as they’re concerned, they have an effective completion rate ((points completed – points added) / points committed) that is lower (medium blue line).

There is, unfortunately, a hidden trap in the unestimated defects.  Allowing for these, and using an average number of points per defect, they even made negative progress for one sprint back in May.  The good news is that they appear to be improving, and since July, they appear to have kept their sprints much cleaner, and we can expect that they have a better handle on when they will complete their current milestone.

These unestimated defects can be the undoing of a team.  Consider, for example, Team 7, which has a number of new members and may still be forming.  About every third sprint, this team seems to fail to meet their commitments, and indeed lose substantial ground.  As a manager, it’s important to dig into the root cause for this.

[Chart: completion rates for Team 7]

Finally, here is a team that keeps their sprints very clean.  Through extensive automated testing, Team 5 allows almost no defects to escape development.  When they do, they estimate the defect right away.  Notice how their completion rate (light purple) is actually lower than Team 12 (blue above), but when we allow for defects, this team is completing more of their work.  The result is that this team can count on their velocity to predict their milestones, provided the scope doesn’t change.  Of the three teams, this one is most predictable.

[Chart: completion rates for Team 5]

An Improved Agile Completion Rate Metric

September 1, 2015

As mentioned in my last post, we’re changing our focus from productivity to predictability until we can actually predict how long releases are going to take.  I still believe that we need a single true metric for productivity, but until we have some predictability, our productivity numbers are too shaky to provide any guidance to teams as they look to improve.  I’m looking for a Contextual Temporary Metric, rather than the One True Metric for Central 1 Product Development.

At Central 1, JIRA provides us with a Control Chart, which maps the cycle time for issues.  This is a powerful chart, and provides many tools to manipulate the data to gain insights.  However, it makes the base assumption that all issues are the same size.  One large story can severely change the rolling average for cycle time going forward.

[Chart: JIRA control chart of issue cycle times]

A brief search gives some good ideas at the sprint level.

  • From Leading Agile, Number of Stories Delivered / Number of Stories Committed, and Number of Points Delivered / Number of Points Committed.
  • From DZone, recent velocity / average velocity or recent throughput / average throughput.
  • From Velocity Partners, Number of stories delivered with zero bugs.

None of these considers predictability at the Release or Version level.  We already have pretty stable velocities in our teams, and when there are hiccoughs in velocity, it is for a known reason, like vacations.  So, I started looking at delivery versus commitment, which is at the crux of predictability.  If a team can’t predict what they can deliver in the next two weeks, there is little hope that they will predict what they can deliver in a few months.

As I started to compile the data for stories and points in each sprint — something that is more difficult than I would like with Jira — I began to see that the teams go through large swings in the number of stories they might attempt, but the number of points stays relatively stable.  Meanwhile, stories are central to productivity, whereas predictability should include all work that the team undertakes, even if that work doesn’t move the product forward.

I therefore focused on the points delivered versus points committed, a number I call the Completion Rate.  The chart below illustrates the outcomes for three teams in the time since April.

[Chart: raw point completion rate for three teams since April]

It is easy to foresee that a team might inflate this metric by rushing through their work and leaking a lot of defects into the backlog.  A little further analysis shows that for some teams, like Team 12, this is indeed the case.  Teams 9 and 13, on the other hand, leak relatively few defects into their backlog, as shown by comparing the light and dark solid lines in the chart below.

[Chart: point completion rate with and without leaked defects]

When a team marks a story (or defect) complete while simultaneously finding a lot of defects, it becomes difficult to predict how long it will take to stabilize the release.  The outstanding effort for the release keeps growing, and the team must predict not only their velocity in addressing the defects, but also the rate at which new defects will be found (hopefully one is higher than the other!).

I’m calling the value represented by the dark lines in the chart above the Done-Done Completion Rate:

Done-Done Completion Rate = (points completed – points leaked) / points committed

For the purpose of the analysis above, I used the actual estimated points for defects that were raised during the sprint.  However, in practice, those estimates probably don’t exist at the time of the sprint retrospective, when the team should want to review how they did.  In that case, I would use the average number of points per defect for those defects that haven’t been estimated.
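Here is a sketch of that calculation; the numbers and the historical-average fallback are illustrative, entered by hand rather than pulled from JIRA.

```python
from statistics import mean

def done_done_completion_rate(points_committed, points_completed,
                              leaked_estimated_points, unestimated_defects=0,
                              historical_defect_points=()):
    """(points completed - points leaked) / points committed, where defects
    leaked without an estimate are valued at the team's historical average
    points per defect."""
    avg_defect = mean(historical_defect_points) if historical_defect_points else 0
    points_leaked = leaked_estimated_points + unestimated_defects * avg_defect
    return (points_completed - points_leaked) / points_committed

# Example: 40 points committed, 32 completed, 5 points of estimated defects
# leaked during the sprint, plus 2 unestimated defects (historically ~2 points each).
rate = done_done_completion_rate(40, 32, 5, unestimated_defects=2,
                                 historical_defect_points=[1, 2, 3, 2])
print(f"{rate:.1%}")  # 57.5%, versus a naive 32 / 40 = 80% completion rate
```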

What this metric doesn’t capture is the story that isn’t complete, and yet the team marks it as complete, while creating a new story for the remaining work.  I don’t know if a metric pulled from Jira could accurately detect this situation without penalizing agility; we want to be able to create new stories.

In the absence of a Release-level predictability metric, the Done-Done Completion Rate could help a team to see if they are moving in a direction that would enable them to predict their releases.

Look for Predictability before Productivity for Agile Teams

August 15, 2015

Last week I proposed a productivity metric based on the proportion of a release that has been completed, accounting for story creep in the release.  Not all of our teams are using the version feature in Jira, but the several that do enabled me to perform a little analysis.  So far, I don’t think we have the maturity level required to make this metric work for us, but the data are instructive all the same.

Two teams in particular stand out.  Team 13 was our initial pilot team on agile, and they’ve been practising for about three years now.  Some of their releases look pretty close to what I expected from a productivity standpoint.  Here, for example, is a five-sprint release that shows a ramp down at the end.  The last sprint contributed only a little productivity because there was just one story left to wrap up.

[Chart: productivity by sprint for Team 13’s release 3.6]

This made me wonder what was going on with their releases, and so, I mapped all of them, and I found a much more chaotic chart.  The chart below shows the raw number of stories completed in each sprint.  The different colours denote different releases (light blue stories have no release).

[Chart: stories completed per sprint for Team 13, coloured by release]

Now Team 13 actually has a legitimate reason to work on multiple concurrent releases: they manage change to multiple pieces of software.  So, my model would have to accumulate value produced across concurrent releases.  This analysis currently takes a certain amount of Excelling, and so, I looked at Team 12, which has only three releases:

[Chart: proposed productivity metric for Team 12’s three releases]

This is a chart of the proposed productivity metric for three releases.  As can be seen, each release kicked off with a killer sprint in which the team produced over ten per cent of the stories initially defined for the release.  Then a combination of deferred stories and probably bug-fixing killed them after that.  The result is a metric that is too unstable to provide feedback to the team.

Reviewing the shape of their velocity chart confirms the story and also raises some more questions and concerns.  In particular, this team’s velocity is too unstable to make for safe predictions.  So, while this productivity metric might be relevant for long-established teams with stable velocities, I suspect the majority of our teams need to concentrate on predictability first.

A Mobile Usability Testing Filming Rig

August 12, 2015

Yesterday, a couple of my interaction design folks came to my office with a webcam and a cheap light from Ikea.  They had had the brilliant idea of mounting the webcam on the light so they could film usability testing on our mobile app.  The masking tape version they had assembled worked fine for internal testing, but tomorrow they’re heading to a branch at VanCity to test with real members, and they wanted something a little more professional-looking.

It turns out this Ikea lamp is made to be hacked with the niceEshop webcam.  All we had to do was take the reflector out, along with the socket, switch and bulb.  Then it was easy to thread the webcam wire through the hole where the lamp switch had been. The webcam wire has a little rheostat along its length to adjust the light brightness, and this needed to be taken apart and reassembled to make it through the hole in the lamp.

I took the reflector home last night to expose it to my hack saw and Dremel tool for fifteen minutes to get rid of the parabolic part of the reflector and to make a place where we can reach the camera on-off button.  Then this morning, I re-installed the reflector with some Blu-Tak to keep the camera from moving around.  If I wanted to be professional about it, I might have used some black silicone, but nobody will see the Blu-Tak anyway.

Don’t get me wrong, I love managing a team of developers, designers and testers.  But occasionally I get to play MacGyver, and that is really fun.

How Productive was our Sprint? A Proposal

August 11, 2015

My search for a good productivity metric continues.  As mentioned, Isaac Montgomery suggests a metric for productivity that relies on releases.  Release 60% of the value of the product, divide by the cost to acquire that value, and you have a productivity metric.

This metric has a few nice features:

  • It doesn’t incur much overhead.  We already know the value of our projects and initiatives at Central 1, or we could come up with something with relatively little cost.
  • It encourages breaking projects into milestones and assigning value to those milestones.  Milestones matter at a macro reporting level: when we speak to our customers, it’s nice to be able to point to concrete artifacts that we have completed.
  • It is easy to normalize and compare across teams, or at least across time.  The individual teams would not be involved in assessing the overall value of their initiatives, and by centralizing we stand a hope of equalizing the value assigned across teams.  The alternative that I’m familiar with, value points, relies on teams or individual product owners assigning the same value points to the same story.

On the other hand, Montgomery’s metric doesn’t provide the rapid feedback that you would want if you were making small adjustments to your process.  In order to determine if you were more productive, you would need to pass several milestones, and that could easily mean a six-month lag between the time when the change is made and the effects are known.  It would be far better if this lag were only a few sprints.

What if we combine my story productivity metric with Montgomery’s metric?  It would work like this: during release planning, we divide the value of the project into releases or milestones, per Montgomery.  At this point, we have a story map, and we could say that if one of the releases were worth $100, say, and it had ten stories in it, then each of those stories is worth $10.  Go nuts!  The challenge with this is that I know that the number of stories in a release grows as we get into it.  Those first few stories were good conceptually, but missed a lot of the nuances, and responding to those nuances is what agile is all about.

To allow for this, we could assign a part of the residual value to the new stories that are added after the first sprint.  In the example, if we produced one story in the first sprint (10%), there is 90% of the value left and 9 stories remaining.  If we then add a new story, each of the ten outstanding stories is worth (90%/10) 9%.  Eventually, we stop adding stories, and the team completes the remaining ones so we can complete the release.
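A quick sketch of that allocation, with illustrative numbers only; in practice the stories completed and added each sprint would come from the sprint reports, and the function name is just a placeholder.

```python
def value_produced_per_sprint(release_value, initial_stories, sprints):
    """For each sprint (stories_completed, stories_added), bank the current
    per-story value for the completed stories, then spread the residual value
    evenly over all outstanding stories, including newly added ones."""
    outstanding = initial_stories
    residual = float(release_value)
    per_story = residual / outstanding
    produced = []
    for completed, added in sprints:
        banked = completed * per_story
        residual -= banked
        outstanding = outstanding - completed + added
        if outstanding:
            per_story = residual / outstanding
        produced.append(banked)
    return produced

# The example from the text: a $100 release planned as 10 stories.
# Sprint 1 completes one story ($10) and adds one, so the 10 outstanding
# stories are each worth $90 / 10 = $9 going into sprint 2.
print(value_produced_per_sprint(100, 10, [(1, 1), (2, 0)]))  # [10.0, 18.0]
```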

[Chart: expected productivity curve over the course of a release]

Based on this narrative, I would expect productivity to follow a 1/x type of curve over time, eventually stabilizing for the release.  I shall be interested to see how it pans out with some actual numbers from our teams.

Measuring Productivity Using Stories

July 3, 2015

About a month ago, I attended some training on leading high performance teams. There I learned that a single well-defined metric that is perfectly aligned with the team’s performance can help to ignite their performance.  Among other things, this reignited my interest in actually measuring the productivity of my agile teams.

Despite many claims that productivity metrics are a fool’s errand (McAllister, Fowler, Hodges), I’ve been trying to measure it for years, at least ever since I came to Central 1, and possibly before. Without measuring productivity, the many easily grasped quality metrics are unbalanced, and the team can find themselves in constant-improvement mode, without actually producing anything new.  Without measuring productivity, how do we know that we are being strangled by technical debt?

For several years, I used the number of changes per hour of development. This got better when we jumped into agile with two feet back in 2014; prior to that, there was too much variability in the size of a JIRA ticket – it might represent a small bug fix or a whole project.  By the end of 2014, we were looking solely at the number of stories developed per hour (the reciprocal, hours per story, is more intuitive, but early on we sometimes spent time without producing any stories).

I was often asked why stories instead of story points.  The reason was value.  A very complex story would have a high number of story points, but might have little business value.  Stories, on the other hand, should be the smallest unit of work that can still deliver business value – a quantum of business value.

This metric was pretty good.  It had the immediate benefit of being cheap to produce – simply query JIRA and divide by time.  Moreover, the chart showed a beautiful increase of “productivity” as the team got used to working with agile.

[Chart: stories developed per hour over time]

But then a funny thing happened.  We were working on the new mobile branch ATM locator, and the project was producing stories just fine, but it was never concluding.  The problem was in the nature of the stories.  Instead of meaty stories like,

As a user I would like to search for the closest ATM so that I can go get money.

many of them were more like

As a user, I would like the search box to be titled “search” so that I know where to put my query.

Clearly, not all stories are created equal.  I don’t think the team was deliberately gaming the system (there was no benefit to doing so), and small stories are a hallmark of a healthy agile team, but surely we cannot ascribe the same level of productivity to a team that is changing the value of a label as to one that is enabling search. More to the point, the team was not completing the project!

I feel that the unevenness in story value probably averages out over a sufficiently large team.  However, measuring over the larger team has little benefit in terms of motivating a single agile team.  Across the larger development organization (Central 1 has about 50 developers and 20 testers), we might expect to see an effect if we make a change for everyone, as we did when we moved to adopt agile in early 2014.  However, because the values are not steady, it takes four to six months to be sure of a trend.  On the other hand, it is very difficult to dissect what is happening if no change has occurred, but a trend is detected anyway.

Looking forward, there is a promising-looking blog post from Isaac Montgomery at Rally.  It has the benefit of measuring true productivity, but requires valuation for initiatives, which at Central 1 would be difficult.