I’ve found it helpful to keep a few fundamentals in mind when choosing and using metrics, and I want to share those in this post. Maybe you will find some of this useful.
Purposes and functions of metrics
To help me stay focused on the reasons to measure and the uses of metrics, I remind myself that metrics have two purposes and three functions. The purposes are:
- To steer work in progress
- To support continuous improvement efforts
Whichever purpose a given metric serves, it will perform one or more of the following functions:
- Informational function — provide plain information about progress, financials, risks, or other aspects of the work
- Diagnostic function — serve as an indicator when an aspect of the work deviates from expectations and might interfere with delivery
- Motivational function — influence people’s behavior, whether intentionally or inadvertently
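The two purposes and three functions can be thought of as a small classification scheme for metrics. Here is a minimal sketch of that scheme in Python; the specific metric names and their classifications are my own illustrative assumptions, not prescriptions from this post.

```python
from dataclasses import dataclass
from enum import Enum

class Purpose(Enum):
    STEER_WORK = "steer work in progress"
    IMPROVE = "support continuous improvement"

class Function(Enum):
    INFORMATIONAL = "informational"
    DIAGNOSTIC = "diagnostic"
    MOTIVATIONAL = "motivational"

@dataclass
class Metric:
    name: str
    purpose: Purpose          # which purpose we are using the metric for
    functions: set            # one or more Function values it performs

# Illustrative examples (the classifications are assumptions for the sketch)
cycle_time = Metric("cycle time", Purpose.STEER_WORK,
                    {Function.INFORMATIONAL, Function.DIAGNOSTIC})
escaped_defects = Metric("escaped defects", Purpose.IMPROVE,
                         {Function.DIAGNOSTIC, Function.MOTIVATIONAL})

# Every metric serves a purpose and performs at least one function
for m in (cycle_time, escaped_defects):
    assert m.functions, f"{m.name} must perform at least one function"
```

The point of the scheme is the sanity check at the end: if you cannot say which purpose a metric serves and which function it performs, that is a hint it may not be worth collecting.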
Three dimensions of management
There are many variations in how software can be built and delivered. Accordingly, there are many choices for metrics that may or may not help us achieve the two purposes of measurement. We determine which metrics to use based on our operational context. One way of understanding our operational context is to use a model comprising three dimensions of management: the approach to the Iron Triangle, the process model, and the delivery mode. Different metrics may apply depending on where our organization stands on a spectrum of practice along each of these dimensions.
I list the three dimensions in this order for a reason. Variations in Iron Triangle management have the most profound effect on how the work flows. Variations in process model have a significant, but less profound, effect. Variations in delivery mode can also have an effect. If we focus on, say, the process model and ignore Iron Triangle management, we will miss the factor with the greatest influence on flow.
I feel it necessary to emphasize this because comments I hear and read suggest that people underestimate the impact of Iron Triangle management on work flow. The server statistics for this blog tell a similar story: the post on process models is among the most-visited pages, while the post on Iron Triangle management receives only moderate traffic. That may mean people consider the process model more significant than Iron Triangle management. In my experience, when managers have had difficulty making sense of the numbers, it has almost always boiled down to confusion about how the Iron Triangle is being managed in their environment. All this suggests that people generally underestimate the importance of Iron Triangle management, both for choosing metrics and for other elements of management. So I suggest you consider this dimension first, before worrying about the details of the process model or delivery mode.
As you might imagine, delving into the operational context in detail can be a complicated exercise. It is easy to lose the forest amid the trees. We need to provide all the information necessary to support the business decisions each stakeholder must make, while keeping the quantity of information we report to a minimum, to avoid a low signal-to-noise ratio in reporting. To help me keep my head in the right place, I like to refer to a couple of simple lists of principles.
Glen Alleman’s Five Irreducible Principles of Management
The first is a list of five basic questions that must be answered for any initiative. The list was created by management consultant Glen Alleman. He calls them the Five Irreducible Principles of Management (see the presentation slides, particularly slide 12, but read the whole thing for context). These five questions apply to any initiative regardless of the approach to the Iron Triangle, the process model, or the delivery mode in use. Whether responsibility for delivery lies with a steering committee, a designated manager, or a self-organizing team, the questions must be answered. Otherwise, you will have no idea what is going on. They are:
- Where are we going?
- How do we get there?
- Do we have enough time, resources, and money to get there?
- What impediments will we encounter along the way?
- How do we know we are making progress?
Metrics alone will not answer all these questions directly. However, metrics obviously bear directly on questions 3 and 5, and with a little thought you can probably see that they can help us with all five, both for purposes of steering the work and for guiding improvement efforts.
When I am considering whether a particular metric is relevant in a given context, I find it very useful to ask myself whether the metric will help me answer any of these questions. If not, then it’s not the right metric for the situation.
The Albert Einstein School of Management
The second list is a collection of quotes attributed to Albert Einstein. I call the list the Albert Einstein School of Management. The quotes are:
- A man should look for what is, and not for what he thinks should be.
- Insanity: doing the same thing over and over again and expecting different results.
- We cannot solve our problems with the same thinking we used when we created them.
- Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted.
- Any intelligent fool can make things bigger and more complex. It takes a touch of genius – and a lot of courage – to move in the opposite direction.
I’ve read that at least one of these quotes may have originated with Benjamin Franklin. I’m okay with that; I’m not aiming for historical accuracy here. I just want a catchy list of principles that’s easy to remember for practical reasons.
Measure outcomes, not activity
A simple rule of thumb that I find helps me stay focused on practical metrics is just this: Measure outcomes, not activity. This might sound pretty obvious, but many managers don’t measure outcomes. They depend on observations of activity to tell them whether they are making progress. They might measure, for instance, the number of hours each team member bills to a project per week; or they might track whether a development team is using some particular programming technique. Measurements like these are secondary indicators at best.
Observations of outcomes provide a more direct indication of progress and earlier warning of problems than observations of activity. By tracking, for instance, throughput, cycle time, and other measures of results, we can tell whether we are delivering according to stakeholder expectations. When those measures trend beyond our defined limits of acceptability, they provide early warning that we need to conduct a root cause analysis and take corrective action.
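To make the outcome measures concrete, here is a minimal sketch of computing cycle time and throughput from completed work items and flagging items that exceed an agreed limit of acceptability. The dates, the limit, and the variable names are illustrative assumptions, not data from any real project.

```python
from datetime import date

# Hypothetical completed work items: (start_date, finish_date)
completed = [
    (date(2024, 3, 1), date(2024, 3, 5)),
    (date(2024, 3, 2), date(2024, 3, 12)),
    (date(2024, 3, 4), date(2024, 3, 8)),
]

# Cycle time per item, in days
cycle_times = [(finish - start).days for start, finish in completed]

# Throughput: items finished per week over the observed window
window_days = (max(f for _, f in completed)
               - min(s for s, _ in completed)).days
throughput_per_week = len(completed) / (window_days / 7)

# An agreed limit of acceptability; beyond it, investigate root causes
CYCLE_TIME_LIMIT_DAYS = 9
outliers = [ct for ct in cycle_times if ct > CYCLE_TIME_LIMIT_DAYS]

print(f"cycle times (days): {cycle_times}")
print(f"throughput: {throughput_per_week:.1f} items/week")
if outliers:
    print(f"warning: {len(outliers)} item(s) exceeded the limit")
```

The key property of these measures is that they come from delivered results rather than from effort expended, so a drift beyond the limit is an early trigger for root cause analysis rather than a late surprise.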
It is conceivable that we could answer the five questions based on measures of activity, but it’s likely the indicators will be a bit fuzzy and red flags will appear too late for us to take corrective action.