Short-term planning: A continuum of effective practice

There’s been an ongoing discussion in IT circles about estimation. The discussion dates back as far as I can remember in my career, and I suspect it began much earlier than that; possibly around the time people began to apply software to serious matters, which would have been a few minutes after the first digital computer was powered up.

People have focused on estimation for such a long time that I sometimes wonder if they have lost sight of the purpose of software. The practical value of software is not realized through an estimate. An estimate is not a product. Customers don’t buy estimates. Even when a client pays a consultant to provide an estimate for a project, the estimate itself is not the thing that ultimately interests the client. It is only a step toward making a decision about whether and how to go about a proposed initiative. An estimate is a means to an end, not an end in itself.

What’s the end? It depends on the purpose and scope of the estimate.

At one level, coarse-grained estimates are used to help people decide whether it’s worth investing in a software development initiative at all. At that level, estimates are critical to success, and a number of robust, formal estimation methods are available to support those decisions.

At another level, fine-grained estimates of software development tasks are used to inform short-term planning throughout a project. That is the level of estimates I’m interested in exploring in this post.

Well, more to the point, I’m interested in exploring a range of practical ways to inform short-term planning during software development initiatives. Estimates are not the only way.

If we understand that estimation is a non-value-add activity, as that concept is understood in the Lean school of thought, then we will seek ways to support our short-term planning needs that minimize the overhead of predicting task durations. To come up with a spectrum or continuum of practice around short-term planning, let’s peg the two ends of the spectrum in some practical way.

I will suggest that one end of the spectrum is marked by the Old School approach: developing fine-grained task estimates in terms of clock time by thoroughly analyzing each task in advance, before any feedback has been gained from early development results. This represents the maximum possible amount of non-value-add overhead work to produce information useful for short-term planning. It also yields the least meaningful estimates imaginable (that is, if we use my imagination; and that’s the one I’m using at the moment).

Now I will suggest the other end of the spectrum is marked by the lightest-weight software delivery process I know of as of today, a method known as Naked Planning. In case you’re not familiar with Naked Planning, there’s a nice summary on the Pivotal Labs blog, along with links to an interview with Arlo Belshee and a video that describes the planning board. Arlo created Naked Planning at a software start-up incubator where time-to-market was the key business driver: six weeks from concept to cash, and I mean real cash for a real sale. He later adapted the method to a corporate IT environment.

For purposes of this exploration, I’m going to set a baseline assumption that the Old School, bottom-up, analysis-heavy, time-based estimation method represents the least mature approach to the problem, and Naked Planning represents the most mature approach. I’m basing that on the relative amount of overhead work necessary to support the two approaches. Between the two are all the various approaches that people usually argue about online and in meatspace.

On a continuum of effective practice, I will judge one method to be more mature than another if it requires less non-value-add overhead work to achieve the goal: just enough information to support practical short-term planning.

As I see it, the following list represents a progression from immature to mature approaches to this problem:

  1. time-based estimates based on clock time, assuming all individuals in a given role perform identically (Old School)
  2. time-based estimates based on clock time, assuming a particular individual performs the work
  3. time-based estimates based on clock time with a fudge factor in case a different individual performs the work
  4. relative sizing with sizes pegged to clock time
  5. time-based estimates based on ideal time, assuming a particular individual does the work
  6. time-based estimates based on ideal time with a fudge factor in case a different individual performs the work
  7. relative sizing with sizes pegged to ideal time
  8. relative sizing of stories with tasks broken out and estimated in terms of clock time
  9. relative sizing of stories with tasks broken out and estimated in terms of ideal time
  10. relative sizing of stories with no explicit decomposition into tasks
  11. prioritized list with no sizing (Naked Planning)

Approaches near the top of the list involve the greatest amount of overhead work and result in the least meaningful information for purposes of short-term planning. As we progress through the list, each approach involves a bit less overhead work and provides slightly better information to support short-term planning.

If there is any merit to this model, then why aren’t all software development teams already using the lightest-weight approach possible? Do they just love spending time on non-value-add overhead work? No. In my experience it’s because not all software development teams are in a position to do so. There are prerequisites to being able to move to a lighter-weight method. If the prerequisites are not in place, then the lighter-weight method won’t work in context.

So, what’s all this noise about prerequisites? Don’t we already know how to deliver software?

Sure, we all know a way to deliver software. But wouldn’t it be nice if we could reduce the proportion of overhead work we do, thus creating relatively more time for value-add work? To do so, we might have to learn to do other things differently, besides just “estimation” as such.

For example, you can’t stop decomposing stories into tasks if your stories vary widely in size; you need the granularity of the tasks to give you a reasonably consistent basis for planning. So, if we can learn to craft stories that are small and relatively uniform in size, we can drop the overhead of decomposing them into tasks.
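
To make that concrete, here is a minimal sketch of what short-term planning can look like once stories are small and roughly uniform in size: you simply count them and forecast from observed throughput. All the numbers below are hypothetical.

```python
# A minimal sketch of planning by story count, assuming stories have been
# crafted to be small and roughly uniform in size. Numbers are hypothetical.

from statistics import mean

# Stories completed in each of the last six iterations (hypothetical history).
throughput_history = [7, 9, 8, 8, 10, 9]

stories_remaining = 42  # stories left in the release backlog

avg_throughput = mean(throughput_history)              # stories per iteration
iterations_needed = stories_remaining / avg_throughput

print(f"Average throughput: {avg_throughput:.1f} stories per iteration")
print(f"Forecast: roughly {iterations_needed:.1f} iterations to finish")
```

No task breakdown, no per-story estimate; the only inputs are the count of stories and the team’s own history.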

You can’t use plain relative sizing if the team members are unable to separate the notion of size from the notion of time in their minds; all you would end up with is two representations of time. If we can learn to separate the notion of time from the notion of size, we can drop the overhead of figuring out time-based estimates and move to the relatively quick and easy gut-feel relative sizing approach.
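
As an illustration, here is a minimal sketch of iteration planning with gut-feel relative sizes, assuming the team has learned that separation. The story names, sizes, and velocity history are hypothetical.

```python
# A minimal sketch of short-term planning with gut-feel relative sizes.
# Points express size relative to other stories, not hours; any mapping to
# time emerges empirically as velocity. All numbers are hypothetical.

from statistics import mean

# Candidate stories in priority order, with gut-feel relative sizes.
candidate_stories = [("story-A", 3), ("story-B", 5), ("story-C", 2), ("story-D", 8)]

velocity_history = [16, 18, 15, 17]  # points completed in recent iterations
velocity = mean(velocity_history)    # an observed figure, not an estimate

# Pull stories in priority order until the next one no longer fits.
planned, committed = [], 0
for story, size in candidate_stories:
    if committed + size > velocity:
        break
    planned.append(story)
    committed += size

print(f"Velocity: {velocity:.1f} points; planned: {planned} ({committed} points)")
```

Notice that no one ever converts a point into hours; the sizes only have to be consistent relative to one another.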

You can’t adopt a general concept of ideal time if people still associate estimates with the particular individuals who will perform each task; you would have no team-wide “load factor” to apply to generalized estimates. If we learn to take collective ownership of the work and to pursue the generalizing specialist model in our professional growth, we can forget about who will perform each task in the plan, and how that might change the task durations; instead, “the team” is assigned to every task.
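
To show what a team-wide load factor buys you, here is a minimal sketch with hypothetical figures. The point is that the conversion from ideal time to clock time belongs to the team as a whole, not to any individual.

```python
# A minimal sketch of a team-wide load factor, assuming collective ownership:
# ideal time measures uninterrupted, focused work, and the load factor maps
# it onto clock time empirically. All figures are hypothetical.

ideal_hours_completed = 48   # ideal-time value of work finished last iteration
clock_hours_available = 120  # total team clock hours in that same iteration

load_factor = ideal_hours_completed / clock_hours_available  # 48 / 120 = 0.4

# Convert a new ideal-time estimate into expected clock time for "the team",
# without caring which individual picks up the task.
new_task_ideal_hours = 6
expected_clock_hours = new_task_ideal_hours / load_factor    # 6 / 0.4 = 15

print(f"Load factor: {load_factor:.2f}")
print(f"A {new_task_ideal_hours} ideal-hour task runs about "
      f"{expected_clock_hours:.0f} clock hours for the team")
```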

Things become simpler. I like simple things.