Paranoiac-critical project tracking

A recent tweet of mine in which I whined about the waste of tracking work hours garnered quite a few responses. The reaction surprised me, as I was only venting and didn’t expect any replies. Apparently, I’m not the only one who is frustrated by this. Not only is it useless to track individuals’ hours by task or by project, it is particularly annoying to use poorly-designed time tracking tools. I was complaining because I presently have to enter time in three different tracking systems. I wrote that the first seems to have been designed by a roomful of monkeys; the second, by a roomful of monkeys on crack; and the third, by a roomful of monkeys on heroin laced with rat poison.

The ensuing Twitter thread reminded me of the futility and waste of tracking individuals’ time at all. People shared their own experiences with the problem and how they had dealt with it. Later, I recalled having written a blog post on a related topic. I visited the Wayback Machine and located it. Here’s the post, dating from December 29, 2007. It lacks only the image of the Dalí painting Suburbs of the Paranoiac-Critical City; Afternoon on the Outskirts of European History, which I hesitate to reproduce in view of recent…er, paranoia about intellectual property rights.

Salvador Dalí coined the term paranoiac-critical method in the essay, "The Stinking Ass," published in the 1930 book, La Femme Visible. (Both titles are borrowed from paintings of his.) Dalí, among other surrealists, took an interest in the fact that paranoiacs perceived connections between things that most people did not. In fact, paranoiacs perceived connections that were not valid, and cited the perceived connections as proof of their own delusions. This facet of paranoia became a basis of the paranoiac-critical method, in which artists sought to induce illogical and surprising connections in the minds of people who viewed their paintings.

In my career I've observed what might be called three "ages" of software development process evolution, which I like to call the Age of the Waterfall, the Age of the Unified Process, and the Age of Agility and Leanness. Prior to the Age of the Unified Process, customers of software development projects never received any concrete results from the projects they funded until the very end of the development cycle. Months or years might pass before the customer saw any of the functionality they had requested. Even with the Unified Process, it was typical for customers to wait months before they saw any interim results. Naturally, customers wanted some assurance that the development team was actually doing something with their money (other than rolling joints with it).

Since incremental delivery of working software was out of the question, customers had to settle for indirect signs of progress. The only measurable elements in a software development project were money, time, and task completion. All these measures were only notations on a report, possibly presented in a graphical format. None was a real result; none could be loaded and executed on a computer. All were gameable and all were gamed, all the time.

Money was usually tracked as the budget burn rate. If the project was burning money at the predicted rate, it was interpreted as an indirect indication that things were moving along as expected. Therefore, the burn rate was always reported as perfectly smooth and on track. Time was usually tracked as a comparison of estimated time with actual time spent per task. Since programmers were evaluated on the basis of how "accurate" their task estimates were, actual time was always reported as very close to estimated time. Task completion was usually reported as a percentage; that is, "50% complete" meant that half the work necessary to complete the task had been done, and half remained. In practice, since the work done couldn't be measured directly, task completion was calculated straight from the calendar: half the scheduled time gone meant the task was reported as 50% complete. It all looked so reassuringly normal, and those colorful graphs were so beautiful, one could weep.
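The calendar-driven "percent complete" can be parodied in a few lines. This is a hypothetical sketch, not the logic of any real tool: it "computes" task progress purely from elapsed schedule, exactly as described above, ignoring whether any work has actually been done.

```python
from datetime import date

def percent_complete(start: date, end: date, today: date) -> float:
    """Report task 'progress' straight from the calendar,
    ignoring whether any work has actually happened."""
    total = (end - start).days
    elapsed = (today - start).days
    # Clamp to [0, 100] so the chart always looks reassuringly normal.
    return max(0.0, min(100.0, 100.0 * elapsed / total))

# Halfway through the schedule, the task is "exactly" 50% complete.
print(percent_complete(date(2007, 1, 1), date(2007, 12, 31), date(2007, 7, 2)))
```

No matter how the project is actually going, this function can only ever draw a perfectly smooth line.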

The paranoiac-critical method has been interpreted in the context of semiotics. Since these indirect measures of progress are signs, we can extend that semiological interpretation to software development project tracking: managers perceive connections between measures of money, time, and task completion on one hand, and actual progress toward the production of valuable software on the other.

In other words, it's plainly delusional to think that a report showing a smooth burndown of the project budget, or the fact that developers have indeed spent 8 hours a day for the past year working on the project, or that 50% through the project schedule we've completed (you guessed it) exactly 50% of the work, could have any real connection with progress toward the project's goals. It means nothing. Nothing at all. Managers who track progress in this way might as well be looking for the shapes of bunnies and horses in cloud formations.

In this Third Age of software development process, we know that the only true measure of progress for a software development project is working software, and so the most basic measure of progress is running tested features. We know that task estimation is not an attempt to predict the future — our professional goal is not to become "better" at estimating — it's just a short-term planning tool to help us determine how much work will fit into an iteration. We know that if a pair has already taken too long to complete a story, the last thing in the world they can possibly tell us is how much longer the story will take; they've already blown past their estimate, and if they knew why, they would have already completed the story.

At least, we say we know these things. But those clouds are awfully pretty, and if you squint just right and suspend disbelief, you really can see bunnies and horses. No, really. You can. They're so beautiful, one could weep.

People use a variety of measures to track progress on agile development projects. For some reason, there continues to be an emphasis on tracking time and on comparing time estimates with actuals. Popular agile project management tools like VersionOne and Rally explicitly track progress within iterations by burning down progress on individual tasks. The raw data on which the iteration burndowns are based is, simply, the developers' own frequently-updated estimates of how much time remains to complete each task. Even some of the leading books on agile project management recommend this practice, including Craig Larman's Agile & Iterative Development: A Manager's Guide, and Dean Leffingwell's Scaling Software Agility: Best Practices for Large Enterprises.
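The iteration burndown those tools draw reduces to summing the developers' latest remaining-hours guesses, day by day. A minimal sketch of the arithmetic — the task names and numbers here are hypothetical, not VersionOne's or Rally's actual data model:

```python
# Each task maps to the daily "hours remaining" estimates
# re-entered by developers across a five-day iteration.
estimates = {
    "task-1": [8, 6, 5, 3, 0],
    "task-2": [12, 12, 9, 4, 1],
    "task-3": [5, 4, 4, 2, 0],
}

# The burndown line is just the day-by-day sum of those guesses.
burndown = [sum(day) for day in zip(*estimates.values())]
print(burndown)  # [25, 22, 18, 9, 1]
```

The chart's entire input is self-reported estimates of estimates; nothing in it is a measurement of working software.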

So, what's wrong with that? If big-time authors and tool vendors like it, it must be good, right?

Since customers now receive incremental results they can deploy to production or to a staging environment and use with their own hands, they no longer need to gaze at clouds to look for indirect signs of progress. Each project metric has a particular audience, and each audience needs certain information to do its job. Metrics whose purpose was to give an indirect indication of progress are no longer necessary. They are waste. They are Type 2 muda — the type of waste that can be eliminated with a wave of the hand.

Measures such as running tested features, earned value, and velocity are interesting to customers. They provide a real indication of progress in between deliveries of incremental results. They can serve as red flags when the project isn't progressing well. But tracking within a single iteration is not done for the benefit of customers; it is done for the benefit of the team itself, to help them understand what to do next, and whether they are in danger of missing their commitments for the iteration. Carefully plotting a burndown of task progress against estimates, day by day, is Type 2 muda.
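By contrast, a measure like velocity needs nothing from inside the iteration: count the points of the stories that actually finished. A hypothetical sketch, with made-up story names and point values:

```python
# Stories with their point estimates and whether they passed acceptance tests.
stories = [
    ("login", 3, True),
    ("search", 5, True),
    ("reporting", 8, False),  # not done: counts for nothing
]

# Velocity: total points of stories genuinely done this iteration.
velocity = sum(points for _, points, done in stories if done)
print(velocity)  # 8
```

A story is either done or it isn't; partial credit is exactly the calendar-gazing this post argues against.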

Estimates of time remaining on individual tasks will never be accurate. The time team members spend trying to supply those numbers to the project tracking tool is time wasted. Agile teams that are really applying the practices will have more immediate ways to know when things aren't going well. If your team isn't aware of what's going on, you've got deeper problems than will be solved by tracking estimated time.

Stop the waste! Track everything necessary, but nothing more. <sigh> One could weep.