This morning I noticed a copy of the good old Cone of Uncertainty on the wall of a cubicle at a client’s office. It reminded me of Laurent Bossavit’s ebook-in-progress, The Leprechauns of Software Engineering, in which he challenges several of the myths or assumptions that have become traditional lore in the software industry, including that one.
Laurent writes,
What is odd, to any experienced software engineer, is that the Cone diagram is symmetrical, meaning that it is equally possible for an early estimate to be an over-estimate or an underestimate. This does not really square with widespread experience of software projects, which are much more often late than they are early.
Er…wait a second. Isn’t that very assertion an example of “anecdotal evidence?” Is it another Leprechaun? Who says projects are more often late than early?
Silly Dave! Everyone knows who said that. It was the Standish Group, in their famous Chaos Report of 1994. Slightly more than 70% of IT projects fail. That’s why every office building in the industrialized world is on fire, and white-collar refugees are streaming from the cities, leaving their belongings behind except for whatever personal electronic devices they can carry.
Only…they aren’t.
To the contrary, companies seem to be doing okay with their IT initiatives. Enthusiasts of one software development approach or another are quick to say Everyone Else is “failing” (as a way of creating a sense of urgency about adopting their Favorite Thing, I guess), but Everyone Else isn’t too sure who they’re talking about. If anything (and I’ll risk tossing out a bit of anecdotal evidence myself, here), larger companies can get away with being pretty inefficient.
With size, apparently, comes a steady flow of revenue, no matter what (almost) a company does. In large organizations that shuffle information around instead of building things, nearly every moment of every day is spent waiting for someone else to tick a checkbox on an electronic form or to sign off on a document. Most information workers have a large queue of outstanding requests for checkbox ticks and document approvals. They are very busy waiting for each other, and they can prove it because they charge all this time to specific cost buckets.
The proportion of time spent adding value to a product varies inversely with company size. And yet, it seems to make no difference at all to the bottom line. My observation is that once an information-shuffling company grows beyond a certain threshold (perhaps 30,000 people, give or take 5,000), they have to try really, really hard to “fail.” That doesn’t mean they never have any problems, but failure is final. They might have a process cycle efficiency in the IT function of under 2%, and it has no effect on their business success. The bit about the burning buildings and columns of escaping white-collar refugees, or lack thereof, suggests that this is not a problem. There is no crisis.
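For readers unfamiliar with the term, process cycle efficiency is just value-added time divided by total lead time. A minimal sketch, with hypothetical figures chosen to land under the 2% mark mentioned above:

```python
# Process cycle efficiency (PCE): the fraction of total elapsed
# lead time actually spent adding value. The figures below are
# hypothetical, not measurements from any real company.

def process_cycle_efficiency(value_added_hours, total_lead_hours):
    """PCE = value-added time / total lead time."""
    return value_added_hours / total_lead_hours

# A feature request that spent 400 hours in the pipeline,
# of which only 6 hours were hands-on, value-adding work:
pce = process_cycle_efficiency(6, 400)
print(f"PCE = {pce:.1%}")  # PCE = 1.5%
```

The rest of that lead time is the waiting described above: queues of checkbox ticks and document approvals.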
Sure, they’re wasting a lot of time and money, and the quality of working life for the technical staff isn’t the best, but they aren’t “failing” left and right.
Well, if they’re so inefficient, then why aren’t they failing? It seems to me that an information-shuffling company doesn’t really face the same time-to-market pressure as a thing-making company. Consider a bank or an insurance firm or a credit card issuer. How frequently do their customers sit down and assess the alternatives? Do people switch banks or insurance providers every couple of months? Not really. People don’t think about those things every day. Services like those are in the background of people’s lives. Customers consider switching only when they experience problems dealing with their current provider.
Other sorts of information-shufflers share this trait, even if they do have physical infrastructure to worry about. People don’t switch wireless phone companies or television providers any more often than they switch banks or insurers. It isn’t mainly because of the long-term contracts those companies encourage people to sign; it’s more basic than that. As long as the TV signal doesn’t die during the Big Game and the phone network doesn’t drop their calls too often, customers have more important things on their minds, or at least more interesting things. What this means is these companies can be pretty slow about getting software projects done, and it doesn’t impact their customers very much.
Where’s the crisis, then? Well, the kind of IT worker who entered the field in anticipation of facing creative challenges every day and working on technically-interesting projects alongside deeply passionate and committed colleagues isn’t going to find much professional satisfaction by making routine modifications to the same legacy code base over and over again, all the while dragging their projects through a maze of pointless administrative procedures as if dragging an anchor through a swamp. That’s understandable, but it doesn’t amount to an industry-wide business problem of “crisis” proportions. Nor is the feeling shared by the majority of IT workers, for whom the job is just a job, and whose personal passion in life lies elsewhere.
So, what about the Chaos Report? In a 2005 article in IEEE Software, “IT Failure Rates: 70% or 10-15%?”, Robert L. Glass explains “what” about it. He suggests (these are direct quotes):
- “Failure” and “success” are tricky words in the software business. How do you categorize a project that is functionally brilliant but misses its cost or schedule targets by 10 percent? Literalists would call it a failure, realists a success.
- “People tend to focus on organizations that fail. For example, an early US Government Accounting Office study of projects that failed, which came up with an astounding failure rate, turned out to have examined projects already known to be in trouble when the study began.
I agree that “failure” and “success” are tricky words. I often say they reflect binary thinking, because it’s either one or the other. The words don’t allow for a mixture of positive, negative, and neutral outcomes, such as we usually encounter in the mystical land known as Real Life. Besides, those words feel very final to me. Once you’ve failed, you’re done. You might as well slit your belly open with a tanto. On the other hand, once you’ve succeeded, you’re done. You might as well buy a villa in Tuscany and retire.
That technically brilliant project that missed its budget and schedule target had a mixed outcome. In some respects the outcome was positive. In some respects the outcome was negative. Binary, final words like “success” and “failure” tend to steer us away from analyzing why and how those different outcomes occurred. We celebrate or we rationalize, but we don’t bother to analyze and learn. Maybe that’s okay, if you’re not interested in continuous improvement.
I wonder if the real problem with IT project success rates is that we’ve been working from a faulty definition of “success,” lo these many years. The canonical definition of “success” for an IT project is “all features delivered on time and on budget.”
That definition suggests a cost center mentality. We get our budget, our timelines, and our requirements from Someone Else. Our fiduciary responsibility is to use the budget allocation for its intended purpose, and prevent its being squandered for other things, like office supplies and pizza. What’s missing from the definition? Any mention of a customer or of value. People do like to talk about value, of course. Consider our old friend, Earned Value, which re-casts “budgeted amount used up” in the award-winning role of “value delivered.” It’s still just “budgeted amount used up,” when you scrape the makeup off. But value sounds so much better!
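To see why Earned Value is just “budgeted amount used up” in costume, consider the standard formulas: every quantity in them derives from the budget and the plan, never from anything the customer received. A minimal sketch of the textbook calculations (the project figures are hypothetical):

```python
# Standard earned-value formulas. Note that every term is computed
# from the budget and the schedule -- "value" here never measures
# what the customer actually got.

def earned_value(budget_at_completion, pct_work_complete):
    # EV = total budget * fraction of planned work completed
    return budget_at_completion * pct_work_complete

def cost_performance_index(ev, actual_cost):
    # CPI > 1 means under budget; CPI < 1 means over budget
    return ev / actual_cost

def schedule_performance_index(ev, planned_value):
    # SPI > 1 means ahead of schedule; SPI < 1 means behind
    return ev / planned_value

# Hypothetical project: $100k budget, 40% of planned work done,
# $50k spent so far, $45k of work planned to be done by this date.
ev = earned_value(100_000, 0.40)              # 40,000 "earned"
cpi = cost_performance_index(ev, 50_000)      # 0.8 -- over budget
spi = schedule_performance_index(ev, 45_000)  # ~0.89 -- behind
```

A project could score a perfect CPI and SPI while delivering features nobody wants; the formulas have no term for that.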
So, what might a definition of “success” for a software project look like, if it didn’t reflect a cost center mentality? Maybe it would be something like this: “The business capability our customer requires, delivered at the right time, at the right price point, and with an appropriate level of quality for the purpose.” That definition could stand some refinement, but I like the fact it mentions the customer and his/her needs, and doesn’t just talk about how smoothly we used up someone else’s money to deliver something that was envisioned months or years earlier by someone who was trying to guess what the customer might like to have.
I’m still not too happy about the word “success,” though. The definition may be better, but the word is still the wrong word. You see, as soon as the customer has this thing in hand, this thing that supports the business capability he/she needs just now, at the right price etc., he/she is going to be looking ahead to the Next Thing. So, is it “success,” really, or just one step on a longer journey? Should we check out that villa in Tuscany, or start thinking about the Next Thing?
As words go, “failure” is no better than “success.” Rudyard Kipling got it right when he called triumph and disaster “impostors.” People like to say they learn from their failures. Okay. I guess it means 85-90% of the time (or 29%, if you accept the 1994 Chaos Report), they learn nothing from their experiences. I prefer to learn from the outcomes that I experience; that mixture of positive, negative, and neutral results that offers so many learning opportunities. I don’t learn only from failure, but from all experience, if I can.
Of course, it’s easy for me to learn, as I don’t know much to begin with. There’s no problem pouring Guinness into an empty glass, provided one doesn’t try to do it too fast. Some of the smarter folks out there may be holding a full glass already. That’s a whole nother problem.
I like this post. For me, success is most often a chimera, a status hijacked by vested interests to support their perspective, self-image and position.
Gilb and Goldratt both point the way to a less subjective, single-perspective definition of success. As you may recall, I have coined the term “covalent” to describe a test which I believe any claim for success must pass before it is worthy of serious consideration.
There IS evidential data (cf. ISBSG) to illustrate the huge (greater than three orders of magnitude) disparity in performance across thousands of different projects. And they don’t lay some more-or-less arbitrary, binary, success-or-failure filter over the numbers.
As to an operational definition of “success”, aside from the test of covalence, I feel business and technical people alike have WAY too narrow a perspective on “success”. I’d like to see some emphasis shifting to the wider social implications of work and the world of work.
Of course, if we get to see the end of projects (hurrah!), then “project success”, at least, will become irrelevant. I hope I live long enough to see that day.
– Bob
Great post, and I love the Kipling reference. That’s spot on.
> I wonder if the real problem with IT project success rates is that we’ve been working from a faulty definition of “success,” lo these many years.
Absolutely; in fact, many faulty definitions, not just one!
For many years now I’ve been pointing people to the same book, every time this topic of conversation pops up, and I’ll do so again now: Bruno Latour’s “Aramis or the Love of Technology”.
Latour conclusively shows that “success” or “failure” are relative and retrospective terms, that a single “project” (assuming for the sake of this sentence that we know what the word means) can shuttle back and forth between the two statuses a number of times before the dust finally settles (and that it takes a long, long time before the dust *really* settles).
By way of illustration, see: http://bit.ly/wFDk0i
Another illustration is how the “poster boy” projects, those used to tout the value of this or that approach, often end up being reevaluated later on. A famous example is the C3 project, which was the “poster boy” for Extreme Programming before it became the main argument of XP critics. A lesser-known example is the New York morgue project which was the “poster boy” for Chief Programmer Teams – the project that Harlan D. Mills worked on – which, I have read, was actually judged quite disappointing by the customer.
An example outside of software is Le Corbusier’s Villa Savoye, which was considered the shining example of Le Corbusier’s architectural skills, but which actually nearly landed him and his clients in court, as it turned out to be nearly unlivable: it rained inside and in the garage, among other problems of that kind. See http://calitreview.com/48
This is one reason why we should treat the Chaos figures with the greatest caution, even when (as has been trumpeted recently) they seem to provide support for the conclusion we favor, i.e. that Agile projects are “three times as successful”.
I’ve stopped quoting the Chaos report, largely when I found that I was using it as a crutch in my argumentation: an authoritative-sounding numerical statement which was in fact only propping up the weakness and ignorance in my own discourse. Much better to *admit* to my ignorance, and show what steps I was taking to reduce it, such as really thinking about the matter.
So, “success” defined as on-time and on-budget delivery according to a plan – that’s a meaningless definition if the plan you followed was a deluded one to begin with, or if you delivered all the features in the requirements but the features turned out not to have value. It’s an equally meaningless definition if your engineers are so fed up with the work environment that they quit to go work elsewhere.
There has been at least one alternative definition proposed, by Norm Kerth, which is at least as sensible: “At the end of a successful project, everybody says, ‘Gee, I wish we could do it again’.”
[…] The problem with success (and failure) Written by: Dave Nicolette […]