Pain today, gain tomorrow?

Many of us who try to help others learn and adopt good software development practices have made the case that if a person would only try a recommended technique and stick with it long enough to get used to it, they would eventually see improvements in code quality, shorter delivery times, and probably easier maintenance of the code base, too.

It’s generally true that when we begin to learn a new skill we work more slowly than we will once we’ve learned to apply it effectively. That seems to make many people hesitant to try anything different from their current practices. The idea of slowing down temporarily in order to improve “later” is hard to accept; people have difficulty taking on a short-term cost to achieve a long-term gain.

But what do those words mean: “eventually,” “later,” “after a while”? When is “eventually”? How much “later”? How long is “a while”? People fear that the temporary slow-down of the initial learning curve will hurt delivery time too much, or will take too long to overcome.

Extra work, extra time

Intuitively, people see writing unit tests as an additional task: “extra” work beyond what they currently do, and therefore “extra” time on top of the time they currently spend, resulting in delayed delivery. Never having used the recommended techniques, they lack a gut feel for how much time they might actually lose.

Everything they currently do takes weeks or months to complete. So when they consider a new practice, they can only envision it taking weeks or months to learn. After all, everything else pertaining to software development takes weeks or months.

A couple of friends of mine who do the same sort of work have made comments recently, independently of each other, that started me thinking about this question.

How is it that people assume the impact of the learning curve will be severe, and that the purported benefits of good practices will take “forever” to materialize? Our experiences (different as they are) have been that the benefits of good practices appear almost immediately.

One comment, from Bryan Beecham, was along the lines of “…consider giving TDD a try. Is it really going to take longer? How much time have you spent already?”

It reminded me of a separate comment from James Grenning, made as he was explaining data he has gathered from participants in his online TDD course: the data indicate the benefits of TDD are felt immediately; there’s no “in the fullness of time” involved.

I thought about some of my own past experiences, which are in line with James’ data, and began to wonder if we’re explaining the benefits wrong when we suggest the value only comes “later.” If the benefits come quickly, and with very low effort, why don’t we explain it that way? Are we afraid of repercussions, should the new practice not go well initially?

A story

I remember a team at a client a couple of years ago that estimated the time it would take to refactor their code base. I had advised them to consider a list of specific refactorings that would position them to add several features their management had requested over the following months.

Without the refactorings, every change to the code would have increased technical debt and made it harder to make the next change in the list. The code base was small (so far), but already quite messy. Building more and more features on top of it would have created a nightmare, and the longer-term development effort would have bogged down.

They estimated it would take them three months to complete the preparatory refactorings. Their manager insisted they could not wait three months even to begin working on the new features. It was out of the question.

I sat down and did the refactorings myself, and showed them. It took about 50 minutes, including adding missing unit test cases. It took longer to walk them through the changes and explain the rationale than it had taken to do the refactoring. The team accepted the changes without much discussion and began working on the new features.

Why the disparity? None of the team members had ever refactored code before. When their manager asked how long it would take to do something completely unfamiliar, they erred on the side of caution. That’s normal.

Another story

On another occasion, with another client, I sat down with one of the developers who supported a certain application. It was a Java Spring MVC application in which 90% of the code lived in the no-arg constructor of a spring-loaded bean. That constructor instantiated about 40 collaborators. The rest of the application lived in a single utility class containing about 50 static methods. He insisted it would be too time-consuming and too risky to try to clean up the code.
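
To make that concrete, here’s a minimal sketch of the shape of that code. All the names are hypothetical stand-ins; only the structure reflects what I saw.

```java
import java.math.BigDecimal;

// Hypothetical stand-ins for the real collaborators.
class CustomerDao { }
class PricingService { }
class InventoryClient { }

class OrderProcessingBean {
    private final CustomerDao customerDao;
    private final PricingService pricingService;
    private final InventoryClient inventoryClient;
    // ...the real bean had roughly 40 such fields...

    // The no-arg constructor news up every collaborator directly,
    // so nothing can be substituted or tested in isolation.
    OrderProcessingBean() {
        this.customerDao = new CustomerDao();
        this.pricingService = new PricingService();
        this.inventoryClient = new InventoryClient();
        // ...and so on, about 40 times...
    }
}

// The rest of the application: a single bag of static methods.
final class AppUtils {
    static BigDecimal applyDiscount(BigDecimal total) {
        return total.multiply(new BigDecimal("0.90"));
    }
    // ...the real class had about 50 of these...
}
```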

I started to show him how to pull the code apart in small pieces, little by little, as in the sketch below. Within about ten minutes he saw the value of the approach and took the lead in refactoring more of the code. From then on, he gradually remediated the code every time he had to get into it to make a change, and he showed his teammates how to do it. He was pretty happy with the approach.
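
The first small step looked roughly like this (continuing the hypothetical sketch above): introduce a constructor that accepts one collaborator, and have the old no-arg constructor delegate to it, so the existing Spring wiring keeps working while the tests gain a seam.

```java
class OrderProcessingBean {
    private final CustomerDao customerDao;
    // ...the other collaborators get migrated the same way, one at a time...

    // Legacy entry point: the existing Spring wiring still works.
    OrderProcessingBean() {
        this(new CustomerDao());
    }

    // New entry point: a test can now pass in a test double.
    OrderProcessingBean(CustomerDao customerDao) {
        this.customerDao = customerDao;
    }
}
```

Each step is small enough to do safely inside whatever change you were already making, which is why the remediation could happen little by little.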

Yet another story

Another team at another client challenged me to refactor any small portion of a certain Java method at the center of one of their main business applications. The method was 12,000 lines long. But the entire thing consisted of the same sort of if/else structure, over and over again, with slight variations.

This is what we used to call “wallpaper code” back in the old mainframe days: if you printed the program source on greenbar paper and hung strips of the printout on the wall, it would resemble a wallpaper pattern. We knew what to do even back then, before Martin Fowler coined the term “refactoring.”

It was straightforward to extract smaller methods from the long method, and to begin to see duplication and near-duplication in some of the extracted methods. There was a good deal of duplication hidden in the numerous if/else blocks, and it would have been feasible to clean up the code; but they had already decided to replace the whole application. In this case, a few hours of refactoring would have saved them the cost of the rewrite project, as well as the opportunity cost of having a team occupied with the rewrite instead of doing value-add work.
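
To give a feel for what those first extractions looked like, here’s a hypothetical reconstruction; the state names and tax rates are invented, but the shape is faithful.

```java
import java.util.List;

class WallpaperExample {
    record Order(String state, double amount) { }

    // Before: the same if/else shape, over and over, with slight variations.
    static void postTotals(Order order, List<Double> ledger) {
        if (order.state().equals("TX")) {
            double tax = order.amount() * 0.0625;
            ledger.add(order.amount() + tax);
        } else if (order.state().equals("CA")) {
            double tax = order.amount() * 0.0725;
            ledger.add(order.amount() + tax);
        }
        // ...the real method continued like this for thousands of lines...
    }

    // After Extract Method: the duplication collapses, and the only real
    // variation between the blocks turns out to be a single rate.
    static void postTaxedTotal(Order order, double rate, List<Double> ledger) {
        ledger.add(order.amount() * (1.0 + rate));
    }
}
```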

How much time have you spent already?

Bryan’s question is relevant. If you’re afraid to try TDD because of the “extra time” it would take, have you considered the “extra time” you already spend before your code is really in proper shape to release? Just slinging code as fast as you can type doesn’t get it there.

What about the time you spend in a debugger before you feel confident enough in the code to hand it off to a tester? (Yes, I know hand-offs are not contemporary good practice, but you already said you don’t practice TDD or incremental refactoring or trunk-based development or pair programming or mob programming, and it doesn’t sound likely that you’re doing continuous integration, either, so what is one to assume about hand-offs in your world?)

What about the time you spend trying to check functionality after having written the code? Sure, you can use a tool that generates test cases from the production code, but then all you learn is that the code “does whatever it does.” That can be a useful way to get started with a legacy code base (e.g., by creating an approval test, as sketched below), but it isn’t a good way to drive new functionality forward.
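
For reference, an approval test can be as small as this sketch. It uses the open-source ApprovalTests library for Java together with JUnit 5; LegacyReportGenerator is a hypothetical stand-in for whatever legacy code you need to pin down.

```java
import org.approvaltests.Approvals;
import org.junit.jupiter.api.Test;

// Hypothetical stand-in for the legacy code under test.
class LegacyReportGenerator {
    String generate(String quarter) {
        return "Report for " + quarter; // imagine tangled legacy logic here
    }
}

class LegacyReportApprovalTest {
    @Test
    void reportMatchesApprovedOutput() {
        String report = new LegacyReportGenerator().generate("2024-Q1");
        // The first run fails and writes a *.received.txt file; once a human
        // approves that output, the test pins down current behavior as-is.
        Approvals.verify(report);
    }
}
```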

I mean, it’s pretty unsurprising that the code “does whatever it does.” That’s exactly what all code does. How do you find out if the code does what it’s supposed to do? How can you find out if the code might do something you really, really wish it wouldn’t do? You have to count that time as well as the original typing time. Otherwise, it isn’t an apples-to-apples comparison of TDD vs. not-TDD.
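
Contrast that with a test written before the code, which states what the code is supposed to do, and what it must never do. A minimal sketch, with an invented DiscountCalculator and an invented pricing rule:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class DiscountCalculatorTest {
    // What the code is supposed to do:
    @Test
    void appliesTenPercentDiscountToOrdersOverOneHundred() {
        assertEquals(99.0, new DiscountCalculator().priceFor(110.0), 0.001);
    }

    // What we really, really wish it would never do:
    @Test
    void refusesNegativeOrderAmounts() {
        assertThrows(IllegalArgumentException.class,
                () -> new DiscountCalculator().priceFor(-5.0));
    }
}

// The hypothetical implementation those tests drive out.
class DiscountCalculator {
    double priceFor(double amount) {
        if (amount < 0) throw new IllegalArgumentException("negative amount");
        return amount > 100 ? amount * 0.9 : amount;
    }
}
```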

A different message?

But even without all that, James’ results suggest you would experience benefits almost from the moment you first tried to use good development practices. The idea that you have to struggle through some sort of painful, long-term process before you begin to see improvement may simply be wrong.

If that’s the case, then maybe we should not try to convince people to use good practices on the basis of “long-term benefit.” Maybe a message like “stop wasting time by not using good practices” would be more to the point.