
Testing as the core discipline of software delivery

Of the basic disciplines involved in software development and delivery – analysis, design, programming, testing, management, deployment, operations, architecture, etc. – programming is usually seen as the most technically demanding and complicated to learn. Many people look primarily to programming when they assess the effectiveness of their software delivery processes. Historically, the knee-jerk response to slow delivery has been to hire more programmers. After all, software is code, right? Therefore, if there’s a problem delivering software, it must have something to do with the way it’s coded.

After some 36 years in the IT industry, most of it in a hands-on software development role, I’ve come to the conclusion that the core discipline in software development is not programming, but rather testing. Even if programming is objectively more time-consuming to master than the other disciplines, it seems to me that testing is more critical to success.

When we find we are promoting code that has a lot of defects, and we’re spending too much time chasing down bugs and fixing them, what do we do? We add more comprehensive after-the-fact testing at multiple levels, we look for ways to break dependencies and isolate code so that we can test it more thoroughly, we try to improve the effectiveness of our testing methods, and we adopt a test-first approach to building the code in the first place.

When we find our delivery process includes wasteful back-flows from test to development, what do we do? We have programmers and testers collaborate more closely throughout the development process, and we encourage programmers to learn to think like testers.

When we find we are not delivering what the customer really needs, what do we do? We pull testing forward in the process and blend requirements specification with executable examples, adopting a behavior-driven approach and eliminating the need to match up test cases with requirements after the fact.

When we find our applications don’t support the necessary system qualities (non-functional requirements or “-ilities”), what do we do? We add test cases and learn appropriate testing methods to ensure we are aware of the state of system qualities throughout the process, to avoid unpleasant surprises late in the game.
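
To give a flavor of what such a test case can look like, here’s a minimal sketch in Python; the operation and the 200 ms latency budget are invented for illustration:

```python
import time

def fetch_order_summary(order_id):
    """Hypothetical operation standing in for a real service call."""
    time.sleep(0.05)  # simulate 50 ms of work
    return {"order_id": order_id, "status": "shipped"}

def test_order_summary_meets_latency_budget():
    # A performance "-ility" expressed as an ordinary test case:
    # fail if the operation drifts past its agreed 200 ms budget.
    start = time.perf_counter()
    fetch_order_summary(42)
    elapsed = time.perf_counter() - start
    assert elapsed < 0.2, f"latency budget exceeded: {elapsed:.3f}s"
```

Run regularly, a check like this surfaces a degrading system quality as soon as it starts to slip, rather than late in the game.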

When we find our applications exhibit unexpected behavior around edge cases, what do we do? We increase the amount of exploratory testing we do, and we look for more effective ways to perform exploratory testing.

The solutions to our delivery problems keep coming back around to testing of one kind or another: changing the methods, the timing, or the scope of testing. We don’t often change the way we work in the other disciplines, apart from adding more testing activities to them and asking specialists in those disciplines to learn more about effective testing practices.

It might be worth mentioning that I’m not using the word “testing” in the narrow sense that software testing specialists use it. I mean it in a broader sense that includes “checking” and “monitoring” and so forth. I’m thinking of the old adage that we should begin with the end in mind, and I’m thinking about automating as much of the repetitive and routine work as possible.

Anecdotally, when I’m working in the role of programmer, I find it very useful to approach development by writing test cases that express the functionality I would like to build before writing any code; otherwise it’s easy to go off on a tangent or slip into gold-plating.
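
To make the rhythm concrete, here’s a minimal sketch in Python using pytest conventions; the shopping-cart example is invented, not taken from any real project:

```python
# Step 1: write the tests first, expressing the behavior we want.
def test_empty_cart_total_is_zero():
    cart = Cart()
    assert cart.total() == 0

def test_total_sums_item_prices():
    cart = Cart()
    cart.add("apple", 3)
    cart.add("bread", 5)
    assert cart.total() == 8

# Step 2: write just enough code to make the tests pass -- no more.
# (The class appears below the tests to mirror the test-first sequence;
# by the time pytest runs the tests, the whole module has been loaded.)
class Cart:
    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)
```

The tests fail until Cart exists, which is exactly the point: they define “done” before any production code is written.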

(The purist will insist this isn’t really “testing,” it’s a design technique. That’s true, but it’s not the whole truth. We apply testing skills when we do test-driven development, and the result is an automated regression test suite whose value extends well beyond the initial development work.)

When working in the role of analyst, I find it very useful to think about requirements in terms of how I will assure myself they have been satisfied, in a repeatable, simple, and reliable way. Behavior-driven development is the most effective way I know of at the moment, and it’s even more effective when automated.
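
Teams often write these executable examples in Gherkin with tools such as Cucumber or behave; the same given/when/then idea can be sketched in plain Python, with the discount rule below invented purely for illustration:

```python
def discounted_total_cents(total_cents, is_returning_customer):
    """Hypothetical business rule under specification (prices in cents)."""
    if is_returning_customer:
        return total_cents * 90 // 100  # 10% off
    return total_cents

def test_returning_customers_get_ten_percent_off():
    # Given a returning customer with a $100.00 order
    total = 10_000
    # When the discount rule is applied
    charged = discounted_total_cents(total, is_returning_customer=True)
    # Then the customer is charged $90.00
    assert charged == 9_000
```

The example doubles as a requirement statement and a repeatable check that the requirement is still satisfied.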

When working as an architect or in any sort of infrastructure support role, I find it very useful to apply test-first and behavior-oriented concepts to work such as installing software products, configuring servers, deploying components, and more.
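
For example, a server’s desired end state can be written down as checks before the configuration work begins. This is only a rough sketch using the Python standard library; the web server, port, and file path are placeholders:

```python
import shutil
import socket
from pathlib import Path

def test_web_server_binary_is_installed():
    # End state: the nginx binary is on the PATH.
    assert shutil.which("nginx") is not None

def test_web_server_is_listening():
    # End state: something accepts connections on port 80.
    with socket.create_connection(("localhost", 80), timeout=2):
        pass  # connection succeeded

def test_expected_config_is_deployed():
    # End state: the configuration file is in place.
    assert Path("/etc/nginx/nginx.conf").exists()
```

These checks fail until the server is configured correctly, and afterward they serve as a standing verification that it stays that way.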

I’ve seen cases where common IT functions such as ETL or batch merges and updates have benefited from defining the end state first and then using that definition to guide development.
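
Here’s a toy sketch of that idea, with the merge rule invented for illustration: the expected end state is written as a test first, and the merge code is then written to satisfy it.

```python
def merge_updates(master, updates):
    """Apply a batch of updates keyed by id over a master record set."""
    merged = {row["id"]: row for row in master}
    for row in updates:
        merged[row["id"]] = row  # last write wins
    return sorted(merged.values(), key=lambda row: row["id"])

def test_merge_end_state_defined_first():
    master = [{"id": 1, "qty": 5}, {"id": 2, "qty": 7}]
    updates = [{"id": 2, "qty": 9}, {"id": 3, "qty": 1}]
    # The expected end state, written down before the merge code existed:
    assert merge_updates(master, updates) == [
        {"id": 1, "qty": 5},
        {"id": 2, "qty": 9},
        {"id": 3, "qty": 1},
    ]
```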

Infrastructure development such as the creation of an ESB (enterprise service bus) benefits from this approach as well, since it enables us to focus on the features that will be needed by applications currently in the pipeline rather than waiting until every detail of the ESB is complete before providing any services. The old Big Bang approach is at the heart of many past ESB/SOA implementation failures; by beginning with a clear definition of what is needed, we can deliver support as early as possible with no negative business impact.

In an operations or support role, I find it very useful to have automated facilities in place to handle business activity monitoring and to predict imminent system failures before they turn into outages that affect customers; in a sense “testing” the ongoing production operation itself, in real time.
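
As a toy sketch of the prediction side (the metric, readings, and thresholds are all invented), we can extrapolate a trend and raise a flag before the limit is reached:

```python
def hours_until_full(samples, capacity):
    """Estimate hours until a steadily growing metric hits capacity.

    samples: list of (hour, usage) observations, assumed roughly linear.
    Returns None if the metric is not growing.
    """
    (t0, u0), (t1, u1) = samples[0], samples[-1]
    rate = (u1 - u0) / (t1 - t0)
    if rate <= 0:
        return None
    return (capacity - u1) / rate

# Example: disk usage in GB sampled over six hours, 100 GB capacity.
readings = [(0, 70), (2, 76), (4, 82), (6, 88)]
remaining = hours_until_full(readings, capacity=100)
if remaining is not None and remaining < 24:
    print(f"warning: projected full in {remaining:.0f} hours")
```

Real monitoring tools are far more sophisticated, but the principle is the same: a continuous, automated “test” of production health.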

In general, approaching IT work with a tester’s mindset seems to mitigate or eliminate various types of problems. How will I demonstrate the software behaves as expected? How will I know when to stop adding features to this new application? How will I know when this server is about to go off the rails? How can I explore the limits of what this system can support? How can I collect operational and usage information that will help with capacity planning, or predict an impending outage? Many of these questions can be answered through some flavor of monitoring, checking, or testing.

6 thoughts on “Testing as the core discipline of software delivery”

  1. And still we are amazed to see that most universities don’t include Quality & Testing as a mandatory course for programmers.
    BTW – The usefulness of TDD for regression can be argued; the tests we write for the initial stages are much more time-consuming for efficient regression.
    @halperinko – Kobi Halperin

    1. Yeah, I agree, Kobi!

  2. Hi Kobi,

    I’m not sure what you mean regarding tests for initial stages being more time-consuming for regression. It might be interesting to see your tests and code.

    Yes, you can argue all you want. If I were managing your project, I would not tell you to use TDD. After all, most people are smarter than me, and I assume you are, too. I would only tell you that you must not have any regressions, and that I must be able to check the state of your code base at any time, easily, without having to come and find you. If you know better ways to accomplish those goals than TDD, you’ll get no argument from me.

    1. Hi Dave,

      Interesting and a long-ish post. I am still going through it in my mind, but wanted to reply already to this comment.

      TDD can’t prove regressions don’t exist. (BTW, have you ever seen bug-free software?) Nothing can. Moreover, what if the TDD tests themselves are buggy? (And yes, obviously TDD is not about testing, and yes, when a human writes code, it’s impossible not to involve some level of testing all the time.)

      On a side track, I don’t quite understand why Kobi would ever want to have efficient regression – or what that would be. I believe he means regression testing, but for some reason is refusing to say that.

      1. Hi Jari,

        Thanks for sharing your feedback.

        It’s true that TDD can’t prove regressions don’t exist. TDD also can’t prove fish don’t live on Mars.

        I disagree that TDD is “obviously” not testing. I think it is “obviously” more than one thing; testing is one of them.

        When a human writes code, it involves testing on some level all the time – yes. If that kind of ad hoc, random “testing” is sufficient for your needs, I will not ask you to do more. I would ask (if it were my place to do so) that you deliver reliable, clean code. How you do that is up to you.

        Kobi may be using the term “regression” informally to mean “regression testing.” Speaking to a group of testers recently, I asked, “Who likes regressions?” Two of them raised their hands. Given several opportunities to reconsider what they were agreeing to, they continued to raise their hands. I doubt they actually like regressions; I’m pretty sure they were thinking of “regression testing.”

      2. Something else occurred to me that may be relevant to Jari’s comment. When working in the role of programmer, we use TDD specifically as a design technique. He may be thinking in this way when he writes that TDD is “obviously” not testing. But there are two testing-related aspects to TDD, IMO.

        First, in order to use TDD effectively as a design technique, we have to apply a combined programmer-tester mindset. I often hear (and read) programmers say that TDD “causes” bad design. I haven’t had the opportunity to see all their code, but in 100% of the cases where I could see the code and the tests, I discovered that these programmers (a) did not understand basic software design principles, (b) did not know how to isolate their code to enable repeatable, non-fragile, fine-grained unit test cases, (c) did not actively and mindfully evolve the design guided by the test cases, and (d) often did not bother to refactor incrementally, if at all. If you hit your thumb with a hammer, is it the hammer’s fault? (I’ll sketch point (b) in code at the end of this comment.)

        Second, after the automated unit test cases exist, they continue to serve as a regression test suite. Ideally, all ongoing changes to the code base are test-driven, including both planned enhancements and production support fixes. In this way, the design of the solution continues to evolve incrementally throughout the production lifetime of the product. Thus, the test cases play a dual role as a design tool and as the first line of defense against regressions.
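
        To illustrate point (b) from the first paragraph above, here is a minimal sketch; the names and the exchange-rate example are invented. Injecting the collaborator lets the unit test run without touching the network:

        ```python
        class PriceService:
            # The rate source is injected, so a test can substitute a stub
            # instead of hitting a real exchange-rate API over the network.
            def __init__(self, rate_source):
                self._rate_source = rate_source

            def in_local_currency(self, usd_amount):
                return usd_amount * self._rate_source.rate("USD")

        class StubRates:
            def rate(self, currency):
                return 1.25  # a fixed rate keeps the test repeatable

        def test_price_conversion_runs_without_the_network():
            service = PriceService(StubRates())
            assert service.in_local_currency(10) == 12.5
        ```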
