
Discipline

Premise 1: Self-discipline is the only meaningful form of discipline.

Premise 2: Simpler solutions are usually preferable to more-complicated solutions to the same problem.

Premise 3: Without self-discipline on the part of those using it, no process model or method or framework or tool of any kind provides the value its proponents believe it can provide.

Premise 4: A formal process that imposes strict rules tends to teach people to follow rules rather than to cultivate self-discipline.

Premise 5: The longer and more deeply a person invests in a given idea or habit, the more difficult it becomes for that person to let go of the idea or habit, or even to question it.

Premise 6: In the diffusion of any innovation, the main reason Innovators and Early Adopters achieve better results than the Late Majority and Laggards is that the former are predisposed to try unfamiliar things and take risks – not primarily that the particular innovation is intrinsically better than whatever it replaces (even if it happens to be a little better on the merits).

Premise 7: The factor that makes an innovation useful is that it comes at a time when it is needed, in a context where it is needed, and to people who are in a position to make use of it; not necessarily that the innovation itself is “best” in a general or permanent sense.

Corporate IT in the 1980s

Throughout the 1980s, people attempted to find effective ways to build and deliver software products. Conventional wisdom of the day held that the best way to do this work was a strictly linear process, with every step spelled out in excruciating detail and everyone involved following formally defined rules to the letter at every stage.

Instead, that approach produced the outcomes documented in the old CHAOS reports, which indicated that most software projects were never completed at all, delivered unsatisfactory results, or exceeded their budgets and timelines by significant amounts.

And so the 1980s and 1990s saw a number of attempts to define more-effective ways to do software development and delivery, from the V-Model to the Spiral method to the Objectory Process to Evolutionary Project Management to the Rational Unified Process to Crystal Clear to Extreme Programming to Scrum and beyond.

How did corporate IT get into such a poor state by the 1980s, and if results were unsatisfactory and costly, why didn’t things change for so long?

The headwaters of the waterfall

Prior to the early 1960s, computers were used mainly by government agencies and military organizations. Then large corporations began to see the value of computers. This happened at a time when “software engineering” was not a widely understood concept, and when effective methods of software design, testing, and management were still at an early stage of development. Margaret Hamilton coined the term in 1966 or shortly thereafter; there was no term for it before then. She and her team were figuring out how to do the work, and they were working in an engineering context; hence “engineering.”

Pioneers of software engineering were still working out the fundamental principles – many of which were presented and published in the two NATO Software Engineering Conferences in 1968 and 1969 – even as large corporations were figuring out how to make use of computers in business. The discipline of software engineering was not yet solidified or clearly defined.

Early programming languages were based on the assumption that the people who needed the software would write it themselves. There was no concept of a job title such as “computer programmer.” Software development was assumed to be a relatively trivial exercise, and not a full-time occupation.

APL

Kenneth Iverson began working on a mathematical notation for manipulating arrays at Harvard in 1957 and continued the work at IBM from 1960, where he and Adin Falkoff developed a programming language based on the notation. Their paper, published in 1962, was the origin of the term “programming language” (as far as I know); the authors considered their notation a “language” because it had a strict syntactic structure. The language was called APL, for “A Programming Language.” The name wasn’t as humble or as obvious as it sounds today – the idea of a language for programming computers was new at the time. APL was meant for mathematicians and engineers to use in their own work. Iverson’s 1962 book is available online: A Programming Language.

COBOL

Similarly, the work of Grace Hopper and her team on FLOW-MATIC in 1957 was intended to enable business people – mainly accountants – to write programs to support their own work. That work became the foundation of COBOL, or Common Business-Oriented Language, defined by the CODASYL committee in 1959. COBOL gained market share quickly, thanks to friendly pressure from the US Department of Defense on computer manufacturers to develop compilers.

COBOL source is structured similarly to a business document (such as a response to a request for proposal, a type of document familiar to military folk), with an Executive Summary (Identification Division), description of external dependencies (Environment Division), enumeration of parts and materials needed (Data Division), and details about the proposed actions (Procedure Division). This design was meant to be accessible and usable by accountants and other business people who were not computer scientists.

Higher Order Software

Dan Lickly and Margaret Hamilton took some of Hamilton’s ideas to market after the Apollo program was canceled, forming a company called Higher Order Software. Based on Hamilton’s concept of the same name from the Apollo program, HOS was originally about checking source code for structural problems – what we would call static code analysis today – but it was ultimately extended into a system we would recognize as Computer-Aided Software Engineering (CASE), which became a “thing” in the 1990s.

They called it “higher order” because it was the first time (as far as I know) software was designed to operate on other software, rather than directly on a problem such as spacecraft guidance or business accounting. So it was “higher” – one level of abstraction above – software that solved a specific domain problem. Its genesis was the goal of achieving zero defects on the Apollo Guidance Computer (AGC) project. Hamilton had learned the team were using a method they called Augekugel to verify the software. When she learned that word meant “eyeballing,” and that the checking was done manually by one team member, she recognized it as a good candidate for automation.
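Checks of that kind are routine today. As a minimal sketch of the general idea – not a reconstruction of HOS itself, and with an arbitrary modern rule (flagging bare except: clauses) standing in for HOS’s structural checks – here is a toy static analyzer in Python using the standard ast module:

```python
# A toy structural checker: parse source into a tree, walk it, and flag
# a structural problem. The bare-"except:" rule is illustrative only.
import ast

SOURCE = """
def load(path):
    try:
        return open(path).read()
    except:
        return None
"""

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    # A bare "except:" clause has no exception type attached to it.
    if isinstance(node, ast.ExceptHandler) and node.type is None:
        print(f"line {node.lineno}: bare 'except:' can hide failures")
```

What once took a dedicated team member eyeballing listings now takes a dozen lines and a parse tree.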

The age of roll-your-own software

That story is not directly pertinent to the subject of this post, except as an illustration of the fact that there were no well-defined methods or techniques in software engineering in the early years. The desire for formal methods led to the popularity of the linear approach we later came to call “waterfall,” which was derived from approaches that had proven effective in the engineering field. People were not sure exactly what sort of animal “software” really was, so they related the work to engineering.

Apart from well-known examples like APL and COBOL, and HOS as an early example of 4th-gen languages, there were many other programming languages designed to model solutions in a way that was intuitive and usable by end users of computers in specific domains. The expectation was that people would write software as a part of their regular jobs, and not that they would ask programming specialists to develop programs for them.

Computers go mainstream

But when computers came to be widely used in the business sector, the people who needed programs were neither interested nor skilled in working with computers. It was one thing for the Apollo Guidance Computer team to write their own software; they were a hand-picked group of top-tier engineers and scientists. It was quite another to ask office workers in the middle of the bell curve (if I may put it so indelicately) to do something as bizarre as writing their own computer programs. It would have taken up far too much of their time.

And so the demand for specialists to write computer programs preceded the emergence of any sort of standards around software engineering or any available training or education in universities or technical schools. Most of the people who took up the challenge of working as computer programmers in those days had neither formal education nor a white-collar mindset about work. They figured things out on the job using their intuition, creativity, and whatever ad hoc methods they could come up with on their own.

Business management wanted to bring the chaos under control. The salient question for this exploration is: What assumptions, mindset, and approaches did managers of that era apply to the problem? Those factors ultimately created the working conditions that led software professionals to seek alternative methods. Had working conditions been better and had the outcomes of software projects been better in the 1970s and 1980s, we would probably still be using linear SDLC processes today and feeling as if everything was fine. People sought alternatives to the working conditions and software development methods of the 1970s and 1980s because everything was not fine.

Function follows form

In that era, general management assumptions were based on Taylorist principles. Key among those is the assumption that humans are essentially nothing more than machine parts, with the organization as the machine. Management methods developed on southern US plantations and in early industrial-era factories were applied to white-collar workers in office buildings, pushing paper and doing arithmetic…and writing software. Formal engineering-based methods seemed perfect to business managers of the time, as they provided a clear set of rules and process steps for computer programmers to follow.

A second aspect of the Taylorist approach is a focus on resource utilization rather than throughput (the focus we favor today). Humans were considered “resources,” and the rationale was that resources operate efficiently when work is divided into functional specialties and allocated to workers with the corresponding skill sets, under the direction of knowledgeable managers. This approach led to the “functional silo” problem that still plagues some larger enterprises today.

How does this relate to the topic of discipline, and particularly self-discipline? Taylor viewed workers as mere brutes, lazy by nature, who could only be stirred to productive work by heavy-handed management. Among many quotes that illustrate this view, Taylor wrote, “In our scheme, we do not ask for the initiative of our men. We do not want any initiative. All we want of them is to obey the orders we give them, do what we say, and do it quick.”

This mentality was the basis of 20th-century corporate management. The result was corporate structures, standard procedures, human resource practices, and organizational cultures designed to force workers to be productive against their will, following the rules laid down by management by rote and without question. In the field of corporate information technology, that is what caused the 1980s to be…well, the 1980s.

Back to the 1980s

As mentioned before, the 1980s and 1990s were a time of innovation in software engineering methods and practices. Software professionals were highly motivated to seek better ways of working, as most were in work environments that were stifling and stressful.

Many worked more than 80 hours per week nearly all the time, and missed weekends, holidays, vacations, and key events in the lives of their children in an attempt to keep up with the demands made on them at work. The IT field was characterized in part by ease of finding a new job and generally good pay, but also in part by nervous breakdowns, divorces, suicides, and career changes motivated by physical and mental health concerns.

The US government responded by creating the H-1B visa program as part of the Immigration Act of 1990. The program was justified by the fabricated “software crisis,” which suggested there were insufficient qualified US workers to meet the demand for talent in the IT space. It enabled large numbers of foreign workers to enter the US and take IT jobs. They were willing to work for low pay, and they were hesitant to complain about Tayloristic management practices for fear of losing their eligibility to remain in the country and work. The large scale of the program set a tone for IT work generally, slowing the rise of salaries and creating risk for IT workers who wanted fairer treatment.

The one outcome the H-1B program did not achieve was to “solve” the made-up “software crisis.” Software projects continued to run over budget and schedule, to fail to meet customers’ needs, and often to fall apart before they were ever delivered – all while maintaining high pressure on IT workers for no reason except the faulty assumptions of Tayloristic management.

When the program was called into question and caps were proposed in the 2000s, Federal Reserve chairman Alan Greenspan rose to defend it, telling a US Senate subcommittee in 2009 that the H-1B visa quota was “far too small to meet the need” and that the proposed caps would protect “US workers from global competition, creating a privileged elite.” Setting aside the irony of a member of a privileged elite denouncing others as a privileged elite, the H-1B program has enabled corporate management to sustain outdated methods and unhealthy working conditions far longer than normal free-market economic forces would have allowed.

The 1980s and premises 1, 2, 3, and 4

Remember the premises stated at the beginning of this piece? Consider premises 1, 2, 3, and 4 in light of the software delivery methods prevalent in the 1980s and 1990s.

Premise 1 states that self-discipline is the only meaningful form of discipline. Tayloristic management and a rigidly-defined SDLC (system development life cycle) process impose a kind of “discipline” on workers for all the wrong reasons and in all the wrong ways. People operating under that sort of system have no opportunity to practice self-discipline. All they can do is follow rules.

Premise 2 expresses a preference for simple solutions over complicated ones. The detailed rules of 1980s-era SDLC processes were anything but simple. Their complexity may be one cause of the generally poor results in the software field during that era.

Premise 3 suggests that self-discipline is necessary to achieve good outcomes from any process or method. When self-discipline was removed from the equation in the 1980s, companies limited their ability to achieve positive outcomes. This point harks back to Ken Schwaber’s phrase, “illusion of control.”

Premise 4 states that when standard procedures are designed to make people follow rules, all they will learn is how to follow rules. They will never learn self-discipline. This point is critical to understanding why it has been so difficult for proponents of “agile software development” to help software development teams embrace ideas that call for risk-taking, transparency, trust, experimentation, collaboration, and short feedback loops. For generations on end, software professionals have been trained to follow rules…or else.

How can they be expected to do anything else, short of a mindful, intensive, guided, and sustained effort to change? Methods that are simple in principle are all but impossible to institute in large organizations – not because, as a Scrum mantra has it, they are “easy to learn but hard to master,” but because people in this field have no experience in cultivating self-discipline. Teams that are self-disciplined, in my experience, achieve good outcomes regardless of the formal methods they use; it is not the method itself that leads to good or bad outcomes. Given a modicum of self-discipline and an organizational context not based on Taylorism, Scrum is both easy to learn and easy to master…and probably unnecessary.

Several alternatives to the rigid linear SDLC were elaborated and applied in the real world, with mixed results. This isn’t the place to list them all or summarize their histories, except as needed to support further exploration of the topic of self-discipline. Let’s skip ahead to the Rational Unified Process (RUP).

Whatever happened to RUP?

The Rational Unified Process, or RUP, represented a significant improvement over the traditional linear SDLC model for software delivery. This excerpt from the book The Rational Unified Process: An Introduction summarizes its evolution: (book excerpt).

RUP was not the first attempt to improve on the linear SDLC model. It was built on a foundation of earlier efforts. Compared with most of the earlier attempts to improve software delivery work, RUP seems to have caught on in the market in a big way. It became the basis of many consultants’ and trainers’ business models and professional identities.

RUP was legitimately a big improvement over the linear SDLC. It emphasized revisiting requirements repeatedly, rather than assuming all requirements could be nailed down in advance of starting the work. It emphasized taking feedback from early, partial releases of software and folding that information into the still-emerging design of solutions. Thanks to its iterative approach, RUP provided information to decision-makers about costs, timelines, and business value that enabled some projects to be terminated early, when they had delivered “enough” value to customers, rather than lingering on and on for no particular reason, as many linear projects did. RUP had the practical goal of reducing business risk in software projects. Prevailing methods of the era tended to maximize risk and cost while hiding information about those things from key stakeholders. There is no doubt RUP was better.

But RUP was not the last word in process improvement for software delivery. Others were exploring still more-effective approaches. At around the time RUP was peaking – that is, when RUP adoption was reaching the Late Majority group and RUP consultants were enjoying phenomenal income levels – the next generation of software delivery methods was reaching the Early Adopter group.

So, what happened to RUP? It was supplanted by a new generation of methods. Those methods built on what had gone before, including RUP itself, but they had different branding and somewhat different structures, buzzwords, and processes. For RUP consultants at the time, the branding was critically important. Their professional (and sometimes personal) identities were entirely tied up with RUP branding.

The method that took center stage was Scrum. In part, the rapid rise of Scrum can be attributed to the fact that it addressed organizational issues of the late 1980s and early 1990s even more effectively than RUP. Schedule slippage was still a big issue for large software projects. Scrum’s time-boxing mechanism helped keep the work moving forward, and its concept of an ordered Product Backlog with a single responsible individual, the Product Owner, helped maintain focus on customer value while limiting time wasted on low-value activities.

Possibly for that reason, when Scrum proponents described the advantages of Scrum, RUP consultants reacted by insisting there was nothing in the formal definition of RUP that precluded using it in a Scrum-like way.

But it seems to be an aspect of human nature that we tend to follow the path of least resistance. RUP still included quite a number of rules and had a certain amount of formality baked into it. There were diagrams and documents of various kinds, all with standard formats and recommended conventions. There was a lot to learn. Scrum could be applied to good effect without having to learn quite so many complicated rules, standard formats, and so forth.

Although RUP provided all those templates and standards to simplify matters for users, allowing people to omit artifacts they didn’t need in their context or for their project, most software development organizations could not grasp the idea of omitting things. Their long indoctrination to follow all the rules, no matter what, led them to assume every RUP artifact had to be produced on every project.

Both the rigid approach to RUP on the part of its users and the response of the RUP consulting/training community to the rise of Scrum illustrate premise 5 – the longer and more deeply a person invests in a given idea or habit, the more difficult it becomes for that person to let go of the idea or habit, or even to question it. It was almost impossible for someone with a deep professional investment in RUP to “pivot” to a newer method. Most of them could not even see Scrum in any way other than through a RUP lens. All their comments about Scrum were in terms of RUP.

What’s happening to Scrum?

Today, Scrum is at its peak, if not beyond its peak. It has been “adopted” (for some definition of that word) by nearly every company. Diffusion is well into the Laggard group at this point. (Yes, you and I both know of teams and companies that are doing Scrum-in-name-only. So it goes with the Laggard group.) Lighter-weight approaches, largely based on Lean thinking, are in the Early Adopter and Early Majority phases.

So, what’s happening to Scrum? It is starting to be supplanted by a new generation of methods. Those methods build on what has gone before, including Scrum itself, but they have different branding and somewhat different structures, buzzwords, and processes. For Scrum consultants, the branding is critically important. Their professional (and sometimes personal) identities are entirely tied up with Scrum branding.

Scrum is legitimately an improvement over RUP. It focuses on breaking functional silos, improving the effectiveness of communication between different stakeholders, exploiting the power of trust and collaboration, sharpening the focus on business value, and using the time-box mechanism at multiple levels to drive both effective delivery and continuous improvement. All these factors were pertinent in corporate IT organizations of the 1980s and 1990s.

But Scrum is not the last word in process improvement for software delivery. Others are exploring still more-effective approaches. At around the time Scrum adoption was reaching the Late Majority group and Scrum consultants were enjoying phenomenal income levels, the next generation of software delivery methods was reaching the Early Adopter group. Now those methods are in the Early Majority.

The method that is taking center stage is Kanban. In part, the rapid rise of Kanban can be attributed to the fact that it addresses organizational issues of the 2000s through 2020s even more effectively than Scrum. Coordination of work at a scale larger than a handful of Scrum teams is problematic, and solutions tend to be based on adding more rules rather than streamlining existing processes (e.g., SAFe, DAD). When teams fall into a hypnotic stream of “stories” that are all mostly the same, without clear visibility into the larger context, the result is “Zombie Scrum.” And the emphasis on longstanding, stable teams leads to career stagnation for individual engineers.

Large-Scale Scrum, or LeSS, offers an interesting alternative to scaling Scrum: de-scaling the organization within which Scrum operates, so that a method designed for small-scale operation can function properly. This has been a good solution in some organizations. But other factors have been changing in the IT field that may make Scrum less interesting. These are mainly technical practices and collaborative work techniques rather than process changes, and they make some of the issues Scrum was designed to solve go away.

Scrum’s time-boxes helped move the work forward in an era when teams promoted software to production only every few weeks or months. Today, continuous integration and continuous deployment are baseline practices. When the frequency of commits is much shorter (say, every few minutes) than any reasonable iteration or sprint length (say, one week), what’s the value of the iterations? Kudos to Scrum: The problem has been solved! Hurrah! We can stop taking the medicine now.

Another area of improvement has been in collaborative working methods. Teams today often use mob programming (also called ensemble work or samman). That practice alleviates the need for some of the standard Scrum events, as team members are always together and always know the status of everything. It also alleviates the need for some other traditional practices, notably formal code reviews.

During the pandemic, many software developers participated in remote mob programming events. Through these events, we learned how to work collaboratively in a 100%-remote setup. Many people have stated a preference to continue working remotely after the pandemic has passed. Remote mobbing techniques enable high collaboration without requiring physical collocation. This, too, reduces the issues Scrum was designed to address.

Kanban’s emphasis on finishing rather than starting, limiting work in process, and concentrating effort on the most important piece of work at the moment leads to smoother flow than any time-boxed process. Metrics and methods derived from Lean help teams and organizations zero in on the most problematic areas in their delivery systems. Kanban is explicitly an improvement method, not a process framework.

In companies trying to scale Scrum, perhaps the single most difficult area is the roll-up of metrics from the team level, through organizational layers, ending up on an executive dashboard. The metrics usually applied to Scrum just don’t roll up in that way. Kanban doesn’t have that problem. It’s easy to set up a multi-level Kanban system with consistent metrics at all levels.
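Here is a minimal sketch of why that works – the team names, dates, and sample data are invented for illustration. Cycle time and throughput are computed the same way from the same raw events at every level, so a program-level view is simply the union of the teams’ completed items:

```python
# Flow metrics roll up because the computation is scope-independent:
# the same function works on one team's items or on all of them.
from datetime import date
from statistics import mean

# Completed work items: (team, started, finished). Invented sample data.
done = [
    ("team-a", date(2024, 3, 1), date(2024, 3, 4)),
    ("team-a", date(2024, 3, 2), date(2024, 3, 9)),
    ("team-b", date(2024, 3, 1), date(2024, 3, 6)),
]

def flow_metrics(items):
    """Throughput (count of finished items) and average cycle time in days."""
    cycle_times = [(finished - started).days for _, started, finished in items]
    return {"throughput": len(items), "avg_cycle_days": mean(cycle_times)}

# Same function, different scopes: per team, then rolled up to the program.
for team in ("team-a", "team-b"):
    print(team, flow_metrics([i for i in done if i[0] == team]))
print("program", flow_metrics(done))
```

That is all the “roll-up” amounts to: the metric’s definition doesn’t change with scope.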

Scrum suffers from the same phenomenon as RUP did in its day – many consultants and trainers are so closely identified with Scrum that they cannot move away from it without impacting their livelihoods negatively. They cannot “pivot.” They have tied themselves so tightly to Scrum-as-a-brand that they have no choice but to ride that train to the end of the line.

Their reaction to the growing popularity of Kanban is defensive.

One argument is that Kanban is “too loose” for most teams: Scrum provides more rigorous guidelines, so (the argument goes) teams use Kanban as an excuse to avoid the discipline built into Scrum. The problem with that argument is that Scrum imposes discipline through its rules. It’s true that without self-discipline, neither Scrum nor Kanban (nor RUP nor “waterfall”) will be effective – but that isn’t a factor inherent in any of these methods. A method that imposes discipline from outside cannot help people cultivate self-discipline.

Another argument is that Kanban is only for teams that don’t have a Product Backlog. This represents a superficial understanding of the two methods. Either can be used for planned work and for unplanned work, or a combination. Kanban tends to be a little easier to manage when a team has a mixture of planned and unplanned work, but with self-discipline (yes, there it is again) a team could use either of these methods effectively. Of course, they could also use no-method-in-particular effectively.

A third approach by Scrum devotees is to cast Kanban as a kind of “advanced form” of Scrum. They claim a team must begin with Scrum (to develop discipline…hmm) and then, maybe, they can “grow” into Kanban. But remember premise 4: The problem of self-discipline isn’t solved by adding more rules; that only teaches people to follow rules.

People speak of “scrumban” as if it were a branded method just like Scrum or Kanban. But if memory serves, the term was coined by Corey Ladas in a 2007 blog post in which he shared the story of a Scrum team that adopted a couple of useful ideas from Lean, such as WIP limits and classes of service (see the sketch below). Scrumban wasn’t, and still isn’t, a “thing.”
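For readers who haven’t met those two Lean ideas, here is a minimal sketch of a WIP limit and classes of service in code; the card names, priorities, and the limit of 3 are invented for illustration:

```python
# Two Lean ideas in miniature: a WIP limit on an "In Progress" column,
# and an expedite class of service that jumps the queue.
from dataclasses import dataclass, field

@dataclass(order=True)
class Card:
    priority: int                     # 0 = expedite, 1 = standard
    title: str = field(compare=False) # excluded from ordering

class Column:
    def __init__(self, wip_limit):
        self.wip_limit = wip_limit
        self.cards = []

    def pull(self, backlog):
        """Pull the next card by class of service, respecting the WIP limit."""
        if len(self.cards) >= self.wip_limit:
            raise RuntimeError("WIP limit reached: finish something first")
        backlog.sort()                # expedite cards sort ahead of standard
        self.cards.append(backlog.pop(0))

backlog = [Card(1, "report"), Card(0, "prod outage"), Card(1, "cleanup")]
in_progress = Column(wip_limit=3)
in_progress.pull(backlog)
print(in_progress.cards[0].title)     # -> "prod outage"
```

The point is not the code; it’s that both ideas are simple enough to state in a dozen lines – no framework required.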

The idea of Kanban as “advanced Scrum” reminds me of early Christianity, when people assumed in order to become a Christian one first had to become a Jew. As becoming a Jew (for men) meant getting circumcised, and that is not a procedure that is much desired among adult males, recruiting was down. Enter Saint Paul, who explained that one could become a Christian directly, from any point of origin; no circumcision necessary. Et voilĂ ! recruiting went up. Go figure.

Similarly, a team can set up a Kanban system without having to endure the rites of Scrum. To get value from it, they will need to cultivate self-discipline. All this jumping from one method to another over the years has not resulted in much improvement because of the lack of self-discipline, and not because each method wasn’t a good idea at the time.

Both the rigid approach to Scrum on the part of its users and the response of the Scrum consulting/training community to the rise of Kanban illustrate premise 5 – the longer and more deeply a person invests in a given idea or habit, the more difficult it becomes for that person to let go of the idea or habit, or even to question it. It is almost impossible for someone with a deep professional investment in Scrum to “pivot” to a newer method. Most of them cannot even see Kanban in any way other than through a Scrum lens. All their comments about Kanban are in terms of Scrum.

Sound familiar? Say “yes” unless you want me to keep on typing, because there are many more examples. It will happen to Kanban at some point, too. People are still looking for better ways to work. This post isn’t to “sell” Kanban; it’s to highlight the importance of self-discipline. In part, too, it’s to suggest we can’t encourage self-discipline by imposing rules, but maybe we can by expecting people to exhibit it, and allowing them to experience minor negative consequences when they don’t (with appropriate guidance for learning).

Diffusion of innovation

Premise 6 states that the main reason Innovators and Early Adopters achieve better results than the Late Majority and Laggards is that the former are predisposed to try unfamiliar things and take risks, not primarily that the particular innovation is intrinsically better than whatever it replaces.

In the early days of RUP, XP, Scrum, and other methods, Early Adopters and the Early Majority enjoyed very good results, and those results drove each method forward in the market. But as a new method penetrated the Late Majority group, the relative improvement seemed to fall off sharply.

Usually, corporate management assumed the reason for the fall-off in value was that the new method “didn’t scale.” <sigh>Scale, scale, scale</sigh>. To make the new thing scale, they added more and more rules and complexity to it. I suggest the reason for the fall-off in value lies in the predisposition of people who end up in the various “groups” Rogers posited in 1962.

People and the companies they start or join share a predisposition toward risk-taking, trying unfamiliar things, rebounding from “failure,” and other factors relevant to success. Those in the Early Adopter and Early Majority groups tend to be more open to these things than those in the Late Majority and Laggard groups. If something doesn’t appear to be working immediately, the Early Adopters will try to make it work, and even help improve it so it will work better for others. The Laggards will find a way to blame the new thing and look for excuses to revert to the old thing.

It makes no difference what the “thing” is. RUP, Scrum, and Kanban aren’t difficult because of anything inherent in the methods themselves; they’re difficult for Late Majority and Laggard companies because of intrinsic characteristics of those companies.

In case you’re still unsure at this point, if you work for a company that is only now starting to pay attention to Scrum and agile – ideas that are 30 years old or more – you are in the Laggard group. Change is hard because the system in which you operate is predisposed to maintain the status quo; change is not hard because of the particular “new thing” you’re trying to adopt just now.

The right place at the right time

The final premise of this piece is that something is useful if it comes about at the right time and place and is available to people who need it and are in a position to use it. A thing isn’t automatically useful “just because,” with no context.

The ancient Greeks knew about steam power. They didn’t use it to make locomotives. They used it to open and close doors to important buildings and to make impressive gadgets such as flying robot pigeons. They didn’t seem to apply the technology to anything we, in modern Western cultures, would consider “useful.” They wasted it on things that were cool and fun.

Why didn’t they build locomotives? Perhaps locomotives would not have solved contemporary problems for them. They had the know-how, but not the need.

The ancient peoples of the Americas knew about the wheel, too – archaeologists have found wheeled toys made for children in Mesoamerica. So why didn’t the Incas build wagons? Because their far-flung empire was located in a mountain range. Those famous Inca roads have stairs. A lot of stairs. They had the know-how, but not the enabling conditions.

So it is in our own field. Higher Order Software got a few government contracts, probably not least because of Lickly’s and Hamilton’s personal connections, but the idea of static code analysis and the idea of CASE tools didn’t catch on industry-wide. Why not? The time wasn’t right. The market wasn’t ready. Software professionals in the 1990s weren’t any smarter or more creative than those in the 1960s. In the 1990s, the time was right for CASE and static analysis. CASE didn’t stick. Static analysis did. Things change…even if we don’t want them to.

Consultants: Learn to pivot

It’s one thing for clients to be stuck in the past. If they weren’t, at least to some extent, we wouldn’t have work to do. But it’s unbecoming of us, in the technical coaching and consulting field, to fall prey to what Ivar Jacobson has called “method prisons.” We’re supposed to be the ones who are guiding others, not the ones who are unable to let go of an idea whose time has passed, however beneficial in its proper time and place.

Let’s stop teaching specific methods and frameworks (or at least stop presenting our favorite one as The Answer To All Things), and let’s start encouraging self-discipline, continuous improvement, and awareness of principles.

And when something new comes along, let’s try to respond with curiosity rather than defensiveness.

Most of those new things will fizzle, but one of them will be the next RUP, Scrum, or Kanban. Be prepared to recognize it when it comes, unless you want the name of your favorite method prison to be engraved on your career’s headstone.