Beware the golden hammer

Change agents and coaches often ask me how to overcome resistance. The coach wants a team to adopt a practice or technique that he/she “knows” is good. Why won’t the team members consider his/her suggestions? It’s possible that they just don’t see any pressing need to change the way they work.

It turns out that experienced professionals aren’t slumped in their offices in a puddle of their own drool, dazed and defeated, surrounded by the smoldering remains of failed projects, hoping a savior will appear and teach them a magic trick that will make them better at their jobs. Notwithstanding industry studies that highlight general problems in software delivery, development teams aren’t “failing” left and right. Software development organizations do deliver results. Few of them deliver results as effectively as they could, but they aren’t necessarily aware of this, or they don’t see it as especially important because it isn’t causing them any real headaches. People have settled into a comfort zone where they are respected, they feel successful, and they are well paid. When a change agent comes charging at them with golden hammer in hand as if they were in need of rescue, insisting that they change the way they think about and approach their work, why should they listen? The truth is they shouldn’t listen. They’re doing exactly the right thing when they “resist.”

What people hear when coaches talk

This is the sort of thing I hear from coaches:

The senior programmer on the team I’m coaching adamantly refuses to try [this practice], and the other team members follow his lead. I’ve explained the value of it, and I’ve shown him studies, but he insists it’s nonsense and the studies are not meaningful. I’m sure [this practice] would help the team, because it helped the last team I coached. How can I overcome his resistance?

This is what the “resistant” senior programmer hears when the coach encourages him to try [this practice]:

Even though you’ve never been on a “failed” project, you’ve actually been doing your job all wrong for the past 15 years. You need to listen to me and do as I say. If you “resist,” then you’re a fool. Furthermore, all of this will happen right in front of your manager and your junior colleagues.

This is the normal and justifiable reaction by the team when a coach arrives on the scene:

We deliver working software and our customers aren’t totally frustrated; at least, they haven’t outsourced us yet. We’ve trained our management to have certain expectations of us, so that we don’t have to burn ourselves out just to get a “satisfactory” performance rating. We’re pretty good programmers; we can get code out the door one way or another, and no one is complaining. It isn’t as if our stakeholders actually know what they need or want anyway, or that they could consume new software features any faster than we deliver them already. We have cultivated the perception within our company that we are experts in the field of software development. That works for us; life is good. Now along comes a “coach” who has a bag of tricks that he likes. He wants to shove his favorite tricks down our throats. He’s telling our management that we aren’t really all that good at software development, and if only we would adopt his bag of tricks, we would become superstars. We’re going to feed his brains to the mall zombies and crush his testicles in a garlic press (not necessarily in that order).

Now, it’s quite possible that [this practice] would indeed help the development team in question. That isn’t the mistake. The mistake is to recommend a solution before the group has identified and understood a problem, and is therefore open to the idea that some sort of solution is called for. A solution in search of a problem often does more harm than good. When someone brings you a solution, and you don’t have a problem (or you aren’t aware of the problem), then all kinds of alarms should go off in your head. Otherwise, you wouldn’t be thinking for yourself. People aren’t resisting, they’re just demonstrating healthy survival instincts. You don’t honestly expect people to throw away all the assumptions, training, and experience on which their successful careers have been built just because you say so, do you? Would you do that, if the roles were reversed?

Don’t overcome resistance — avoid it

I’ve found the best way to solve a problem is to change the frame of reference such that the problem ceases to exist. Then we don’t have to solve the problem at all. It’s generally easier to do nothing than to do something. With that in mind, what we want to do is change the frame of reference such that there is no resistance. Then we don’t have to overcome resistance at all. If you are as lazy as I am, then you will appreciate this line of reasoning.

Those of us who are familiar with contemporary software development and management practices will already have a number of ideas in mind when we assess the current state of an organization or team, based on the low-hanging fruit we can see when we walk in the door. Most of the places we visit exhibit similar characteristics; there are organizational patterns we see time and time again. In my view, it’s important to avoid mentioning any specific solutions at first, even when we’re pretty sure at the outset that we will end up recommending certain improvements.

Instead, what we want to do is to understand the forces at play in the organization and identify specific opportunities for improvement based on the people’s own perception of their problems. They have to tell us what their problems are, what their goals are, where their pain points are. We can help them recognize and quantify those things, but ultimately we are there to help them solve their problems, not just to implement whatever our favorite Thing may be. Our first goal has to be to help them recognize their own problems; once they perceive the problems for themselves, they will ask for help. Everyone resists a hit on the head with someone else’s golden hammer. No one resists solving their own problems.

Besides all that, what appears obvious at first glance quite often turns out to have non-obvious root causes. A little analysis couldn’t hurt before offering recommendations. It can save us the trouble of trying to solve the wrong problem.

Why don’t clients know what they need?

Most of the time when we are called in to an organization, management has a vague sense that things aren’t quite as good as they could be, but they don’t necessarily understand what to do about it. It seems to me there are a few reasons why people may not be aware of the opportunities for improvement in their own organizations:

  1. We humans tend to slip into habits and follow a routine. We don’t maintain constant vigilance in re-assessing our results and our methods over time. We “experts” are no better about this than our clients. We enjoy a slight advantage over our clients in that we move from one environment to another, one set of circumstances to another, and we have to analyze and solve problems in different contexts. We have to reset our brains over and over again. The clients, in contrast, work in the same environment for years on end; they aren’t forced to re-examine their situation from scratch over and over again…so, they don’t. We wouldn’t either, under similar circumstances. Their situation just feels normal to them. It doesn’t feel like a “problem,” so they aren’t looking for a “solution.”
  2. Many of the innovations in software development practices, project management, and business management, as well as recent findings in psychology and organizational sociology, have only begun to make inroads into industry within the past 10-15 years. Senior managers were trained prior to that time; younger technical professionals graduated from university before the new ideas were incorporated into university curricula (they still aren’t, for the most part). They are at Level 1 of the Conscious Competence Ladder. They may be looking directly at a problem and not realize it is a problem at all. It’s the way they learned to do things in school. It’s the way things have always been done. It’s the way that is specified in formal, published standards.
  3. Strangely, very, very few people in business have any knowledge of or skills in root cause analysis or systems thinking. When they get the sense that something is wrong, they use stochastic methods to jump to conclusions about what the solution should be. By the time they decide to call in outside help, they’ve already stirred things up in the organization two or three times, no one is really sure what the problems are, people are tired of having the rug pulled out from under them, and everyone is in a “resistant” mood. (You may have noticed that when you go into a client organization for the first time, you often see the reaction, “Oh, no, here we go again! Our manager must have read another magazine article.”)

The buzzword problem

I’ve observed that “resistance” is often a reaction to buzzwords. The buzzwords that many IT consultants and coaches use to promote and describe their work tend to raise red flags in people’s minds. The consultant or coach may think that a given buzzword denotes Good Things and expect it to be received with joy. People have their own preconceptions, assumptions, and memories of negative outcomes associated with the buzzwords. When people react negatively, the consultant or coach may be surprised and unprepared to cope with unexpected and seemingly irrational “resistance.”

At that point, the consultant or coach faces a long, touchy, and possibly doomed effort to clarify definitions in the face of suspicion, doubt, and uncertainty. I’ve found that the simple expedient of avoiding popular buzzwords side-steps that difficulty. If your path is blocked by a big rock, you can either climb over it or walk around it. I prefer to walk around it. I focus on the client’s goals, problems, and pain points. Useful methods and practices fall out naturally from an exploration of solutions to specific problems. There’s no need to label the solutions with popular buzzwords. It boils down to: (1) Where are you trying to go? (2) What is standing in your way? (3) What can we do about it?

Rather than walking in with a big sack of magic tricks, I think it’s better first to learn a bit about the environment, the people, the organization, the consumers of the software, the overall business context, and the prevailing technological constraints. That means doing a lot of listening and observing initially, and proportionally less talking and recommending. I’ve found this approach also helps to alleviate the initial defensiveness that people feel when management throws a “coach” at them to tell them all the things they’re doing “wrong.”

Of course, I can’t tell you exactly how to do this. That will depend on your personality and style, your experiences and background, your preferred methods and tools, and the particular circumstances of each client. I can share my own approach, for what it’s worth.

1. Clarify the mission

In my experience, most clients begin with either (a) a wild guess about some specific Thing they think they want, like “implement Scrum” or “coach the testers,” or (b) no idea at all what their problems are, and they say things like “we want to do better” or “we want to be more productive.” If you will allow me to oversimplify and generalize, I’ll say the basic reason for this is that the people don’t have a clear definition of their own mission.

When we are engaged as consultants, trainers, or coaches, we are supposed to be helping to solve problems. In order to identify an appropriate solution, we have to understand exactly what the problem is. A solution without a problem is no solution at all. So, what is a problem? As I see it, a problem is anything that stands between you and your goal. Therefore, by definition, if you have no goals you have no problems, and if you have no problems you need no help.

When I see that the client doesn’t have a clear idea of their own mission, I usually start the engagement (or more likely, a pre-engagement assessment or conference call) by asking them to come up with some sort of mission statement. What I want is for everyone to aim for the same target or at least face in the same direction. It will be the basis for judging whether something is “good” or “bad” in the client’s context, and whether any change actually results in improvement.

Their first attempt often turns out more-or-less like this: “We build great software and serve our customers.” Then I try to help them refine the statement by asking them to quantify the words. I’m really more interested in getting them to think concretely about what they want than I am in crafting a cool mission statement.

In the ensuing conversation, I try to get them to think of more specific adjectives. What is “great software,” exactly? The most impressive business CRUD app ever written? Probably not. What is “serve our customers”? Give them a menu and tell them about the daily specials? Cut them into pieces and toss them into the lion enclosure? Probably not.

Usually people are thinking of factors like business value add, reliable systems, timely delivery, consistent delivery, predictable delivery, and sustainable delivery. Terms like those give us an idea of the kinds of measurements we will need in order to understand the current state and to quantify the impact of any changes we make, as well as offering a slightly more concrete sense of the organization’s mission.

Another angle I want to be sure to introduce early in the relationship is the idea of continuous improvement. Since they have decided to engage outside help, it’s already clear that they are interested in improvement. Building on that, I ask them if it would be useful to create a working environment characterized by double-loop learning, creativity, innovation, and professional growth opportunities for the staff. This speaks to the idea of helping clients become self-sufficient in carrying new ideas forward. I want to build that idea directly into the improvement plan. I haven’t yet heard anyone say “no” to that.

So far, we’ve gone from something like…

We build great software and serve our customers.

…to something like…

We encourage and support innovation, creativity, professional growth, and continuous improvement while delivering reliable, high-quality software that adds business value for our customers in a consistent, predictable, and sustainable manner.

This sort of formulation usually pleases the client. They feel better about where they’re going and everyone is starting to understand what needs to happen in the course of the engagement. The best part is that it only takes a few minutes. Yes, the wording is still vague, but not as vague as “great” and “serve.” As a mission statement, it’s okay that the wording is a little loose. It isn’t meant to be a cookbook; it’s meant to be a flag stuck in the ground up ahead, so we can see where we’re trying to go. We need that, as the terrain and the weather are going to get pretty rough along the way.

Please note that I haven’t mentioned any buzzwords up to now. I left my bag of magic tricks in my other pants. If the client mentions buzzwords, my approach is to suggest that we focus on outcomes and not worry about specific methodologies or practices just yet. If they insist on talking about, say, “agile” or “lean,” I’ll acknowledge that those schools of thought offer a lot of good ideas, and we might discover some of them will be useful to us, but first we have to find out what we really need to do.

2. Assess the current state

The next step is to assess the current state of the organization to see whether the assertions in the mission statement are already objectively true. If they are, then all is well and the client doesn’t need any help. (I haven’t seen this happen to date, but anything is possible.)

There are three main reasons to assess the current state before proceeding with any recommendations for change:

  1. It helps us refine the parameters of the engagement — objectives, milestones, measurements, timelines. The initial parameters tend to be limited to the early stages of the engagement, such as defining the deliverables of the assessment itself. Going forward, we need to be on the same page about how progress will be measured.
  2. It provides a baseline against which to compare the performance of the organization as we make changes, so we can tell whether the changes are really “improvements.”
  3. It helps everyone involved understand why there is interest in improvement, and why outside help has been engaged. That is, it helps avoid the “Oh, no, here we go again!” issue.

There are many ways to assess an organization’s operations and procedures. My preference is to focus on delivery performance rather than on whether the group uses any specific techniques or practices. At this stage I want to understand the current process and also identify the root causes of any weak areas in the process. I tend to favor a short list of tools that I have found useful for this sort of analysis. I may change this list as I learn about additional techniques and tools, but as of this writing, these are the tools I rely on most for this purpose:

  • Value Stream Map
  • Causal Loop Diagram (also called Diagram of Effects)
  • Current Reality Tree
  • Mind Map

Your approach may differ; mine emphasizes direct interaction with individuals in the organization. I sit in with people in various roles and observe how they work. Trying not to interfere with their work flow, I ask questions to elicit information about how they work and why they work in the way they do. Throughout all this, I’m making notes to apply to the analysis tools I’ve chosen for the particular situation, and to connect dots that provide information about the informal networks and information flows in the organization. The way people actually work is more significant than the formal, documented process they are pretending to follow, although when there is a large disconnect between the two, that in itself can be significant.

Apart from the mechanics of the process, I also look at human factors. Do team members seem stressed out? Nervous? Evasive? What do they enjoy about their jobs? What do they dislike about their jobs? Often, a rough section of the delivery process can result in conflict or stress, and people aren’t aware of the cause. Smoothing out the process can sometimes eliminate sources of conflict and stress. The stress itself may be a symptom we can use to identify the problem.

At the management level in particular, I look for evidence of their mindset about time. Do managers focus on maximizing individual resource utilization, or do they focus on end-to-end effective delivery? There is also the question of budgeting or funding. How is work planned, funded, and tracked? Time management and financial management can enable a team to be flexible and responsive, or absolutely prevent it from being so, no matter what process framework and technical practices they use. A third aspect of management is the way ideas and initiatives flow from the business strategic planning level to the IT portfolio level, or to individual projects. Poor management in this area can result in thrashing or constant changes in priority or direction at ground level. If there are problems in these areas, we must either address them early or reduce management’s expectations of the potential benefits of any improvements in process or technical practices.

Finally, I look for structural problems in the organization. Sometimes, the way various job functions are organized creates friction, conflict, or redundant effort. There may be functional silos that do not cooperate with one another. Management typically hires a consultant or coach to improve the software development group because that is where the symptoms of problems are manifest. Quite often, the root causes of problems experienced by software development groups lie elsewhere in the organization. If this is the case, we need to know it in order to craft a roadmap for improvement that will actually help. (Have you ever heard someone say, “We tried [insert favorite Thing here] and it didn’t work?” They may have treated the symptoms and ignored the disease. “It” may have worked just fine; “it” just wasn’t the solution to the actual problem, so people still had the same pain points as before.)

Based on my assessment of the current state, I identify metrics that will provide useful information in the context of this particular organization. Because these engagements involve changing the way work and information flow through the organization, I avoid metrics that depend on any particular process model. To track the effectiveness of process improvements, we need metrics that yield comparable information as the process model evolves. In my experience to date, the metrics that are agnostic about the process model come from the Lean school of thought. Now I’m reaching into my bag of tricks, but I still avoid using any Lean buzzwords. I just explain how the metrics work and what they mean.
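
To make this concrete, here is a minimal sketch of what process-agnostic metrics might look like in code. It assumes the common Lean flow measures of lead time and throughput; the class, the function names, and the dates are illustrative, not a prescription:

```python
from dataclasses import dataclass
from datetime import date
from statistics import mean

@dataclass
class WorkItem:
    started: date    # when work began on the item
    finished: date   # when the item was delivered

def lead_time_days(item: WorkItem) -> int:
    """Elapsed days from start to delivery for one item."""
    return (item.finished - item.started).days

def average_lead_time(items: list[WorkItem]) -> float:
    """Mean lead time across completed items."""
    return mean(lead_time_days(i) for i in items)

def throughput(items: list[WorkItem], period_days: int) -> float:
    """Completed items per day over a reporting period."""
    return len(items) / period_days

# Example: three items completed during a 30-day period.
done = [
    WorkItem(date(2024, 1, 2), date(2024, 1, 12)),
    WorkItem(date(2024, 1, 5), date(2024, 1, 20)),
    WorkItem(date(2024, 1, 10), date(2024, 1, 28)),
]
print(average_lead_time(done))  # ~14.33 days
print(throughput(done, 30))     # 0.1 items per day
```

Notice that nothing in these calculations depends on how the work flowed between start and finish. That is precisely what makes the numbers comparable as the process model evolves.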

3. Define a roadmap for improvement

My thinking about process improvement is informed by three personal predispositions. Your approach may differ, and that is fine.

  • I like the metaphor of the chain. Each link in the chain has a different strength. The strength of the chain as a whole is equal to the strength of its weakest link. To strengthen the chain, we must strengthen its weakest link. If we strengthen some other link, we will have no effect on the strength of the chain as a whole. I don’t want the outcome to be, “We tried [insert favorite Thing here] and it didn’t work.”
  • I favor the idea of continuous, incremental improvement. That means a roadmap for improvement will be based largely on some flavor of plan-do-check-act (PDCA). I don’t think a single seagull-style intervention can result in useful and sticky improvements. The bag of tricks contains various PDCA techniques originating from various schools of thought and process improvement methods.
  • I agree with a piece of advice from Corey Ladas, who pointed out that we need to stabilize the current process, then introduce a change, then stabilize the revised process, then introduce the next change, and so forth. When we just throw in changes helter skelter, the result is chaos, not “improvement.”
    That said, I do understand that radical change is sometimes the way to go; however, as a general rule incremental change seems to be a less disruptive and more effective approach. Radical change can be effective when the organization is really in bad shape and has nothing to lose by disrupting its delivery chain in order to implement a completely different structure and process model. In most cases, however, the organization has to continue delivering at close to its normal rate even while implementing a program of improvement.

Because of these predispositions, the improvement roadmap will begin by focusing on the area that appears to be the weakest link in the organization’s delivery chain. If it happens to be some mechanical part of the process, it will be called out by the value stream map(s). If it happens to be a mindset problem, then it will be identified by personal observations of the way people interact, the expectations they have, and the metrics they choose. Having identified the root cause(s) of the weakest-link problem, we address those causes. The root causes may be identified by the Current Reality Tree, Causal Loop Diagrams, Mind Maps, or other tools, as well as direct observation. The specific remedies for the root causes come from the bag of tricks, although even now we can usually avoid using any buzzwords that might cause confusion or “resistance” based on people’s preconceptions, assumptions, or previous negative experiences. Then we let the revised process stabilize. At that point, we re-assess and identify the (possibly new and unpredicted) weakest link. We keep going forward iteratively to effect continuous, incremental improvement, using our metrics to check progress against the original goals. The bulk of the engagement will consist of executing (and progressively improving) this plan.

Some tools I find helpful at this stage are:

  • Five Focusing Steps (from Theory of Constraints)
  • DMAIC (from Six Sigma)
  • Intermediate Objectives Map
  • Evaporating Cloud
  • Future Reality Tree
  • Prerequisite and Transition Trees
  • Goal-based planning (working backward from a goal through its prerequisites to the present)
  • The A3 method

We might determine that certain training classes or workshops are advisable, and that the content has to be tailored for the particular client. It is also necessary to understand the effects of various methods, techniques, and practices. When we see a particular sort of problem, we need to be able to recognize which specific technique or practice is likely to alleviate the problem. There are no tools or frameworks or formal academic studies that can tell you this; it is a matter of experience. The analysis tools can help you determine where improvements are needed, but only your field experience can tell you how to effect the improvements. There’s no practical way to summarize this, as specific improvements may involve management practices, process model changes, and technical practices in analysis, testing, architecture, programming, database administration, network administration, security, user experience, interface design, and a range of other disciplines. The details depend on exactly what the particular organization needs. In my experience, the needs of a client organization may span a wider range of disciplines than I can address personally. In those cases, I will recommend others in the field whom I know to be proficient in specific areas.

4. Establish measurements

Before we can follow the roadmap, we have to know where we are at the outset, and we have to be able to track progress and recognize any wrong turns we might make along the way. That’s what the metrics are for.

Identifying metrics isn’t the same thing as establishing measurements. Now that we know which metrics we wish to track, we need to ensure that the relevant information is populated in the normal course of events in the work flow. The first direct change to the process is to plug new metrics into it. Everyone involved needs to understand what the metrics mean, how to provide input into them, and how they relate to ongoing process improvement in the organization.

Here I mean metrics we can use to monitor our progress in the improvement initiative itself. There will probably be additional new metrics that we introduce along with changes in the process model. The former must be process-agnostic, and the latter can be process-specific.
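
As a rough illustration of what “plugging metrics into the work flow” can mean in practice, here is a hypothetical sketch that appends a timestamped event whenever a work item changes state. The file name, the state names, and the item ID are invented for the example:

```python
import csv
from datetime import datetime, timezone

# Append-only event log: one row per state transition of a work item.
LOG_PATH = "workflow_events.csv"  # illustrative location

def record_transition(item_id: str, from_state: str, to_state: str) -> None:
    """Capture a state change as part of the normal flow of work."""
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            item_id,
            from_state,
            to_state,
        ])

# Called wherever the team already updates an item's status,
# e.g. from a task-board hook or a ticketing-system webhook:
record_transition("FEAT-42", "in_progress", "done")
```

Because the log captures raw transitions rather than anything tied to the current process model, the process-agnostic improvement-tracking metrics can keep drawing on it even as the process changes, while process-specific metrics can be layered on top as needed.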

5. Execute

Lather, rinse, repeat:

  1. Stabilize the process
  2. Introduce a change aimed at the weakest link
  3. Measure and track
  4. If things are worse, reverse the change
  5. If things are better, re-assess to identify the new weakest link
  6. Are we using the right metrics? Change if necessary

Summary

When we go into an engagement with the idea of implementing our favorite Thing, and not with the idea of helping the client solve their problems and achieve their goals, we’re likely to encounter resistance. Even if we have great faith in the Thing we wish to implement, we will do better for our clients and for ourselves if we first understand what problems need to be solved, and then apply the solutions that fit the problems. We will find many cases when our favorite Thing actually is a good solution to the client’s real problems, so there’s no need to force the Thing into situations where it doesn’t naturally fit.

I shared my own general approach just as an example. I don’t claim that this is the best or only approach. You probably have your own way of doing things. The key point is that we all should focus on the needs of the client and let that drive our consulting and coaching advice. It’s okay to have a favorite Thing. It’s not okay to assume it will cure all ills. When we come rushing at a client swinging our Thing like Thor’s hammer, it will only frighten them.