My personal agile journey

This is a sort of trip report. I started on a journey in 2002, and in the ensuing 10 years traversed a lot of territory, met many interesting people, and learned a great deal. The journey certainly changed me. Whether the change is for the better is still an open question. I’m talking about my journey with, alongside, around, and sometimes against the grain of the "agile" movement.

With the Agile 2012 conference just around the corner, I thought this might be an appropriate time for a personal retrospective. I’ve presented at the last five consecutive Agile conferences, and found them to be enriching experiences. This year’s event takes place about 2 miles from my former home near Dallas, Texas. It would be great to see the old familiar places and visit old friends in the area. It would be great, as well, to show some of the friends I’ve made on my "agile" journey around the town.

I won’t be there.

My "agile" journey began in 2002. In 2001 I accepted a job as an enterprise architect at a medium-sized US financial holding company that owned banks, mortgage companies, investment companies, and the like. The total size of the corporation was around 33,000 employees, and the IT department had about 1,300 people, including some 300-400 software development people (inclusive of all roles). The place was a bureaucratic nightmare, and the formal delivery process appeared to have been designed intentionally to burn up as much budget as possible while preventing anything whatsoever from happening. I didn’t know anything about Lean thinking at that time, but in hindsight I would guess the process cycle efficiency of their software development process was around 1% on a good day…and they didn’t have many good days.
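For readers unfamiliar with the term: process cycle efficiency is just the value-adding portion of total lead time. A minimal sketch of the calculation, with purely illustrative numbers (none of these figures come from the company):

```python
# Process cycle efficiency (PCE), a basic Lean measure: the fraction of
# total lead time during which value-adding work is actually happening.
# All numbers here are illustrative, not measurements from the company.

def process_cycle_efficiency(value_added_hours: float, lead_time_hours: float) -> float:
    """PCE = value-added time / total lead time."""
    return value_added_hours / lead_time_hours

# Roughly one working day of hands-on work buried inside about five
# months of calendar lead time (about 800 working hours) produces the
# kind of number described above:
pce = process_cycle_efficiency(value_added_hours=8, lead_time_hours=800)
print(f"PCE: {pce:.0%}")  # → PCE: 1%
```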

At the time I had about 24 years total experience in IT and had been a contractor and consultant for the previous 17 years. I had done a range of different things from writing code to managing multiple projects to managing a business unit that did technical training and provided a lab environment for a consulting firm. I had seen the insides of around 30 companies before I ever heard of "agile." I thought of myself as fairly senior. Today I look back on that experience and it seems like someone else’s life.

By 2002 I was ready to quit. I don’t mean quit the job; I mean quit the IT field. It had devolved into nothing more than a soul-crushing bureaucracy. In my job, I had no contact with stakeholders or any downstream consumers of my work products. I had no idea whether the things I did added any value to anyone, or if people downstream from me had to re-work everything I had handed off. I felt isolated and disconnected from the business and from the people who used my output. I knew that many managers in the lines of business were frustrated with the whole routine. They couldn’t get anything out of the IT department in a reasonable time or at reasonable cost. In my experience up to that time, this was what corporate IT was all about. I could change companies, but that wouldn’t change the nature of IT work. It wasn’t worth getting up in the morning anymore.

My wife and I were looking into small business franchise opportunities. We considered opening a box-and-mail storefront, or a venetian blinds store, or one of those places where kids go to play arcade games, bowl, and have birthday parties. Nothing super-interesting, I guess, but at least nothing soul-crushing, either.

It was at that time that a colleague approached me with an unusual proposition. He said he knew I was thinking of quitting, and he asked me to join him, before I did, in a project he was setting up. He wanted to examine the IT function of the business in detail, understand its costs and the value it brought to the enterprise, and then devise ways to improve it. A mid-level manager in the IT department had agreed to provide "air cover," which meant that if anyone asked how we were spending our time, he would tell them we were on a project of his.

If we were ultimately successful, then the whole thing would have been his idea; otherwise, he would disclaim any knowledge of what we were up to. He reasoned that if we were successful, he could ride that success to a big career boost; and he could hedge his bet in case things didn’t work out well. It sounded intriguing. It was politically risky to do something like that in a company that operated as that one did, but at that stage I was beyond caring about political risk. I joined the group of six and we embarked on the covert mission to fix enterprise IT.

Before long we determined that one of the core problems with the delivery process was the large number of narrowly-defined functional silos, the formal sign-offs between the silos, and the cumbersome system of quality gates and approvals. The people who created work products were assigned to as many as 30 projects concurrently, depending on role. All they could do was to copy and modify boilerplate artifacts as fast as possible and hand them down the line.

The people who signed things off had no idea what they were signing off, and so every quality gate was an exercise in blame-shifting and theatrical displays of "caring," culminating in sign-off "with reservations" and a show of phony reluctance. We further determined that a key reason no one understood anyone’s business needs or had any feedback about results was that no one had an overarching view of any initiative. Everyone’s view of the world was limited by the functional silo within which they worked.

In fact, if you examined the various documentation artifacts closely, you would find them to be incomplete, inconsistent, and contradictory. In some cases they were obviously wrong, and in some cases the authors didn’t bother to cover their copy-and-paste tracks at all. It didn’t make any difference. No one ever received any feedback about work that they had handed off.

The IT portfolio was managed in this way: In January, 300-odd projects were launched simultaneously. They raced to see which ones would be canceled first. In a given year, about 70% of the projects that were started in January were never finished.

We figured it would be better if a single team that included people with all the skills necessary for delivery took ownership of an initiative from inception through production support, and that the team would just focus on the one initiative and not be assigned to 30 projects at a time. That way, a group of people with shared responsibility and accountability would have all the information they needed to understand the effectiveness of the solution and of the delivery process itself, could focus on delivery of just the one solution, and would be in a position to ensure stakeholders received the benefits they were paying for. We also reasoned that if key stakeholders were involved throughout the project, they could more effectively steer the solution to align with their needs, rather than depending on an abstract description of requirements written in advance. Furthermore, we reasoned that if the organization would start, say, 10 to 15 projects in January and get them done, then they would have resources available to tackle the next 10 to 15 projects in the portfolio, and so forth. This could enable the department to complete a higher percentage of the initiatives in the portfolio.

Recognize any of that? Sure, I knew you would.

These things seemed sensible to us. We had never heard of "agile" or "lean." They just seemed sensible on the merits. The problem was that this approach violated every written and unwritten rule in the organization. We needed a project sponsor who was willing to do something "on the side."

We found a line of business manager who had an idea for a business initiative that required IT support, and who was willing to take a chance on this unproven approach, working under the radar and going around the formal IT process. She controlled enough budget to fund a small project without attracting the wrong sort of attention from the bureaucracy. Some of the guys on the technical staff had stashed decommissioned servers and other equipment under their desks over the years, just in case they might need them someday. "Someday" had arrived. We had to get someone involved from the infrastructure group because we needed a BlackBerry server. He thought it was cool to be part of a clandestine operation. We started working on her project.

It was only then that one of us discovered the Agile Manifesto. He showed it to the rest of us, and we immediately realized it stated the same values and principles that we were coming to see as critical success factors for effective software delivery in a corporate IT environment. We reasoned that if people had gone to the trouble to write all this down, then there must be others out in the world who already knew how to approach the work in this sensible way. Rather than trying to re-invent the wheel, we looked for help. We found ThoughtWorks.

ThoughtWorks had a proven method for approaching software development in this way. We asked them to teach us their method. They explained that wasn’t their business model. They just placed teams of their own people at clients to carry out specific projects. Then the teams moved on to other assignments. After some back and forth, we agreed that we would create a project team comprising 50% ThoughtWorkers and 50% our own people so that we could start to learn their method. There were some hiccups as we filtered through a few ThoughtWorkers before we ended up with individuals who were interested in mentoring us as well as in delivering code. Then we were good to go. That’s when I learned test-driven development, continuous integration, pair programming, and all the rest of it.

One thing we took care to do was to measure the relative value delivered by the nascent "agile" process as compared with the established process. We did this by having each project sponsor submit a request to the IT department through the normal channels. This resulted in a bid or estimate from the IT department for the proposed project. We used that as the baseline to compare the costs and benefits we actually delivered with the new process. It was a comparison of an estimate with an actual result, but it was as close to apples-to-apples as we could come. If anything, the actual result from the traditional process would have cost more and taken longer than they estimated. That was the typical pattern historically, anyway.

Over the course of a dozen or so small-scale projects, we delivered on average 9x the return on investment that the IT department estimated it could have delivered. One project sponsor remarked (in an executive team meeting), "I’ve been working with IT departments for 32 years, and this is the first methodology I’ve seen that worked."

There were a couple of outliers.

On the positive side, one small project delivered 27x the estimated ROI. This was mainly due to the fact that the IT department padded their estimate to cover possible challenges in implementing a new technology, while we just went ahead and did it. It was only "new" to this company; the technology was already in use in thousands of other firms around the world. It wasn’t really a "risk." The IT department just didn’t want to get blamed for anything, so they padded. It helped our numbers.

The outlier on the negative side was a large-scale development project. It was our first attempt at scaling agile beyond a single team, and we stumbled. We tried to scale by adding individuals to a single team. It caused a lot of friction and stress and team members were relieved when the project was over. We learned that the way to scale "agile" development is to use multiple small teams. That project only delivered 4x the ROI that the IT department estimated they could have delivered with their process. In other words, by far the worst result we achieved was 4x better than the IT department believed was possible.

By the end of the three-year growth period, IT management had decided the new approach was worthwhile. They took ownership of it, labeled the traditional procedures with "agile" buzzwords, dismantled the agile group, and pissed everyone off. Within 6 months, only 4 of the 60-70 people who had taken part in the "agile" initiative were still employed at the company. After another 6 months, the IT managers had "leveraged" their "agile expertise" to land better-paying jobs elsewhere. All institutional knowledge of lightweight methods was lost.

Personally, I used the three-year growth period as a sort of "university of agile." Impressed with the results from our first pilot project, I set out to learn the method in depth. I went out of my way to be assigned to each of the roles on at least one real project. This was feasible for all the technical roles, as they were all under the same management hierarchy. That covered architect, team lead, programmer, and tester.

Getting assigned as a business analyst proved to be a political challenge, as they were under a different management hierarchy. The business analysis group was very closed and secretive. When I finally managed to land an assignment as an analyst, the other analysts were extremely suspicious. Why would I want to move from programming to analysis? they wondered. Was I up to something devious? It turned out that they had good reason to be secretive about their work. They had no idea how to "analyze" anything. They spent all their time formatting documents and diagrams. For instance, we spent three hours in a meeting to decide which fonts to use for User Stories.

In one case, the analysts had spent four months charting out swim lane diagrams and so forth for a proposed system to support the sale of vehicles the bank had claimed as collateral for auto loans in default. At the time, there was a dip in the economy and a lot of people defaulted on auto loans. Most of them declared bankruptcy and would be out of the credit market for the next seven years. Others learned a lesson and weren’t interested in high-end loans for cars. There was no need for a new system. It was only a temporary bump in workload for the collateral sales department. I suggested the manager of that department should just hire a temp for a few weeks to put the cars on eBay. Problem solved. Savings: $8 million. The analyst group hated me viscerally for that.

In another case, the analysts had proposed that a certain department implement an enterprise-class fax system to handle inbound loan documents. They had spent four months charting out swim lane diagrams and so forth for the new solution. When I visited the work site, I learned that relatively few documents arrived via fax. Of those, all were received on the same fax machine and picked up by the same middle-aged woman. She had to walk about 50 feet to the fax machine and 50 feet back to her desk a couple of times daily. That was the business problem to be solved. I unplugged the fax machine, placed it on her credenza, and plugged it in. Problem solved. Savings: $2 million. The analyst group hated me viscerally for that.

In a third case, the analysts spent four months charting out swim lane diagrams and so forth that ostensibly described a proposed new solution, but turned out to be a description of the current state. Observing one of the workers using the old system, I noticed that she typed a number from a query in a Windows-fronted application into a field in a CICS application, ran a query on that application, and typed the result into another field in the first application. I asked her if it would be easier for her if the system just went and got that value under the covers for her.

She was surprised that such a thing was possible. The analysts hadn’t even thought of it, in four months of "analysis." We had WebMethods in house, and I had recently done something similar on another project, in the role of programmer. I knew it would not increase scope at all. It was a half-hour task. The users estimated it would save them 30 seconds per document they processed, times 200 documents per day, times 40 workers. That’s 4,000 person-minutes per day, or one million person-minutes per year, at an average fully-burdened employee cost of $9/hour. Savings: $150,000 annually. The analyst group hated me viscerally for that.
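The arithmetic behind that figure is straightforward; here it is spelled out. The 250 working days per year is my assumption, though the "one million person-minutes per year" in the story implies roughly that number:

```python
# Back-of-the-envelope savings from removing one redundant lookup step.
# The first three figures and the hourly cost are the ones cited above;
# the working days per year is an assumption consistent with them.
SECONDS_SAVED_PER_DOC = 30
DOCS_PER_DAY = 200
WORKERS = 40
WORKING_DAYS_PER_YEAR = 250
COST_PER_HOUR = 9.0  # average fully-burdened employee cost

minutes_per_day = SECONDS_SAVED_PER_DOC * DOCS_PER_DAY * WORKERS / 60
minutes_per_year = minutes_per_day * WORKING_DAYS_PER_YEAR
annual_savings = minutes_per_year / 60 * COST_PER_HOUR

print(f"{minutes_per_day:,.0f} person-minutes per day")    # → 4,000
print(f"{minutes_per_year:,.0f} person-minutes per year")  # → 1,000,000
print(f"${annual_savings:,.0f} saved annually")            # → $150,000
```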

There were many other instructive experiences, as well. After the first few projects I became one of the internal people who helped new teams ramp up and learn the method. It seemed natural as I had direct experience in each role on an "agile" team. That was the time when I discovered my love for helping others excel; it was the beginning of my reinvention as a coach and mentor.

I visited one team that was mid-way through a project, and discovered that one pair of programmers had been working together non-stop for several days. That raised a flag in my mind. Why had they not switched partners in all that time? Why had no one mentioned it to them? They were stuck on a front-end development challenge. At every daily stand-up, they reported that they believed they could finish the task by the end of the day. Finally, they delivered something, it passed QA testing, and the Product Owner accepted it.

They delivered a solution after some three weeks of thrashing. The UI presented a list of selections. When the user made a selection from the list, a second list was populated with relevant choices. When the user made a selection from that list, a third list was populated. In the solution the pair delivered while I was there, every time you clicked on a checkbox, the screen went completely blank for a quarter of a second. It turned out that they were sending a request to the server on every click.

I chastised everyone. The pair that got stuck never asked for help. Two other team members who were proficient at front-end development never stepped up to offer help. The PM never questioned the first pair about dragging the User Story on for days. The analysts never included non-functional requirements. The testers didn’t check for usability or non-functional requirements. The Product Owner used business software packages all the time and should have known better than to accept the solution with that odd behavior.

The analysts’ and testers’ excuse was that the canonical User Story format doesn’t say anything about non-functional requirements. The unhelpful teammates’ excuse was that they were interested in learning to work on the back-end code and didn’t want to bother with JavaScript. The stuck pair’s excuse was that it was "impossible" to test-drive JavaScript and "impossible" to handle that UI behavior entirely in the browser. The Product Owner’s excuse was that she didn’t understand the technology well enough to argue with the programmers. The PM’s excuse was that he thought the same pair should stick together until they figured out the problem without assistance, as it was "their" problem. I explained that the team as a whole was accountable for results, and that supporting non-functional requirements is a basic requirement of the job even if their favorite "agile" comic book didn’t have a picture of a happy dancing clown doing it.

I sat down then and there, located and downloaded a JavaScript unit testing framework, and coded the solution three different ways. It took about 30 minutes. I demonstrated the options to the Product Owner and asked her to choose which UI style suited her: checkboxes, radio groups, or drop-downs. The lists were short and the values were static, so there was no need for any complexities such as Ajax calls or magic widgets that populate themselves. It took another hour to get the code integrated and all tests passing. The analysts hated me viscerally for it. The testers hated me viscerally for it. The programmers hated me viscerally for it. The PM hated me viscerally for it. The Product Owner thanked me.
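The underlying idea, for anyone facing the same problem: when the option lists are short and static, the dependency between them is just a nested lookup table that can ship with the page, so selecting an item never needs a server round-trip. A sketch of that structure in Python (the real fix was browser-side JavaScript, and these category names are invented for illustration):

```python
# Nested lookup table driving three dependent selection lists. With
# static data this small, the whole table can live client-side; no
# request to the server is needed on each click. The banking-flavored
# category names are made up for the example.
CHOICES = {
    "loans": {
        "auto": ["new", "used", "refinance"],
        "mortgage": ["fixed", "adjustable"],
    },
    "accounts": {
        "checking": ["personal", "business"],
        "savings": ["standard", "money market"],
    },
}

def second_level(first: str) -> list[str]:
    """Options for the second list, given the first selection."""
    return sorted(CHOICES[first])

def third_level(first: str, second: str) -> list[str]:
    """Options for the third list, given the first two selections."""
    return CHOICES[first][second]

print(second_level("loans"))          # → ['auto', 'mortgage']
print(third_level("loans", "auto"))   # → ['new', 'used', 'refinance']
```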

On another project, the PM approached me excitedly one day to show me a query he had developed for Visual Studio Team System. He could track the fact that Roger had worked on User Story #129 on Wednesday. So what? I asked. Well, in Tuesday’s stand-up Roger had claimed he would work on User Story #129 on Tuesday. He didn’t work on it until Wednesday. Did the team as a whole deliver on its iteration commitment? I asked. Yes, he replied. So, there’s no problem? No, there’s no problem. Why are you tracking that information, then? Well, somebody might want to know someday. You mean, five years after the project is done, somebody will want to know why Roger worked on Story 129 on a Wednesday instead of a Tuesday, even though it had no impact on anything? What sort of business decision would that information support? Give me an example. The PM turned a lovely shade of red. He turned and strode away, receding into the distance like a beautiful red sunset. He didn’t look me in the eye or speak to me for the next six months.

You might point out that isn’t a particularly empathetic coaching style. I told you, that was when I was first learning. I’ve come a long way since then. I’m empathetic as hell now, dammit! I just don’t suffer fools gladly, that’s all.

Those early "agile" experiences conditioned my thinking about the approach in certain ways. It is a very different introduction to "agile" than the majority of people in the community have had. Most of them started with small projects in a small setting. The idea of extending "agile" to the enterprise has been a challenge throughout the "agile" movement, and remains a challenge to this day. There are books and roadmaps and frameworks and what-not for scaling "agile." Corporate IT is the environment where I learned this stuff. It seems perfectly normal to use "agile" methods "at scale." When people ask whether "agile" can possibly work in that environment, it always strikes me as a very strange question. With my background in enterprise architecture, questions about whether "agile" can work for architectural work also strike me as very strange. It’s hard to imagine it not working.

After IT management took ownership of the "agile" practice and destroyed it, I looked for work doing what I had discovered to be my passion: Helping others excel. I paid my own way to a CSM course taught by Ken Schwaber, to help qualify myself for the kind of job I wanted. I joined a company called Valtech Technologies that was getting into "agile" transformation and coaching pretty heavily at the time. That was mid-2006. That was the time I started to get involved with the "agile" community, too. I gave an experience report about the company described above at XP Days London, and a talk on quantifying the business value of "agile" methods with Jurgen Ahting at XP Days Germany. Since then I’ve participated in conferences and user group meetings whenever I could, although clients tend not to appreciate absences from work for such things.

I had a great time coaching, delivering, and learning with Valtech. In 2008, their US operation overreached itself in the marketing area, spending a lot on marketing events that didn’t generate the hoped-for sales leads. By fourth quarter, the company had no cushion to absorb the blow of the financial crisis. When clients cancelled initiatives and looked for places to hide until the economy rebounded, Valtech US laid off 20-25% of its workforce, including me. I was in the process of being picked up by Valtech UK when the ripple from the US crash hit Europe. Valtech UK went into layoff mode, too.

I’ve stayed as independent as possible since 2008, and until 2012 didn’t really have much difficulty finding work. It was during these years that I started to take coaching more seriously, and attended coaching camps and seminars to improve my "people skills." I incorporated Lean principles and the fledgling Kanban method into my work as a consultant, trainer, and coach, and I’ve found it to be extremely powerful and beneficial.

I became, if I may say so, pretty good at coaching teams and helping delivery organizations improve their effectiveness. In doing so, I discovered another learning opportunity. Most of the emphasis in "agile" literature and presentations has to do with introducing the method to people who are unfamiliar with it in organizations that have never tried it. Most of those who do coaching work in this area have a lot of experience doing that, but little experience with what happens when you go beyond the introductory level.

Two or three of the teams I’ve coached improved themselves to such an extent that they pushed the edge of the envelope with regard to some of the canonical "agile" practices. I’ve heard it said that you really can’t know how far is too far until you go there. I can tell you it’s possible to go too far with "agile." When a team becomes so proficient at delivery that it is out of sync with the rest of the organization, it causes friction. Organizational friction leads to undesirable outcomes.

I’ve learned that it is important to pace the improvements in process and in technical practices so that the whole organization improves together. Otherwise, the high-performing team will not last long, and the people will become disillusioned with the whole thing. Why have I experienced this with only two or three teams? Because I learned my lesson. Now, I hold some teams back and encourage others to move ahead, trying to keep improvement synchronized across the organization.

It was also from 2008-2012 that I became aware of Lean thinking and found it to be a very useful school of thought for the goal of effective delivery of value. I’ve gotten more and more interested in this school of thought, as well as Systems Thinking, Beyond Budgeting, Real Options, and other practical approaches to delivering value. And that’s where I started to run afoul of the "agile" community. They don’t like to hear "agile" characterized as one school of thought among many that might be useful under certain circumstances. Them’s fightin’ words.

One area I became interested in from 2008 onward is metrics. I’m very interested in process improvement, and I think measurement is important. I might even go so far as to say that when changing a process, if you don’t have measurements, you don’t have anything. I wanted to learn how to measure effectively, both for steering initiatives and for guiding continuous improvement.

The more I looked into this, the more I realized that few people in the IT industry (not just the "agile" community) have paid much attention to metrics during their careers. I started to share what I was learning on the job at conferences and user group meetings, and was surprised at the strongly positive response. At one of the Agile conferences, my session on metrics was standing room only, with people standing along the walls and crowded around the door of the uncomfortably warm room. When the overhead projector failed, I continued the talk using flip charts, and nobody left. The organizers had to throw us out of the room when we went over time.

I got a similar response giving a talk about metrics to meetings of the APLN in Washington, DC, and Oklahoma City, OK. Both groups asked for repeat appearances. Both groups wanted a follow-up session going into more depth. I did a webinar on the subject, and although it doesn’t have high production values, people bought recordings of it for the next couple of years. At the moment I’m working on a book about software development metrics. Metrics seems to be a topic of interest, although there is nothing sexy or fascinating about it. It just happens to be a critical success factor. It also happens to be something the "agile" community just doesn’t seem to focus on. It’s ironic, as some of the most useful metrics for adaptive development were devised to support "agile" methods; things like velocity and earned business value, for instance. Strangely, the majority of agile practitioners don’t really understand how to use those metrics.

And that’s another way I’ve run afoul of the "agile" community recently; at least, the US community. Combining my interest in Lean/Kanban and metrics, I proposed a talk this year about how to extract useful metrics from Kanban boards and use them to guide continuous improvement. I’ve presented the same material in the form of a training class to clients, with very positive feedback. The talk has been accepted at one conference later in 2012 and is receiving positive feedback from reviewers for a second, but it has not been accepted at any US conference. Feedback on the submission to Agile 2012 suggests that the organizers just don’t see any value in spending time on metrics. I suspect this is a corollary to the stereotypical "agile" view that management in all its manifestations is the spawn of the devil. Metrics is seen as a management thing, so there you have it. Besides, metrics are boring. They’re clearly more boring than sticking colorful slips of paper to the wall and thinking up clever team names. (By the way, one team that I coached wanted to call themselves Sharks With Frickin’ Laser Beams, and the PM wouldn’t allow it. No sense of humor, I guess.)
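As a taste of what that talk covers: the two simplest numbers a Kanban board yields are lead time per card and throughput, both computable from nothing more than start and finish dates. The dates below are invented sample data:

```python
# Lead time and throughput from a Kanban board's card timestamps.
# The card dates are invented sample data, not client figures.
from datetime import date

cards = [
    # (started, finished)
    (date(2012, 3, 1), date(2012, 3, 9)),
    (date(2012, 3, 2), date(2012, 3, 14)),
    (date(2012, 3, 5), date(2012, 3, 12)),
    (date(2012, 3, 8), date(2012, 3, 15)),
]

# Lead time: elapsed days from start to finish, per card.
lead_times = [(done - start).days for start, done in cards]
avg_lead_time = sum(lead_times) / len(lead_times)

# Throughput: cards finished per day over the observation period.
period_days = (date(2012, 3, 15) - date(2012, 3, 1)).days
throughput = len(cards) / period_days

print(f"average lead time: {avg_lead_time:.1f} days")   # → 8.5 days
print(f"throughput: {throughput:.2f} cards per day")
```

Watching how those two numbers trend from iteration to iteration is where the continuous-improvement signal comes from.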

Here are some lessons I’ve learned in my journey so far:

  1. When changing a process, if you don’t have measurements, you don’t have anything.
  2. Be sure you are measuring meaningful things. Don’t waste time measuring useless things. Life’s too short. So is your project.
  3. Organizational structure has more to do with positive/negative outcomes than any other factor, including process framework and technical practices.
  4. Business people are already "agile" because they are responsible for revenue generation. IT managers are a stumbling block because they are responsible for burning down a budget someone else gives them.
  5. There’s more bang for the buck in scaling down than scaling up.
  6. Scale down by decomposing programs into smaller projects if you can.
  7. Scale up by defining workstreams to be as independent as possible and forming multiple small teams rather than one large team.
  8. If you build larger and larger teams, they will tend to split apart naturally when they reach a size of 12-15 people. You might as well do that on purpose rather than waiting until it happens to you in an uncontrolled way.
  9. Throwing a bunch of people into a room doesn’t make them a "team."
  10. A lightweight management process doesn’t justify a lack of engineering rigor at ground level.
  11. At least one side of the Iron Triangle (scope, schedule, budget) is going to move no matter how carefully you plan. You might as well control that movement rather than waiting until it happens to you in an uncontrolled way.
  12. Don’t mix infrastructure development with application development in the same project.
  13. There’s no such thing as a "re-write." Don’t kid yourself.
  14. When changing a process, if you don’t have measurements, you don’t have anything.
  15. The ScrumMaster role (by whatever name) is not the same as the Project Manager role. PMs will not evolve into SMs, despite some claims (or hopes).
  16. The Generalizing Specialist model is not that difficult. For typical business application development, any reasonably intelligent person can perform the work of analysis, programming, testing, project management, and the less-specialized tasks of database administrators, network administrators, and others. Of those roles, all require specific skills that have to be learned, programming (in which I include technical design) has the most challenging learning curve, and none is out of reach for a person who really wants to try. Most people just don’t want to try. They’d rather spend the day doing whichever tiny slice of development work happens to be the most fun for them, so they act as if that slice is a highly arcane specialization in its own right and no one else could possibly comprehend it short of a gazillion years experience. That’s bullshit, but I’ve learned to leave people alone and let them enjoy their jobs. That’s more important than maximizing mechanical efficiency.
  17. Regardless of team organization (hierarchical vs. self-organizing), whoever has the responsibility to make decisions must develop the knowledge and skill to make the right decisions. Whether a team lead has that responsibility or it’s a group responsibility, good decisions are not automatic. Like any other skill, it takes guidance and practice to develop.
  18. Self-organization doesn’t automatically mean a team of peers. Team members may recognize that their skills are unequal, and choose a lead.
  19. Hierarchical team organization doesn’t automatically preclude a team of peers. The lead may recognize that team members are skilled enough to function autonomously.
  20. There are differences between self-management, self-organization, and self-assembly. Self-organization doesn’t mean anarchy.
  21. Some development techniques are useful in nearly all contexts while others are sensitive to context. Dogmatists can’t tell the difference. Therefore, dogmatists are not interesting.
  22. Don’t try to do too many things at the same time.
  23. "Agile" is not an all-or-nothing proposition. Feel free to pick and choose individual ideas and practices from "agile" and other approaches according to your own needs. Just be aware of cases when two practices conflict with one another.
  24. "Agile" is a means to an end, not an end in itself. The end is effective delivery of value through IT work. "Agile" is one school of thought that can be helpful in achieving that end.
  25. Direct collaboration with stakeholders is nothing new. We used to work that way in the 1970s, before IT became bureaucratized. "Agile" is Back to the Future.
  26. When changing a process, if you don’t have measurements, you don’t have anything.
  27. When changing a process, if you don’t have measurements,
  28. you
  29. don’t
  30. have
  31. anything.

As of today, I’m still keenly interested in discovering ways to improve the effectiveness of value delivery through IT work. Meanwhile, the US "agile" community seems to be mainly interested in the Late Majority and Laggard groups, who are in the process of catching up with the rest of the industry by stenciling the word "agile" on their office furniture. To appeal to that market, the US "agile" community mostly offers standardized packages of training and coaching based on first-generation "agile" thinking to help their clients prepare for the year 2001, which might return someday, what with the universe being such a mysterious place, and all. Of course, there are individual exceptions. In contrast, the "agile" communities of Europe and Latin America continue to be open-minded, innovative, and interested in exploring the leading edge of good practices in software delivery.

At the same time, the so-called "post-agile" folks are eager to throw out the "agile" baby with the bathwater, offering absolutely nothing of value as an alternative. I’m not in that community, either.

So my journey continues. It won’t take me to Dallas this year, though. Instead, it’s taking me along the same roads as the Lean/Kanban community and the Stoos movement. I still have my "agile" tools with me. I haven’t abandoned them, but I don’t see every problem as an "agile" nail.

4 thoughts on “My personal agile journey”

  1. Dave,

    Thanks a lot for sharing your stories. They really resonate with me. My journey so far has been very similar: from an IT architecture background into agile in large financial organizations.

    And I am also one of those heretic practitioners who dares to look into things like Lean Startup and see what they could bring to the agile table. It turned out almost all the agile experts say everything is already there.

    I have mentally checked out of the “agile” community as well. Going to try and get this Stoos movement off the ground.

    Good luck with the rest of your journey.

  2. Congratulations on your journey. I like #13 the most. I also agree with measurement. I prefer metrics like Google Analytics. If no one is using your product, nothing else matters.

  3. Dave,

    Great article! One question – what kinds of measurements do you suggest, when changing a process? In my work, each project is bespoke and bears almost no resemblance to the project before (different team, different team size, different domain, different languages, different contract type and timescale, different sales model), so even meaningfully comparing two projects using what is (supposedly) the _same_ process is difficult…?

    1. David,

      Thanks for the kind words. Some of the metrics commonly used to track software development work are dependent on one or more of the three dimensions of management that I suggested in previous posts (Iron Triangle management, process model, and/or delivery mode). When we change our process, we’ll be changing those things. Therefore, we can’t use those metrics to measure the effectiveness of the change, because they won’t apply to both the old and new processes. So we need metrics that are agnostic about how the work is done.

      I like to use metrics derived from the Lean school of thought for this, as they only measure results and have no dependencies on the details of the process. You can find lots of info online about metrics like throughput, cycle time, lead time, process cycle efficiency, cumulative flow, queue depth, and so forth. You’ll want to take baseline measurements first and then track the effects of each process change you make to be sure the change resulted in improvement.

      You’re looking for how much work is delivered per unit of time (throughput), length of time it takes to deliver a unit of work (cycle time), total time to market (lead time), proportion of lead time in which value is added to the product (process cycle efficiency), where the constraint or bottleneck is in the process, queue depth (accumulated WIP inventory) and so forth. Those metrics mean the same things no matter what sort of process we use or which specific development practices we use.
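      To make those definitions concrete, here’s a minimal sketch of how the flow metrics above might be computed from work-item timestamps. The item dates are hypothetical, and the only assumption is that each work item records when it was requested, started, and delivered; nothing in the calculation depends on how the work was done, which is exactly why these metrics survive a process change.

      ```python
      from datetime import date

      # Hypothetical work items: (requested, started, delivered) dates.
      items = [
          (date(2012, 6, 1), date(2012, 6, 4), date(2012, 6, 8)),
          (date(2012, 6, 2), date(2012, 6, 6), date(2012, 6, 13)),
          (date(2012, 6, 5), date(2012, 6, 11), date(2012, 6, 15)),
      ]

      # Cycle time: start of work to delivery, per item.
      cycle_times = [(done - started).days for _, started, done in items]

      # Lead time: request to delivery (total time to market), per item.
      lead_times = [(done - requested).days for requested, _, done in items]

      # Throughput: items delivered per week over the observed window.
      window_days = (max(d for _, _, d in items)
                     - min(r for r, _, _ in items)).days
      throughput_per_week = len(items) / (window_days / 7)

      print(sum(cycle_times) / len(cycle_times))  # mean cycle time in days
      print(sum(lead_times) / len(lead_times))    # mean lead time in days
      print(throughput_per_week)                  # items per week
      ```

      A baseline built this way before a process change can be compared directly against the same numbers afterward, regardless of which practices changed in between.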

      You mention comparing projects that have different characteristics, although they use the same general process. Lean metrics are useful for this for the same reasons they’re useful for tracking process improvement. They aren’t dependent on those implementation differences.

      I hope that helps,
      Dave
