Although “agile” has been around a long time, and Scrum even longer, many large organizations are only now undertaking “agile transformation” initiatives. People in those organizations who have not had experiences elsewhere with “agile” or Scrum are on the learning curve. I’ve noticed a handful of challenges that seem to be fairly common in these organizations.
Maybe it will be helpful to mention some of the common points of confusion and try to clarify them. I hope so, anyway.
What’s the goal?
In large organizations, people in the midst of an “agile transformation” often assume the goal of the transformation is to Do All The Things® “agile” calls for, according to their company’s definition of what “agile” calls for.
In most cases, however, companies that are willing to invest time, money, and effort in changing the way their software development and operations activities are done have experienced negative business impact from the ways those things have been done in the past. The goal of the “transformation” is often to improve time to market, product quality, working conditions, operating costs, customer satisfaction, and the organization’s ability to respond to changes in the market. Some organizations have more tactical goals, such as improving planning predictability or enhancing test and deployment automation.
“Agile” is a means to that end, and not the end in itself. Scrum, SAFe, LeSS, and other process frameworks do not automatically achieve the company’s business goals. They are only tools. So, it will be useful for you to find out what the business goals of the transformation are in your organization.
When you learn to swing a hammer, is it because it’s fun to swing a hammer or because you want to build something? It’s the same with your organization’s “agile transformation.” The point isn’t just to go through the motions of “agile” activities, but rather to achieve certain business goals.
Are “scrum” and “agile” the same thing?
Scrum is a lightweight process framework designed to support any sort of product development. It was defined in the early 1990s based on seminal work published in 1986. It was quickly adopted in the software development world, and is one of the methods considered compatible with “agile” principles as defined in the Agile Manifesto, published in 2001.
“Agile” as such is both broader and less detailed than Scrum. Some call it a “mindset.” In any case, Scrum and “agile” are not just two words for the same thing. Agile is a general approach. Scrum is a specific process. Scrum is the most popular “agile” method owing to its certification program, but it is not the only way to apply “agile” principles. By the same token, carrying out the standard Scrum events by rote without learning to think in an “agile” way will not make your team “agile.”
What does “iterative and incremental” mean?
This is a buzz-phrase you’ll hear often from agilists and Scrum enthusiasts. The word iterative refers to iterative software development. In contrast with linear development (sometimes called “waterfall”), we revisit user needs again and again and gradually build up a useful solution for them.
Each pass through the work is an “iteration.” The advantage over a linear approach is that we learn as we progress, and we can refine the solution to ensure close alignment with user needs, and to avoid over-building a solution with more features than the users want.
What’s a “solution increment?”
The result of each iteration is a solution increment. That’s the “incremental” part. Each solution increment is usable and provides some degree of value to users. We continue to build up the solution incrementally until customers or users are satisfied with it, and everyone feels that we’ve invested enough in it to move on to a different initiative.
In large organizations, this aspect of agile methods can lead to confusion and friction. Often, it just isn’t feasible for an individual team to deliver anything useful to end users or external customers. The friction occurs because Scrum was originally designed to support a single team that can operate autonomously from other teams, and that has ready access to a knowledgeable and empowered decision-maker with responsibility for the business value of the product under development.
That situation is rare in large organizations. It’s far more common to find teams that perform only a subset of the work necessary to move a solution increment forward all the way to production, and/or the team is responsible for only one part of a larger solution. It may take months or years for the organization to reach a point where individual teams have all the skills and system accesses necessary to take an idea from concept to cash, and the technical environment has been set up to enable such teams to operate without dependencies on special teams such as infrastructure engineering or database support. In some cases, it may never happen to the fullest extent agilists would prefer.
What does “inspect and adapt” mean?
Many models for continual or iterative improvement exist, such as Plan-Do-Study-Act (PDSA), the Five Focusing Steps (from Theory of Constraints), or Observe-Orient-Decide-Act (OODA). Scrum’s improvement activity is called inspect and adapt. It means that we observe (inspect) our process and look for ways to improve it (adapt).
The typical way “agile” teams do this is through a formal event, usually called a retrospective, that occurs once per iteration. Teams also inspect and adapt during their daily scrum (or daily stand-up) and their everyday collaboration.
In many large organizations, traditional management approaches have called for teams to learn a prescribed method and apply it rigorously, without variation. In the “agile” world, we want to do the opposite of that. We want to pay attention to how effective our process is for achieving business goals, and frequently look for ways to improve.
Some “agile scaling” frameworks define additional points in the process for inspection and adaptation, such as SAFe’s Innovation and Planning Iteration, also known as an Innovation Sprint. LeSS defines a Team Retrospective and an Overall Retrospective. Your organization may have developed its own unique “agile” process, and it may include similar activities, even if the names differ.
What’s a “time-box?”
A time-box is a fixed time interval in which a given task or set of tasks is to be carried out. This concept seems to be especially confusing for teams in large organizations that are attempting to apply “agile” principles.
Taking Scrum as an example, there are specific events defined in the process that teams are expected to carry out at specific points during each sprint (iteration). All Scrum events are time-boxed.
The sprint itself is time-boxed; Scrum is based on a variant of the iterative process known as the time-boxed iterative model. In some earlier methods, such as Spiral or the Rational Unified Process (RUP), iterations could vary in length, and there was no expectation that anything would be delivered to customers in every iteration. With the time-boxed iterative model, every iteration is the same length and the team is expected to provide end users with something useful in each one. Scrum “sprints” are time-boxed iterations.
Every Scrum event within a sprint is also time-boxed. Different teams may define different time intervals for their events, but the principle is that each event is time-boxed. So, a team might decide to run 2-week sprints, and to allow 2 hours per sprint for the “sprint planning” activity; 1 hour for the “retrospective;” and so on for each of the Scrum events.
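To make the arithmetic concrete, here is a rough sketch of the event overhead in a hypothetical 2-week sprint. The planning and retrospective durations echo the example above; the review duration and the 8-hour working day are my own assumptions for illustration:

```python
# Hypothetical time-boxes for a 2-week (10 working-day) sprint.
# Planning and retrospective figures echo the example in the text;
# the review duration and 8-hour day are assumptions.
HOURS_PER_DAY = 8
SPRINT_DAYS = 10

event_timeboxes_hours = {
    "sprint planning": 2.0,
    "daily scrum": 0.25 * SPRINT_DAYS,   # 15 minutes per day
    "sprint review": 1.0,
    "retrospective": 1.0,
}

total_event_hours = sum(event_timeboxes_hours.values())
sprint_hours = HOURS_PER_DAY * SPRINT_DAYS
print(f"Event overhead: {total_event_hours:.1f}h "
      f"({100 * total_event_hours / sprint_hours:.1f}% of the sprint)")
# → Event overhead: 6.5h (8.1% of the sprint)
```

The point of the exercise: when the time-boxes hold, the remainder of the sprint is predictably available for "real work."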
In contrast with traditional methods, when a team fails to complete the activity within the defined time-box, Scrum calls for the activity to be halted.
We are not supposed to extend the time-box so we can complete the activity comfortably. That would eat into the time available for doing “real work.” This has three positive effects: (1) Teams know how much time they have for “real work,” so they can achieve high predictability in short-term planning; (2) the discomfort encourages people to inspect and adapt to find ways to achieve the goals of a time-boxed activity within the defined time-box the next time; and (3) knowing they must stop the activity when the time-box expires, people become more diligent about showing up and starting on time.
Teams in large, traditional organizations struggle mightily with this concept. They feel terribly uncomfortable when they can’t complete sprint planning or some other activity within the time-box. Their natural tendency is to continue with the work beyond the end of the time-box. This is contrary to the intent of Scrum.
How do we handle schedule slippage?
Traditionally in software development work, we were given a dictated “due date” for our work, and we provided status updates at predefined intervals (weekly, monthly, etc.). When we were unable to complete planned work on schedule, the usual reaction was to extend the schedule.
This caused some stress for managers who had to request additional funding or explain the schedule slippage to customers and other stakeholders, but it was the norm. People didn’t know any other way to handle it.
When a release date had to be missed, the situation caused additional stress and organizational churn. Often, a single feature had to be removed from a planned release. Because each type of task was performed by a different team, this created extra work for the various specialized teams responsible for infrastructure definition, operational run books, end user documentation, application packaging, application testing, application deployment, database definitions, security, networking, and more. Sometimes teams would resort to a “death march” to try to fit all the planned functionality into the release.
With the “agile” approach, we adjust the scope (or the degree of “polish”) of the solution increment so that we can deliver at least some amount of useful functionality to end users. We try to avoid “missing” a date. If there’s too much work to fit into the available time, we deliver the highest-value functionality we can, and collaborate with stakeholders to determine how best to move forward based on reality.
What’s a “user story?”
A crude definition of user story is that it is a kind of lightweight “requirements” document. That definition will not make agilists happy, but I’m trying to use words that traditional software professionals can relate to. “Agile” is full of special buzzwords that have unusual or counterintuitive definitions. “User Story” is definitely one of those special terms.
The people who came up with Extreme Programming (another “agile”-compatible method) referred to a user story as a “placeholder for a conversation.” I called it a “requirements” document, but it really doesn’t include detailed “requirements.”
A user story basically contains four pieces of information: (1) What are we to do? (2) Who wants it? (3) What’s the value to those who want it? (4) How will we be able to tell when we’ve done enough, and we can stop working on it?
To help beginners get started with user stories, someone came up with a formula: “As a [someone], I want [something] so that [value].” Then we add “acceptance criteria” so we’ll know when we can stop working on it.
But that is only a formulaic pattern to help beginners. User stories don’t have to follow that pattern. I’ve seen a lot of teams worry excessively about this.
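To illustrate the four pieces of information, here is a minimal sketch in Python. The story text, class name, and field names are all invented for the example; they are not part of any Scrum or XP standard:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserStory:
    # Who wants what, and why: "As a ..., I want ... so that ..."
    narrative: str
    # How we'll tell when we've done enough and can stop.
    acceptance_criteria: List[str] = field(default_factory=list)

    def is_ready(self) -> bool:
        # Without acceptance criteria, we can't tell when we're done.
        return bool(self.acceptance_criteria)

# A hypothetical example story.
story = UserStory(
    narrative=("As a frequent traveler, I want to save my seat preferences "
               "so that I don't have to re-enter them on every booking."),
    acceptance_criteria=[
        "Saved preferences are applied automatically to new bookings",
        "The traveler can change or delete saved preferences",
    ],
)

print(story.is_ready())  # → True
```

The structure matters far more than the phrasing: the narrative answers "what, who, and why," and the acceptance criteria answer "how do we know when to stop."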
Something else that worries a lot of teams is the fact that much of their work just doesn’t lend itself to the user story pattern. People have come up with variations, like “job story,” that don’t seem to require an end user to be involved.
A practical suggestion I might make is to ensure the items in your “backlog” at least provide enough information for team members and business stakeholders to have a meaningful discussion about the value, purpose, and scope of the work necessary to realize each idea. It can be uncomfortable for people who are accustomed to detailed requirements to begin work with lightweight statements of user needs and general acceptance criteria, but in fact the technical teams can flesh out the details as they progress, with the benefit of frequent feedback from stakeholders. It’s different from the traditional way, but it can be much more effective, even in large organizations.
That said, it may not be a practical reality for you. In your very large company, by the time your team receives its marching orders, all decisions about software functionality have already been made. Your team may see user stories for the first time during release planning or sprint planning. Someone else has already written them in great detail. You may or may not be asked for feedback. You might be asked for an estimate, but still given a dictated delivery date. It simply may be infeasible for your team to handle “user stories” in the same way as an autonomous team in a small company that has direct access to the key decision-maker for the product. Some teams in large organizations spend a lot of time and emotional energy running around in circles over this question. Just do the best you can to apply “agile” principles in your context, and don’t worry about things you can’t control.
What’s a “sprint?”
“Sprint” is just Scrum’s special word for “iteration,” with the assumption it is a time-boxed iteration.
You might wonder why they call it a “sprint,” given that the word implies a short, fast run at full speed with no thought about recovery after crossing the finish line. That seems to be in conflict with fundamental “agile” principles.
It seems that way because it’s true – the plain English meaning of “sprint” does conflict with “agile” principles. Scrum was created years before the Agile Manifesto was published. Originally, part of the idea was that software developers need to be pushed to stay productive. In a 1996 paper presented at the OOPSLA conference, the creators of Scrum described it as a “pressure cooker” in which developers were under pressure to deliver within the time-boxed iteration.
Subsequently, Scrum evolved beyond that Theory X management model and became much more compatible with contemporary notions of humane workplaces and sustainable work. Yet, the word “sprint” remains. Don’t worry about it. It just means “time-boxed iteration.”
What’s a “backlog?”
The product backlog is an ordered list of ideas for enhancing the product to provide value to customers or users. In principle, it isn’t just a list of tasks to be assigned and completed.
When Scrum is used on a single, small team (or small group of teams) in a startup or other small enterprise, the product backlog can be exactly as Scrum’s creators envision.
People experience confusion or friction when Scrum is applied in larger companies, because no single team anywhere in the company truly has the ability to treat product ideas as experiments, each feasible to test against the market.
If you work in a large company, your team is one of several (or many) that contribute to the development of a given product or a related set of products. It’s unlikely you have the option to question anything that has been decided regarding the product you support.
They call your backlog a “backlog” because that’s the official Scrum term. In reality, it’s a to-do list. That’s okay. It’s just a function of the size of your company.
What are “story points?”
Story points were devised as a way for software development teams to think about the relative “size” (effort, complexity, etc.) of each piece of work they plan to do. The idea is to separate the concept of time from the concept of relative size.
Long ago, stakeholders in software projects wanted to know how long it would take to develop a given piece of functionality. It proved difficult for software developers to provide that information, for a variety of reasons: siloed team structures, numerous cross-team dependencies, and unpredictable work schedules full of interruptions, meetings, and unexpected bug reports. In addition, work was normally done by individuals working separately, and one person might take much longer than another to complete a given task.
People started to add a “fudge factor” (sometimes called a load factor) to set aside some percentage of total work hours for all these various interruptions. Then they estimated delivery times based on the time remaining for “real work.” It was an improvement.
But the proportion of total time eaten up by unplanned interruptions was variable, and there was a wide range in these estimates. Someone thought of the idea of using an arbitrary unit of measure that wasn’t connected with time, and comparing the relative sizes of work items using this kind of unit. Some used T-shirt sizes, some used “gummi bears,” and some spoke of “points.” Most people still tended to peg points to time in some way, such as 1 point equals half a day, and yet it was another improvement.
Over time, more and more teams got used to using relative points without thinking about time. This was another improvement, as it enabled them to forecast the amount of work they were likely to complete in a given amount of time with greater accuracy than previous methods.
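A minimal sketch of that kind of forecast, assuming nothing more than a record of recent sprint velocities in points (the numbers below are invented for illustration):

```python
import math
import statistics

def forecast_sprints(recent_velocities, remaining_points):
    """Forecast how many sprints the remaining work will take, using the
    mean of recent sprint velocities. A sketch only: a real forecast
    should also account for the variability in velocity, not just the
    average."""
    avg_velocity = statistics.mean(recent_velocities)
    return math.ceil(remaining_points / avg_velocity)

# Hypothetical numbers: three recent sprints, 120 points of backlog left.
print(forecast_sprints([21, 18, 24], 120))  # → 6
```

Note that points never convert to hours here; they only relate the size of the remaining work to the team's own demonstrated delivery rate.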
As teams became more proficient with story-writing techniques, they found they were able to decompose the work into roughly same-sized chunks while still delivering a meaningful solution increment in each iteration. They began to depend less on points and were able just to count stories instead.
At the same time, the idea of maker time vs. manager time became popular. In addition, people became more aware of the impact of context switching overhead on productivity. Teams and their managers organized the day in such a way that teams enjoyed long stretches of uninterrupted “maker time,” punctuated by meetings only at predictable and planned points in the day. Now, at last, it became possible for teams to forecast how long it would take to deliver a given software feature in terms of calendar time. This is what their stakeholders and customers wanted all along.
In large organizations today, very few teams are at that point. Most of them have “stories” or other sorts of work items that vary considerably in size. They also have cross-team dependencies and functional siloes even within their own teams. On the whole, teams in large companies cannot separate the idea of “points” from the idea of “time.” They are functioning at a stage very early in the evolutionary process outlined above.
I would say most teams in large organizations that are in the midst of “agile transformation” initiatives worry too much about “story points.” They have so many other issues that prevent them from forecasting delivery performance that this question is relatively minor.
What’s the purpose of the “daily scrum” or “daily stand-up?”
Most “agile”-compatible software development processes call for team members to confer with one another frequently. Scrum specifies a daily event called the “daily scrum” wherein team members inform each other of where the work stands, identify risks and blockers, and ask for or offer help to teammates. Extreme Programming has the “daily stand-up” and Kanban has the “replenishment meeting.” Other compatible methods call for something similar.
Depending on the type of work a team performs, it may not be necessary to meet daily for this purpose. Some teams do quite well meeting 2 or 3 times per week because their work doesn’t progress as rapidly as software development; for instance, they may be hardware engineering teams.
On the other hand, teams that routinely use highly collaborative work methods may not need a daily meeting at all. As they work together on everything (using “mob programming” or “ensemble work”), they are all aware of the status of every work item, any blockers or dependencies, and any other issues, and they immediately help each other in the moment with no need to request assistance explicitly.
Software development teams in large organizations usually have a “stand-up” every day, but most of them don’t seem to understand the purpose or mechanics of the meeting. Typically, these meetings take the form of status reports. Each team member reports their work to the project manager or Scrum Master (often the same thing, in large companies).
In principle, the “stand-up” is an opportunity for team members to collaborate with each other to help pull work items across the finish line to “done,” to ask for and offer help when a teammate is struggling with an issue, and to identify blockers and other constraints that prevent the team from achieving the sprint goal.
When team members work separately in individual silos, there is often very low engagement in the daily stand-up. No one is particularly interested in what anyone else is doing, as they already have their personal work assignments. However, this is not the way “agile” teams are supposed to operate.
So, how do teams get value from the daily stand-up? There’s an old model that was intended to help beginners get started. It’s called the three questions: What did I do yesterday? What will I do today? Do I have any impediments?
Many teams in large organizations follow this format by rote, as it is often taught in introductory “agile” training sessions. But the format harks back to a time when it was very unusual for software team members to collaborate. Getting them to talk to each other was hard. The three questions forced them to tell each other what they were up to, and whether they needed any help. That’s not really necessary anymore. People are used to talking to each other.
Another format focuses on the work that is nearing completion, rather than on how busy each individual team member was yesterday. Teams stand in front of their physical task board, or they bring up their electronic task board on their screens, and they consider what it would take for them (working collaboratively) to pull the item closest to completion over the finish line. They might discuss the next item that’s close to completion, too. When they’ve discussed the work that’s likely to be done that same day, they stop. They will have another stand-up tomorrow, so there’s no need to talk about every item on the board. The exceptions are risks and blockers, which may require immediate attention.
That’s all there is to it. The stand-up is not the appropriate time for detailed design discussions. Interested team members can break off after the stand-up to discuss those matters. The canonical 15-minute time limit is another artifact from the past; it was meant to keep the meeting short in an era when status meetings often ran for 2 hours. Using the more contemporary format, it’s quite feasible for most teams to finish their daily stand-up in 3 to 5 minutes.
What are “commitment” and “forecast?”
In keeping with the “pressure cooker” concept, early versions of Scrum called for teams to make a commitment during sprint planning to complete the backlog items they selected for the current sprint. This caused a great deal of stress for software development teams for many years.
In 2011, Jeff Sutherland and Ken Schwaber, the keepers of the keys of Scrum and two of its creators, changed the wording in the Scrum Guide to eliminate the word “commitment.” Today, Scrum talks about using empirical data about the team’s recent delivery performance to make a forecast of the amount of work they are likely to be able to deliver in the next sprint.
In many large organizations today, the official line is that Scrum teams must commit to deliver the backlog items that have been assigned to them. Management uses the language of Scrum to suggest the team “owns” the commitment, but in many cases the teams have no voice in it. Accountability without power.
The disparity between the intent of Scrum and the way it’s often implemented has led to an industry-wide backlash. But in reality the problems people experience with Scrum are rarely due to Scrum as such; they stem from misapplications of Scrum.
What’s the “sprint goal?”
In principle, each sprint has a goal decided by the team, typically during sprint planning. As Scrum was originally meant for new product development, the sprint goal traditionally focuses on the delivery of a feature or set of related features in the emerging software product.
Today, Scrum is more often used to support ongoing maintenance and enhancement of existing software products. The product backlog often comprises items that aren’t closely related to each other. There may be small tweaks to existing application features, bug fixes, large-scale refactoring that doesn’t fit neatly into regular work, special requests, work items to support other teams that are temporarily overloaded, and so forth.
Many teams have difficulty identifying any single theme for a sprint. Their “goal” then becomes “complete X number of stories.” This is not very inspirational. But take heart! It isn’t your fault. It’s a consequence of the complexity of your work environment, and the fact the application you support is not brand new; you aren’t just cranking out feature after feature.
What’s “collaboration” in the agile sense?
According to Wikipedia, “Collaboration is the process of two or more people or organizations working together to complete a task or achieve a goal.”
That’s a pretty broad definition. Indeed, many novice Scrum teams, in an attempt to appear to be satisfying management’s desire to “be agile,” interpret the word collaboration in any way necessary to achieve a high score on their organization’s “agile assessment.”
In my view, there is a spectrum of collaborative working styles. A team that doesn’t practice any form of collaboration isn’t really a “team” in a meaningful sense. It’s just a list of people who carry out tasks independently of one another, possibly working for the same manager.
When two or more team members actually talk to each other and keep each other informed of what they are working on separately, that’s a minimal form of collaboration. In many large organizations, teams operate in this way and claim they are “collaborative,” and therefore “agile.”
When two or more team members who have the same general area of specialization work directly together on the same task, it represents a slightly higher level of collaboration. Two programmers or two testers or two analysts working together are demonstrating better collaboration than if they worked separately and then informed one another of what they had done.
Slightly closer collaboration: The pairs working within their specialties carry out a collaborative hand-off to the next specialists in line. For instance, when a pair of programmers is ready to hand off to a pair of testers, they all sit down together and discuss the work.
Better still, two team members who have different areas of specialization can work together on the same task at the same time. A programmer and a tester, an analyst and a tester, or an analyst and a programmer working together as a pair demonstrate still better collaboration than if they remained within their intra-team functional silos.
After some time working at the level of cross-disciplinary pairing, team members gradually learn one another’s skill sets. At that point, any two or more team members can address any task the team is facing. This is a still higher level of collaboration.
The highest level of collaboration comes when the entire team works as one on the same task at the same time. This is sometimes called mob programming and sometimes ensemble work.
What is often confusing for novice agilists in large corporations is that they expect “collaboration” to have just one meaning, and they try in vain to learn the “rules” so they can follow them by rote. The bad news – and the good news, too – is that collaboration isn’t that rigid. But in general, the more collaboratively a team works, the better the quality of the product and the shorter the time to market.
What if unplanned work comes up during a sprint?
I think I’ve mentioned already that Scrum was originally intended to support new product development. With that in mind, the Scrum process assumes it’s possible to select a subset of the product backlog to deliver in a sprint, and nothing will interrupt or change that plan. If something else comes up, the standard procedure is to save it until the next sprint. After all, most software teams who use Scrum work in 2-week sprints. Is that really too long to wait for a special request or urgent work item?
Actually, it may well be too long to wait. In large organizations (as already mentioned), teams aren’t usually just building a new product from a predefined backlog of features. They may be doing a bit of that, along with a bit of maintenance of existing applications, bug-fixing, research, refactoring, migration of data or code from one data store or platform to another, and a whole raft of other possible tasks.
Unexpected events such as production support tickets, regulatory changes announced with short notice, special requests from powerful senior executives, and changing business priorities due to shifts in the competitive landscape can raise new issues that must be addressed without delay, even if teams are in the middle of a sprint.
Scrum has a mechanism to deal with this, although originally it was assumed interruptions would be the exception rather than the rule. The Product Owner decides which planned work items can be removed from the sprint to make room for the new, urgent work item. As soon as the team completes the item currently in progress, they can turn their attention to the new, urgent item.
In principle, this should be a seamless and painless process. The team wants to deliver the most valuable work they can. If priorities change, then it only means something else has become the most valuable work.
But two factors common in large organizations can create friction.
First, there is the assumption that whatever was promised for the current sprint must be delivered come hell or high water. This is not the intent of Scrum, but it is a very common perception. The emphasis on the old notion of “sprint commitment” doesn’t help.
Second, there is the fact that nearly all traditional software development teams start all the work on Day 1 of the sprint. Every team member is fully loaded, and everyone is in the middle of something when the new, urgent item comes up. No one is in a logical or comfortable place to pause the work they are doing in order to address the urgent item.
The second factor has a domino effect. Half-finished work has to be set aside…but how and where? Typically, teams create separate branches in version control and stuff the unfinished work into them. In a few weeks’ time, they have accumulated a bunch of branches that contain incomplete code that may or may not be suitable to merge back into the main branch. Some of the work that was previously completed has to be started over.
This leads to “hangover” – backlog items that were not completed in the sprint in which they were started. Unless this is remedied, more and more unfinished work piles up in the backlog, and the team has less and less time for “real work” as time goes on. This only creates stress for team members and product stakeholders. It often leads to rushing and cutting corners in the work, which in turn results in more defects, which leads to proportionally more time fixing bugs and less time adding features that customers want.
Teams are in the best position to handle unexpected, urgent work items when they (1) limit work in process by working collaboratively; (2) really, truly finish a work item before closing it and taking up a new item; and (3) leave about 30% of capacity available when they are planning the sprint, for flexibility.
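Point (3) is simple arithmetic. Here is a sketch, using the roughly 30% figure from the suggestion above as a default (your own team's history should drive the actual number):

```python
def plannable_points(avg_velocity: float, slack_fraction: float = 0.30) -> int:
    """Points to plan into the sprint, holding back slack for unplanned,
    urgent work. The 30% default is the rule of thumb suggested in the
    text; tune it to your team's actual rate of interruptions."""
    return round(avg_velocity * (1 - slack_fraction))

# A team averaging 20 points per sprint plans about 14, keeping 6 in reserve.
print(plannable_points(20))  # → 14
```

If urgent items rarely show up, the slack simply becomes room to pull in the next most valuable backlog item early.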
Is there such a thing as an “agile” software factory?
The idea of the software factory became popular for a while back in the previous millennium, when people assumed producing software was basically the same as producing widgets on an assembly line. In some larger companies, the idea persists to the present day.
The idea directly conflicts with “agile.” Yet, many larger organizations explicitly want to set up a software factory and use “agile” methods to do it. How can that work?
It can work because we can accept the fact that “agile” in a large organization won’t be identical to “agile” in an ideal situation. As already mentioned, the product backlog is basically a to-do list by the time any individual team sees it, and release dates are often predetermined and immutable, to align with global marketing campaigns and the expectations of millions of customers. The backlog isn’t a list of possible market experiments from which the team is free to choose.
There are several aspects to “agile” software delivery, and it’s feasible to apply some of them without applying all of them. The experimental aspect of “pure agile” development can be adapted to the needs of an organization that has a long product pipeline with many expensive associated activities besides just the software development piece. We can still take advantage of the power of collaboration, whole-team ownership of the work, iterative development, incremental delivery, automation, the concept of sustainable pace, humane working conditions, and lean-based process improvement.
A 20th-century software factory isn’t really possible at all, no matter what sort of development process we use. By the same token, a purely “agile” approach may not be possible for all large organizations, either. We can bring the two ideas together to meet in the middle at some rational and practical point.
Where can we find credible information?
Misunderstandings of basic “agile” concepts are very common in large organizations carrying out transformation initiatives. Given that “agile” is hardly new and there’s ample documentation of it, why is that the case?
My observation is that most large organizations have a case of NIH – Not Invented Here. Each company wants to define its own “agile” process. In addition (and for me this is singularly strange), most long-term employees of large companies rarely, if ever, look beyond the four walls of their employer for information about anything, be it “agile” or any other aspect of software development and operations. Their understanding of “agile” is limited to whatever the internal trainers came up with, and those trainers usually have little or no industry experience applying “agile.”
A second cause of confusion is the fact that there are countless information sources that conflict with one another. If people do go out to the Internet to search for information about “agile” or Scrum or related topics, they will find all sorts of overlapping and contradictory statements. Lacking previous experience of their own, they have little chance of sorting through all that material.
If you’re implementing Scrum (and who isn’t these days?), the only official guide is the Scrum Guide maintained by Scrum’s creators, Jeff Sutherland and Ken Schwaber. Everything else you might find is an interpretation, an extrapolation, a variation, or a critique of Scrum.
How can we measure progress?
Most large organizations track progress with their “agile” initiative by comparing the way each team works with the canonical Scrum events. Does the team have a daily stand-up every day? Check. Does the team have a retrospective every sprint? Check. Does the team have a sprint review every sprint? Check. Does the team perform backlog refinement every sprint? Check. And so on. The problem is this doesn’t tell us much about whether the team is achieving the business goals of the transformation.
In my opinion, metrics for tracking progress in process improvement must not have any dependencies on the process itself, because the way we will improve the process is by changing it. Earned Value Management (EVM) works for a linear process because we identify all the work packages and their costs up front. As we move toward an “agile” approach, EVM breaks down and we can’t compare delivery performance before and after. Similarly, popular “agile” metrics like Velocity only work when the time-boxed iterative process model is applied correctly; Velocity is meaningless during the period when teams are trying to move from their familiar linear process toward Scrum because they aren’t yet delivering anything in every sprint. By definition, then, they have no Velocity.
Metrics from the Lean school of thought tend to be more useful for tracking improvement than the metrics we might use to track delivery performance within a specific process model. That’s because Lean metrics aren’t based on how the work is done; they only measure results. These are measures such as Lead Time, Cycle Time, and Process Cycle Efficiency.
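As a rough illustration of how those Lean measures are derived, here is a small sketch. The timestamps, field names, and touch-time figure are hypothetical, and definitions vary slightly from source to source; the definitions used below are the common ones: Lead Time from request to delivery, Cycle Time from start of work to delivery, and Process Cycle Efficiency as value-added (touch) time divided by Lead Time.

```python
from datetime import datetime

# Hypothetical work-item record; all values are illustrative.
item = {
    "requested":  datetime(2024, 3, 1),   # stakeholder asked for it
    "started":    datetime(2024, 3, 8),   # team actually began work
    "delivered":  datetime(2024, 3, 15),  # in the customer's hands
    "touch_days": 4.0,                    # days of actual hands-on work
}

lead_time  = (item["delivered"] - item["requested"]).days  # request -> delivery
cycle_time = (item["delivered"] - item["started"]).days    # start -> delivery
pce        = item["touch_days"] / lead_time                # value-added fraction

print(lead_time, cycle_time, round(pce, 2))  # 14 days, 7 days, ~0.29
```

Note that none of these numbers depend on whether the team uses sprints, Kanban, or a linear process, which is exactly what makes them usable across a process change.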
On the technical side, improvements in operations (production stability, mean time to fix, etc.) can be captured using observability-based monitoring tools and/or log aggregation tools. Improvements in delivery can be seen by tracking escaped defects and feature lead time. It isn’t necessary to track a gazillion different metrics; just the right ones for your situation and goals.
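An escaped-defect measure can be as simple as the share of defects found in production rather than before release. A minimal sketch, with made-up counts:

```python
# Hypothetical "escaped defects" measure: the fraction of all known
# defects that were discovered in production rather than before release.

def escaped_defect_rate(found_in_prod: int, found_before_release: int) -> float:
    total = found_in_prod + found_before_release
    if total == 0:
        return 0.0  # no defects recorded at all
    return found_in_prod / total

# Illustrative counts: 6 escaped, 54 caught before release
print(escaped_defect_rate(6, 54))  # 0.1 -> 10% of defects escaped
```

A falling escape rate over several releases is a process-independent signal that delivery quality is improving.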
I’ve seen all these misconceptions about “agile” and Scrum in large organizations that were trying to use “agile” to improve software delivery and support. Some of them may apply to your own organization. If so, I hope at least some of this has been helpful.