
What’s our focus for improvement?

Here’s a story.

A development team was thrashing as they attempted to use Scrum to help them build a new application. They had numerous defects; some escaped to production, while others were caught within each Sprint. Many defects they thought they had fixed popped up again in production.

Their stakeholders had high expectations, as the team’s velocity was high. After several Sprints, it became clear they were not going to deliver all the planned features by the defined deadline. The news came as a surprise to stakeholders.

The Scrum Master noted the team’s velocity was artificially high because they were assigning one story point to each known defect during Sprint Planning. They were scoring velocity points for fixing defects, so not all the delivered points represented real progress on planned features. They might have, say, five points’ worth of feature development and 15 points’ worth of defect fixes in a given Sprint.
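
To see why the stakeholders were surprised, it helps to run the numbers. Here is a minimal sketch in Python using the hypothetical five-and-fifteen point split from the story; the 100 remaining feature points are an assumption purely for illustration.

```python
# Reported velocity counts defect-fix points as if they were feature progress.
feature_points_per_sprint = 5    # real progress on planned features (from the story)
defect_points_per_sprint = 15    # points claimed for fixing defects (from the story)

reported_velocity = feature_points_per_sprint + defect_points_per_sprint  # 20

remaining_feature_points = 100   # assumed size of the remaining planned work

sprints_by_reported_velocity = remaining_feature_points / reported_velocity
sprints_by_feature_velocity = remaining_feature_points / feature_points_per_sprint

print(f"Forecast from reported velocity: {sprints_by_reported_velocity:.0f} Sprints")
print(f"Forecast from feature velocity:  {sprints_by_feature_velocity:.0f} Sprints")
```

By the reported figure, the remaining work looks five Sprints away; by actual feature throughput, it is twenty. A gap that large stays invisible until the deadline is close, which matches the surprise described above.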

The team’s corrective action was to collect all the known defects into a single user story during Sprint Planning and assign that story one point. As a result, their reported velocity more accurately reflected their progress on planned features, and stakeholders got earlier warning when the project was falling behind schedule.

This was an improvement, in a sense. But the team continued to produce defects. Worse, defects they believed they had fixed often recurred in production, sometimes three or four times.

What was the team focusing on for improvement? They were not thinking about their delivery rate, software quality, personal stress levels, or other factors directly related to effective delivery. They were thinking about how their velocity chart looked. That’s all.

Observations

What do you glean from the story? I may be mistaken, and I may have overlooked something, but here’s what I understood from it:

  1. Scrum overlaid atop a linear project plan.

    Indicators: Team frequently mentions things like hard due dates, falling behind schedule, and missed delivery commitments. They have no Sprint Goal, but are assigned specific stories they are expected to deliver. Estimates are treated as promises, and are based on clock time – not on relative sizing or ideal time (they say the word “points,” but the numbers are really clock time). They do not use their own demonstrated delivery performance to forecast realistic performance for short-term planning. Stories are wired into specific Sprints well into the future. Their Product Backlog is a Work Breakdown Structure decorated with Scrum buzzwords.

  2. Poor stakeholder engagement.

    Indicators: Team has difficulty completing User Acceptance Testing (UAT) within each Sprint because they cannot get (internal) customers to attend Sprint Reviews or otherwise look at the results of a Sprint to give feedback and/or approval. The company’s Agile initiative defines a Sprint as two weeks in length to correspond with an overall two-week pulse or cadence of work across the enterprise. Stakeholders of software teams are expected to work on the same two-week cadence. However, most continue to work as they did before, engaging with software teams only when they have spare time, and preferring to wait until the finished application is delivered in full before taking an interest in it.

  3. Wrong focus for improvement.

    Indicators: All discussion about improvement and all changes aimed at improvement are focused on influencing the metrics that the team’s management pays attention to. There is no discussion about how to improve flow, prevent defects, reduce team member stress, achieve reliable forecasting, improve code quality, or any other factors directly relevant to delivery effectiveness.

  4. Possible misunderstanding of value-add vs. non-value-add activity.

    Indicators: Team members spend a lot of time conducting Scrum events (or meetings of some kind), talking about Scrum overhead activities, and performing administrative tasks in the project management tool. They do not time-box their Scrum events. They do not time-box spikes. They do not really understand why some backlog items are estimated and others are not; all work is the same to them. They are not aware of how much time they spend in overhead activities. They do not use Scrum’s time-box mechanism to limit non-value-add time, nor do they appear to understand it in principle.

  5. No concept of throughput vs. resource utilization.

    Indicators: Team starts every User Story on the first day of a Sprint. The stories remain in In Progress status until the last day of the Sprint. The level of WIP is equal to the number of tasks in play – in effect, there is no WIP limit. The focus of the Daily Scrum is on activity rather than completion – I worked on this yesterday, I’m working on that today, I’ll work on the other thing tomorrow. No self-directed urgency to finish anything is evident. (See the sketch after this list for what this pattern does to cycle time.)

  6. No follow-through, no double-loop learning.

    Indicators: Team seems to accept the pattern of recurring defects as normal. This suggests they do not adequately verify that a fix actually holds and doesn’t cause new problems, and they do not fold whatever they learn from fixing a defect back into their development and testing practices going forward.

  7. Possible misunderstanding of Scrum mechanics.

    Indicators: Sprints are merely marks on a calendar by which specified deliverables must be completed. Scrum events are not time-boxed and do not have distinct objectives. Every event is a generic meeting in which anything and everything may be discussed at an arbitrary level of detail. Every meeting starts late and runs over time. Retrospectives are used to celebrate stories that were completed and to lament those that were not. No other topics or issues are raised. No follow-up action is ever decided. Items from previous Retrospectives are never revisited.

  8. Working in functional silos.

    Indicators: Team says they discover defects within the Sprint, which typically means team members are working in functional silos like programming (they call it development) and testing, with work handed off between the silos, back and forth. To track this, the team formally logs a defect in a tracking tool for each of these hand-offs. By definition in this organization’s Agile program, a defect is an escaped defect; until a team declares a story done, it’s work in progress – incomplete, maybe, but not defective in the same sense as a production bug. So this team is not following the Agile process defined by its own management, a deviation probably driven by its internal functional silos. The team also schedules a regression testing Sprint prior to each release, without appearing to consider pulling regression testing into the routine work of a normal Sprint. During the regression testing Sprint, developers unofficially try to catch up on work they didn’t have time to finish on the normal schedule, often working overtime. These are typical symptoms of internal functional silos.
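
As promised in item #5, here is a back-of-the-envelope look at what the start-everything-on-day-one pattern does to cycle time. It is a minimal sketch using Little’s Law (average WIP = throughput × average cycle time); the Sprint length comes from the story, but the story count and the WIP limit are assumptions for illustration only.

```python
# Little's Law: avg_WIP = throughput * avg_cycle_time
# All quantities below except the Sprint length are assumed for illustration.

sprint_days = 10          # a two-week Sprint, per the enterprise cadence
stories_per_sprint = 10   # assumed Sprint backlog size

throughput = stories_per_sprint / sprint_days   # 1 story/day in both scenarios

# Pattern from the story: start all ten stories on day one.
wip_all_at_once = stories_per_sprint
cycle_time_all_at_once = wip_all_at_once / throughput    # 10 days per story

# Alternative: limit WIP and finish stories a few at a time.
wip_limited = 2                                          # assumed WIP limit
cycle_time_limited = wip_limited / throughput            # 2 days per story

print(f"WIP {wip_all_at_once}: average cycle time {cycle_time_all_at_once:.0f} days")
print(f"WIP {wip_limited}: average cycle time {cycle_time_limited:.0f} days")
```

The throughput is the same on paper, but with limited WIP the first stories are finished and reviewable within days, instead of everything sitting In Progress until the last day of the Sprint. That is the practical difference between managing for throughput and managing for resource utilization.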

Possible actions

That’s my attempt to distill some possible underlying problems from the observable behaviors of the team and its internal business partners. You might see it differently, or have additional observations. I would be interested to hear them.

It seems to me that issues #1 and #2 are organizational issues beyond the scope of a single team. Possible corrective action could include management coaching for the business unit that engages the team to support its product, and for the internal users of the product who are disengaged from the process. Both groups need to understand the fundamentals of iterative and incremental development, as well as what is expected of them under their company’s organizational changes – particularly the enterprise-wide two-week cadence for work.

Issue #3 may also be an effect of organizational culture. There is probably a reason why team members are deeply worried about how their metrics look to management.

Issues #3, #4, #5, and #6 relate to the team’s understanding of fundamental concepts that are key to achieving improved delivery performance, regardless of buzzwords like Agile and Scrum. Possible corrective action could include training and coaching at the team level.

Issue #7 seems to be a general misunderstanding of the Scrum framework – both the why and the how. The team has received basic training in Scrum, but effective application of Scrum requires ongoing team-level coaching for some period of time. Based on my observations of the team, I would expect three months of day-to-day coaching to be sufficient to launch the team on a good path.

A sort of “meta-issue” related to #6 is management’s assumption that by exposing people to unfamiliar concepts in a brief introductory training session, they will immediately internalize those concepts and be able to apply them in their regular work, in their existing real-world context. The problem is exacerbated by the scale and speed of the Agile initiative relative to the number of coaches available. This is why Scrum is widely misunderstood at the team level across the organization.

Issue #8 stems from a misunderstanding of the value of close collaboration and of the methods for achieving it. There is a focus on individual work, specialization, and formality. Possible corrective action could include team-level coaching to encourage collaborative work and the cultivation of multiple skill sets by individual team members.

Have I misunderstood or overlooked something, in your view?