Refactor Anyway

Have you heard or read statements like the following?

  • “Tests are the wall at your back. You gotta have tests or you don’t know what you’re doing.” (Sandi Metz video; time index 9:35)
  • “Not using an IDE with refactor tools like the ones discussed above is a waste of time.” (Brian Ambielli)

I’ve seen a lot of people paralyzed by this advice. But why? It’s good advice, after all.

I think the problem is that advice like this assumes the listener already understands what software development work entails, and can synthesize information and apply new techniques in context, with the benefit of substantial experience in the relevant activities and skills.

Absent those conditions, advice like this can scare people. They assume they literally cannot or must not attempt to refactor code unless the code is already well covered by a comprehensive and meaningful suite of executable checks, and they have the privilege of using very specific tools that verify the refactoring is completed safely.
Continue reading Refactor Anyway

Remote Mob Programming: Lessons from This Week

A lot of people are doing remote Mob Programming sessions now that most of us are working from home. I participated in a few and hosted one in the past week. It’s been a good learning experience. I feel as if we (as an industry or community) are rapidly learning how to make this approach work for regular work, virus or no virus.
Continue reading Remote Mob Programming: Lessons from This Week

Handling Unplanned Work (Webinar)

Most software development teams today are using a process based on the idea of iterative development and incremental delivery.

Exactly how long the iterations are and how big the increments are may vary. Whether the iterations are treated as time-boxes or are used to establish a cadence may vary, too. But in general, teams are no longer using an end-to-end linear process for software development and delivery.

The details of various process models and frameworks differ, but the overall shape of the process is that teams carry out planning in waves or stages, zeroing in on near-term work close to the time they will start on it. Then they plan to bite off a reasonable chunk of the planned work and deliver it within a certain time frame.

We want to get in front of risks and external dependencies so that we will not stumble over them at the last moment, but on the whole we defer detailed planning and design until close to the time we will implement the software. That way, we avoid investing a lot of time in details that are subject to change.

We all know the mantra, “Plans are useless but planning is essential.” We all know that our plans are a best guess at a point in time, and nothing more. And yet, we all feel stressed when our plans are disrupted.

We know that unexpected things happen in the normal course of work. Priorities change. Urgent requests come in. Production issues occur.

And the processes we use – Scrum, Kanban, “Scrumban”, or some customized variation of these – include mechanisms to handle unexpected work gracefully.

Despite all these things, we still tend to react to unplanned work as if it were an emergency.

Is everything really an emergency? And what should we do when the unplanned work pushes our planned work aside?

In the context of software development, what’s an emergency, really? Is every unexpected occurrence a genuine emergency? How can we tell the difference in the heat of the moment?

Is it realistic to think a development team can take on more work than their capacity, just because the new work is deemed more important than other work in progress?

Is it sensible to gloss over software engineering principles, testing, refactoring, and general attention to detail in order to push code to production in a rush?

What are the mechanisms built into processes like Scrum and Kanban to handle unexpected, urgent work that wasn’t planned?

Why do people react as if they are required to complete everything that was planned in addition to every urgent, unplanned item?

What are practical ways to respond when an unplanned item comes up – ways that ensure we are delivering the maximum possible business value, with emphasis on the word possible?

Join us in the next First Monday Webinar to explore these questions and to consider the unique problems you are facing in your own organization. Two sessions with the same content are scheduled:

Participation is limited, so that we’ll have time and capacity to explore your real-world situation. See you online!

Code Katas

A code kata is a formulaic or choreographed way to practice programming skills. Many katas can be done in different ways to achieve different learning goals. Others are designed to help us internalize a sequence of steps to address common programming problems we encounter in our work.
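
To make this concrete, consider the well-known FizzBuzz kata: a deliberately small, fixed problem that can be repeated with different learning goals in mind (test-first, avoiding conditionals, learning a new language, and so on). A minimal Python solution might look like the sketch below; the value of the kata lies in the repeated practice of arriving at it, not in the code itself.

```python
def fizzbuzz(n):
    """Classic FizzBuzz: multiples of 3 -> "Fizz", multiples of 5 -> "Buzz",
    multiples of both -> "FizzBuzz", anything else -> the number as a string."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# One pass over 1..15 exercises every branch.
print([fizzbuzz(n) for n in range(1, 16)])
```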

In 2012, writing on a blog I used to maintain, I attempted to explain why we practice code katas. Based on my observations of coding dojos and similar meetups in the years since, it seems to me people still don’t understand the purpose of a code kata, or how to use katas to build skills.

When I searched for an explanation of how katas are used in martial arts, I discovered a very good explanation by martial artist Martin Jutras, on the Karate Lifestyle site. One thing that makes his explanation particularly relevant is that he addresses common misconceptions about katas. I’d like to quote from that site now, and correlate Jutras’ observations about martial artists with my observations about programmers, in the hope of clarifying the role of code katas in our professional development.
Continue reading Code Katas

It’s time to re-group and prepare for the upturn

In December 2008, I attended a talk by Niel Nickolaisen about his Purpose Alignment Model, or PAM. I found it compelling and useful, and I’ve applied it with many clients since then.

I might describe it crudely as a typical consultant-style “quadrant diagram” showing mission criticality or alignment with business purpose on the X axis, and market differentiation on the Y axis. There’s a walkthrough of a hypothetical bank business in my book, Technical Coaching for IT Organizational Transformation, and similar descriptions on Todd Little’s blog, on Cory Foy’s blog, on KBP Media’s site, and many other sources you can find easily online. Some people call it the Purpose-Based Alignment Model rather than the Purpose Alignment Model; both names refer to the same model, which was also known early on as the Silly Little Model, after co-creator Todd Little.

One of the ways the model can be used is to think about how to position or re-position a company during economic downturns. Companies that use downturns to prepare for the next upturn are positioned to hit the ground running when the economy ramps back up again, as it always does. Companies that react to downturns in fear, by cancelling initiatives and halting capital spending, tend to lag behind the competition when the economy emerges from the tunnel. Their behavior during downturns is a telltale that they can only prosper by following those with better leadership, feeding on the excess the successful companies produce during the good times.

In the talk, Niel had just finished saying more-or-less the same thing when a participant in the session asked him what his company was doing just then. (You may remember Q4 2008 was the start of the “financial crisis” of the late 2000s.) He replied that they were cancelling all initiatives and halting capital spending. They intended to wait and see what other companies did, as a signal to know when it would become safe to operate normally again. I was surprised at the answer.

In the year 2020, we’re in another economic downturn. Most companies are handling the situation by cancelling all initiatives and halting capital spending.

Now is the time to prepare to hit the ground running when the economy emerges from this tunnel. The companies that will succeed when the economy rebounds are already investing in building the capabilities and skills of their staff (through training and mentoring), and preparing their work environments for the post-pandemic world:

  • physical office setups that minimize the risk of infection (changing open-plan work spaces, reducing the use of conference rooms);
  • adjusted work schedules so people aren’t forced to crowd together unnecessarily (such as waiting 30 minutes at a bank of elevators because everyone must arrive at the same time);
  • changes to common operating procedures (such as requiring employees to crowd together for executive announcements or “town hall” meetings they could just as well attend over the computer network); and
  • intentional support for remote or distributed work.

Consider investing in training for your staff. Take advantage of the free training offers that have proliferated, as training companies struggle to remain visible in this slow market. But don’t expect training to remain “free” forever. There is value in good training, and value doesn’t fall from the sky like rain…at least, not usually. Even in these times, the highest-quality and most practical training for technical staff isn’t offered free of charge, with good reason. You get what you pay for.

COBOL is not the problem

In the midst of the Coronavirus pandemic, there’s a lot of buzz on social media about a sudden need for COBOL programmers to help US State government agencies cope with the problem. IBM is even offering free COBOL training courses through a partner that specializes in mainframe-focused technical training.

But what’s the problem, really?

Domino #1 (Coronavirus) knocks down domino #2 (constrained mobility), which knocks down domino #3 (reduced demand for workers), which knocks down domino #4 (an increased unemployment rate), which knocks down domino #5 (increased demand for State services), which bangs its head against wall #1: inadequate computing capacity to handle the increased workload.

Many of the State government computer systems are written in COBOL. Therefore (the reasoning goes), there’s a desperate need for more COBOL programmers.

Let’s pause for a moment and take a deep breath (through our masks, of course).
Continue reading COBOL is not the problem

Effective Distributed Collaborative Work

Everyone’s talking about remote or distributed work these days. Here’s another opinion, for what it’s worth.

Steve Glaveski describes Matt Mullenweg’s concept of five levels of distributed work in a March 29 article on Medium. You can read it for yourself. In fact, you should do so, as I’m not going to summarize it. The rest of this article assumes you’re familiar with it.

Just now I’d like to make an observation about Level 4, Asynchronous Communication, and the implications for software development teams in particular.

On the path from dealing with distributed work in an ad hoc way (Level 1, Non-Deliberate Action) toward the goal state (Level 5, Nirvana), “where your distributed team works better than any in-person team ever could,” the ability of individuals to work effectively without constantly interrupting one another is consistently equated with “better.” As people build skills in doing this, and as the organizational culture adapts, performance improves.

That’s probably true for many kinds of office work, but not for all kinds.
Continue reading Effective Distributed Collaborative Work

The Curious Incident of the Magnetic Scotty Dogs in the Night-Time

Long, long ago, in February of 2020, Joshua Kerievsky of Industrial Logic fame published a blog post titled “A Tale of Two TDDers.” In it, he described a production issue that he said was based on real-world experiences in a real code base. He described two teammates, David and Sally, who took very different approaches to solving the problem.

Although based on a real incident, the story in this blog post is not the actual tale; it’s a contrived version in which David and Sally separately solve the same problem two different ways. David used a test-driven approach and took several hours to refactor a bunch of related code. Sally pushed a quick fix into production and the team immediately added the missing test case(s). Of course, in real life the team solved the problem just once. But Josh wanted to set up a question for readers: “Which programmer would you prefer on your team?”

The post generated quite a bit of enthusiastic discussion. So much, and so enthusiastic, that just six days later Josh published a follow-up, “One Defect, Two Fixes,” to try to clarify. He mentioned the first post had “generated a lot of interesting discussion, some of which bordered on deep misunderstanding.”

Continue reading The Curious Incident of the Magnetic Scotty Dogs in the Night-Time

Against TDD

Test-Driven Development (TDD) is a tool. To get value from a tool, it’s necessary to:

  1. choose the right tool for the job; and
  2. use the tool properly.

Circa 2019, there are numerous debates about whether TDD is useful. It seems to me many of the arguments against TDD boil down to one of the following:

  • trying to use TDD in situations where it is not the right tool for the job; or
  • using TDD improperly.

Choosing the wrong tool or using a tool improperly are certainly things that we fallible humans do from time to time. However, when people have experienced those things with respect to TDD, that isn’t what they say. Instead, they say TDD categorically does not help. It is inconceivable that they could have made a mistake. The tool they used must have been at fault. They are here to warn you against being harmed by that tool.
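
For reference, proper use of the tool means following TDD’s short cycle: red (write a small failing test), green (make it pass with the simplest change that works), then refactor with the test as a safety net. A minimal sketch in Python follows; the leap-year example is illustrative, not drawn from any particular codebase.

```python
# Red: the test is written first, before leap_year exists, so it fails.
def test_leap_year():
    assert leap_year(2024)        # years divisible by 4 are leap years...
    assert not leap_year(1900)    # ...except centuries...
    assert leap_year(2000)        # ...unless divisible by 400

# Green: the simplest implementation that makes the test pass.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Refactor: with the test green, the code can be reshaped safely.
test_leap_year()
```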

Continue reading Against TDD

Reducing process overhead in Scrum

In a social media discussion in mid-2019, several people expressed surprise at the idea that Scrum might include “overhead.” The confusion seemed genuine. Some people asked for examples. It seemed they were unable to conceive of “overhead” in Scrum.

The popularity of Scrum has led to an interesting situation in the Agile community. Many people view Scrum as The Answer. It’s the only and best way. There is no possibility to improve beyond Scrum. Everything in Scrum is valuable by definition.

In reality, every process includes overhead. We do things for customers/users, and we also do things to position ourselves to do things for customers/users. When we don’t distinguish between the two, we can fall into the trap of trying to perfect our overhead activities, rather than minimize them.

Continue reading Reducing process overhead in Scrum