There’s no shortage of advice about best practices. Naturally, commercial enterprises like IBM and Construx have advice, and theirs is designed to lead people toward their products and services. Of course, that’s because they believe their products and services support best practices; after all, they designed those products and services on the basis of what they consider best practices. So it’s all good, right? They’re financially successful, so we can trust them. Besides, everyone knows that commercially packaged best practices are carefully designed and well proven.
Individual practitioners are also eager to share the practices they have found "best" in their work. Their advice can be helpful, provided we remember it tends to be opinionated ("Git:Mercurial = Assembler:Java" — not "practices," BTW); haphazard and idiosyncratic (not to mention sexist and poorly written); specific to a single technology (and poorly written); or a rehash of generalizations (supported by questionable references). Like this Wikipedia article (as of 24 July 2013), such advice "has multiple issues."
So, what does "best" mean? There are various opinions. Urs Gattiker suggests "Best Practice is a superior method or innovative practice that contributes to the improved performance of an organization, usually recognized as ‘best’ by other peer organizations," while "Good Practice means to carry out a function or testing using only recommended or approved methods (e.g., food regulation)." That sounds pretty good, but Rob England cautions that "’Best practice’ does NOT mean state-of-the-art: there is no way to know yet whether unproven practice is ‘best’ or a blind alley or an awful destructive idea." Scott Ambler suggests a practice can only be ‘best’ or ‘good’ in context; he suggests we think in terms of contextual practices. Many others worry that people will latch onto a "best practice" and stop thinking about continual improvement. Today’s best practice is tomorrow’s yesterday’s news.
My rule of thumb for using any sort of tool effectively is (1) choose a tool that is appropriate to the task at hand, and (2) learn to use the tool well enough that you won’t hurt yourself (much). Software development practices are tools. Understanding the context is critical for choosing appropriate practices, and I think Ambler offers a balanced voice amidst the white noise of general discussion about practices.
But understanding the context and recognizing which practices are appropriate are easier said than done. In a post about one particular software development practice (pair programming), I mentioned a problem that seems widespread in our field: Binary thinking. People seem to expect any given development practice to be universally "good" or "best" or "bad" or "worst." The irony is that people who apply logic to their work every minute of every day seem unable to apply logic to the question of good practices. Some are quite vocal in their expression of binary thinking, like Ben Northrop and Jon Evans. I can certainly imagine that pairing with them would be just as unpleasant as they say it would be. Others try to be objective, but reveal their biases nonetheless, like Jeff Atwood, who grudgingly acknowledges that some people (not him) might get some value from pair programming, and C. Mountford, who grudgingly acknowledges that some people (not him) might not get value from it. All of that just comes back around to opinion in the end.
I don’t mean to harp on pair programming in particular. This applies to any and all software development practices. So, are there any "best" practices? What about "good" practices? The word "best" is problematic. It can be understood to mean "best that could ever be" or to mean "best currently known." I prefer the latter definition.
I’m more interested in outcomes than in practices; that is, I’m interested in the effectiveness with which the team delivers value more than in the techniques they apply along the way. I have opinions about all the various software development practices, just like everyone else has. I’ve also had the opportunity to observe many teams in action, and to coach teams in improving their effectiveness. One observation I’ve found useful is this: Teams (and individuals) perform best when they are using methods they understand well and that they have chosen for themselves. The outcomes I have observed tend to fall into four levels of performance. From best to worst they are:
- Team (or individual) uses "best currently known" practices because they understand the practices and have chosen the practices for themselves.
- Team (or individual) uses good practices that may not be "best currently known" but are the best they happen to know at the moment, and they have chosen the practices for themselves.
- Team (or individual) uses whatever methods and practices they are told to use, and they don’t particularly care one way or the other about debating the matter.
- Team (or individual) is being coerced into using methods and practices they do not wish to use.
Point #4 is critical, in my view. If a programmer does not "believe in" a practice such as test-driven development, and management forces him/her to test-drive code, the results will (miraculously?) be poor. Later, he/she will point to those results as evidence that TDD "doesn’t work." Conversely, if a programmer swears by TDD and considers it the best or only way to deliver good software, and management or the team lead forbids him/her from using the practice, the results will (miraculously?) be poor. Later, he/she will point to those results as evidence that TDD is a critical success factor. Binary thinkers will selectively ignore one case or the other and conclude that TDD categorically "works" or "doesn’t work," according to their individual beliefs. The same pattern applies to any other practice.
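For anyone who hasn’t seen the practice up close, "test-driving code" simply means writing a small failing test before the production code, then writing just enough code to make the test pass. Here is a minimal sketch of one such cycle using Python’s built-in unittest module; the slugify function and its behavior are invented purely for illustration.

```python
# A minimal sketch of one TDD cycle (uses only the Python standard library).
# The slugify function is hypothetical, chosen just to show the rhythm.
import unittest

def slugify(title):
    # Written *after* the test below was seen to fail ("red"),
    # with just enough code to make it pass ("green").
    return title.strip().lower().replace(" ", "-")

class SlugifyTest(unittest.TestCase):
    def test_replaces_spaces_with_hyphens(self):
        # Step 1: write this test first and watch it fail.
        self.assertEqual(slugify("  Best Practices  "), "best-practices")

if __name__ == "__main__":
    unittest.main()
```

The point of the sketch is not that this is "the right way" to write the function; it is that the programmer’s attitude toward the cycle itself, willing or coerced, shapes the results far more than the mechanics do.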
FWIW my advice is:
- "It" can neither succeed nor fail. Only people can do those things. "It" is just a tool. People have to learn how to use tools properly. Don’t kid yourself: The first time you use an unfamiliar tool, you will not use it well. That will be true even if you are very, very smart. Take responsibility for your own results. It’s the poor craftsman who blames his tools.
- Don’t look for best practices. Instead, find out what the effects of each practice actually are. Then determine whether you want those effects in your work. For example, if a practice is touted to help reduce defects, and you would like fewer defects in your code, then it might be worth your while to try it out.
- Adopt a mindset of continual improvement. Keep an open mind about unfamiliar practices. You don’t know everything, even if you have 20 years’ experience (or 6 months’ experience repeated 40 times). If you’re doing things the same way today as you did on this date last year, then you haven’t learned anything in the past year.
- If some people extol a practice while others vilify it, find out what actually happened in each case so you will be equipped to make an informed judgment. The reasons for their results are unlikely to be as simple as "it works" or "it doesn’t work."
- It’s quicker to try things for yourself than to learn about effective practices by reading studies. Studies are general; experimentation in context will provide practical insights you can use.
- Nothing prevents you from trying a practice you want to check out. You’re a professional; control your own work.
- Nothing prevents you from discontinuing a practice you’ve decided you don’t want to use. You’re a professional; control your own work.
- Be happy. You don’t have to be hyper-productive. Your company probably wouldn’t be able to keep up with you if you were, anyway. If you are satisfied with the way you are presently working, there is no problem to solve.
Out of curiosity, where did you get the idea that I was strongly pro- or anti-anything?