
Utilization thinking vs. throughput thinking

In helping organizations achieve their goals for process improvement, I have found the single most prevalent conceptual barrier to be the distinction between throughput and resource utilization. Many, and possibly most, organizations hinder their own process improvement efforts by trying to maximize individual resource utilization rather than trying to maximize throughput. Once they are able to move beyond utilization thinking, many other challenges fall away naturally.

Utilization thinking

In the era of the Industrial Revolution, many manufacturing operations became mechanized. Machines were expensive, and factory owners wanted to be sure they got their money’s worth from each one. Mechanized manufacturing had never existed before, and the manufacturing industry had no collective experience on which to base a clear understanding of how to achieve the highest return on investment from this sort of operation. A naive assessment of a mechanized manufacturing operation easily leads to the conclusion that the way to get the most value from the investment in the machines is to run each unit at its maximum capacity all the time. That was the way people thought about it in those days, and many people still think that way today. This mentality is called utilization thinking.

From the point of view of plant operations, utilization thinking leads to local optima at the expense of the global optimum. When we assess the operation of each machine or each step along the production line, we find that each resource is operating at its own peak capacity; it is fully utilized at all times. Based on utilization thinking, this is all good.

However, inventories of unfinished product tend to accumulate between the production steps, since not every machine on the line has exactly the same capacity or can operate uninterrupted for an indefinite time. Production pauses locally for changes in machine set-up, to switch from making one component to another. Machines with higher capacity generate unfinished goods that cannot be consumed at the same rate by machines downstream on the line. Eventually the factory floor becomes clogged with unfinished goods, and the factory manager must resort to transporting inventory between the production floor and warehouses. All this activity reduces the efficiency of the production line as a whole.
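A toy model makes the arithmetic of the problem visible. In the sketch below (the rates are invented for illustration), an upstream machine produces 10 units per hour while the downstream machine can consume only 7; running both at full utilization simply grows the pile of unfinished inventory between them:

    # Toy model: two fully-utilized machines in series with mismatched
    # capacities. The rates are invented for illustration.
    upstream_rate = 10   # units produced per hour
    downstream_rate = 7  # units consumed per hour

    wip = 0  # unfinished inventory sitting between the two machines
    for hour in range(1, 9):
        wip += upstream_rate               # upstream runs flat out
        wip -= min(downstream_rate, wip)   # downstream consumes what it can
        print(f"Hour {hour}: {wip} unfinished units waiting between machines")

    # The queue grows by 3 units every hour: full utilization upstream
    # translates into clogged floor space, not into finished goods.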

Meanwhile, when a downstream machine has higher capacity than an upstream machine, it cannot be operated at full capacity all the time. Rather than worrying about end-to-end production, plant managers often worry about the fact that a particular machine is not running at full capacity, and they solve the wrong problem. An online guide to implementing the Theory of Constraints, written by Dr K.J. Youngman, provides a concise and useful overview of various approaches to shop floor scheduling, including a good description of this problem.

Resources and humans

To make matters worse, by the turn of the 20th century human beings were considered equivalent to machines, or perhaps more accurately, to interchangeable machine parts, with the factory itself as the machine. Building on this assumption, the notion of human beings as “resources” became entrenched in mainstream management thinking very early in the Industrial Age, and was taken for granted by the beginning of the Information Age.

This attitude toward workers was expressed powerfully in the classic 1906 Upton Sinclair novel, The Jungle, and was formalized and codified through the work of Frederick Taylor, most famously with the publication of Principles of Scientific Management in 1911. The following quote from Taylor, cited by David Montgomery in his 1988 book, The Fall of the House of Labor: The Workplace, the State, and American Labor Activism, 1865-1925, illustrates the key underlying assumption of Scientific Management:

“I can say, without the slightest hesitation,” Taylor told a congressional committee, “that the science of handling pig-iron is so great that the man who is … physically able to handle pig-iron and is sufficiently phlegmatic and stupid to choose this for his occupation is rarely able to comprehend the science of handling pig-iron.”

Montgomery also notes:

With the triumph of scientific management, unions would have nothing left to do, and they would have been cleansed of their most evil feature: the restriction of output.

So, the fact that humans did not actually function as machine parts was taken by management not as a symptom of basic reality, but rather as evidence that workers desired to “restrict output” because they were inherently “evil,” not to mention “phlegmatic and stupid.” Thus Taylor was able to write, without irony, in Principles of Scientific Management,

It is only through enforced standardization of methods, enforced adoption of the best implements and working conditions, and enforced cooperation that this faster work can be assured. And the duty of enforcing the adoption of standards and enforcing this cooperation rests with management alone.

From the 1970s through the 1990s, most companies treated information systems workers (and other employees, too) as if they had to be forced to cooperate with management’s goals through compliance with standard methods. Humans were measured on the same basis as other resources (e.g., chairs, telephones, or staplers), with the assumption that any suitably trained human would function in a given role identically to any other suitably trained human, just as any chair will serve as well as any other.

Like machines, the worth of a human was measured in terms of productivity: the quantity of output per unit time. Also like machines, the cost of a human was assumed to be his/her salary plus benefits and facilities overhead, and the best way to recoup that cost was assumed to be keeping him/her running at maximum capacity at all times. This tended to create a local optimization problem analogous to that experienced in physical manufacturing plants. It is also the basis of the preoccupation with tracking individual workers’ hours against pre-assigned tasks, rather than allowing people to use their own brains to achieve the common goals of the company.

Throughput

Eliyahu Goldratt elaborated the Theory of Constraints over the course of a number of years. The evolution of the idea can be traced through the publication of three key books: The Goal (1984), Critical Chain (1997), and Theory of Constraints (1999). The constraint in the Theory of Constraints is anything that hinders a system from achieving its goals. The basic idea is that any system is limited by just a few constraints, and that there is always at least one constraint. Building on the general idea of the “weakest link in a chain,” ToC defines an iterative process improvement technique known as the Five Focusing Steps, which addresses the main constraint in a process, whatever it may be at any given time.
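As a rough, runnable sketch of how the Five Focusing Steps iterate, consider the toy pipeline below. The stage capacities, the target, and the size of each improvement are invented for illustration; in a real organization, steps 2 through 4 are management decisions, not arithmetic:

    # A minimal sketch of the Five Focusing Steps applied to a toy pipeline.
    # Stage capacities, the target, and the improvement increments are
    # invented assumptions, not data from any real process.
    stages = {"analysis": 9, "development": 4, "testing": 6}  # units/week
    target_throughput = 8

    while min(stages.values()) < target_throughput:
        # 1. Identify the constraint: the stage with the least capacity.
        constraint = min(stages, key=stages.get)
        # 2. Exploit the constraint: squeeze out waste before spending money
        #    (modeled here as a small free gain).
        stages[constraint] += 1
        # 3. Subordinate everything else: release work no faster than the
        #    constraint can absorb it (implicit in this toy model).
        # 4. Elevate the constraint: invest in capacity if it still limits us.
        if stages[constraint] < target_throughput:
            stages[constraint] += 2
        # 5. Repeat: the constraint may have moved, so identify it again.
        print(f"Improved {constraint}; capacities are now {stages}")

    print(f"System throughput is now {min(stages.values())} units per week")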

When the system in question is a for-profit business enterprise, then the goal is to earn a profit. Profit is earned by providing a product or service that a customer deems valuable, and will pay for. The system generates value units, which are units of currency in a for-profit enterprise, and may be some other unit of measure for non-profit enterprises and government agencies. The rate of production of value units is called throughput.
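In its simplest form, throughput is just a rate, as in the following sketch (the figures are invented):

    # Throughput = value units delivered per unit time. Figures are invented.
    value_units_delivered = 12   # e.g., billable features shipped this quarter
    elapsed_weeks = 13           # length of the measurement period

    throughput = value_units_delivered / elapsed_weeks
    print(f"Throughput: {throughput:.2f} value units per week")  # about 0.92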

The Theory of Constraints represents an evolutionary progression of thought and application that has several lines of ancestry. Probably the main line of descent is the production system developed at Toyota in Japan. Company founders Sakichi and Kiichiro Toyoda, along with engineer Taiichi Ohno, developed an approach they called just-in-time production in the late 1940s. Their ideas received a significant boost in 1950 when the American consultant W. Edwards Deming began to work with business leaders in Japan. Building on those foundations, Ohno, along with Shigeo Shingo and Eiji Toyoda, developed the Toyota Production System (TPS), which reached its final form around 1975. TPS became the basis of Lean Manufacturing, currently the most popular trend in the manufacturing sector.

In adapting these ideas to the field of software development, we usually find ourselves working with only a subset of the overall end-to-end delivery process of a company. We are concerned with the portion of the process that involves creating software. Thus, the value unit is some unit of measure of delivered software, and throughput is the rate at which the software development process delivers those units.

TPS and Lean Manufacturing, along with the broader umbrella term Lean Thinking, address both the mechanics of a process and the human element, and they differ from Scientific Management in their assumptions in both areas. On the mechanical side, Lean Thinking focuses on global optimization of the end-to-end delivery process, rather than on the utilization of individual resources. On the human side, Lean Thinking explicitly calls for respect for people and for teamwork. Rather than assuming workers want to “restrict output” because they are “evil,” the assumption is that workers share the company owners’ business-related goals: to build the success of the company by maximizing throughput. Thus, with Lean Thinking we have both a focus on efficient mechanics and practical leverage of people’s innate desire to do good work, add value, and produce useful things.

Beyond the conceptual barrier

When people apply utilization thinking, they tend to overlook the problems that lead to low throughput because they are focused on individual resource utilization, on limiting employees’ activities to those allowed in their formally-defined job roles, and on tracking the time each employee spends on externally-assigned tasks. People feel as if they must remain visibly busy at all times, even when that means working on things that do not contribute to throughput. People feel they are not allowed to do anything outside the boundaries of their defined roles, and that there is no place in the organization for their innate creativity.

When people apply throughput thinking, they tend to see the problems that reduce delivery effectiveness because they are mindfully looking for those problems. Many of the details of Lean Thinking become obvious to them, and no longer require lengthy explanation. When a person sees a problem, whatever his/her role or position on the organization chart, he/she feels empowered to solve it, and to involve whomever else in the organization has the skills and tools to help solve it.

Throughput thinking helps people see time in a different way. They begin to look for points in the process where work is delayed, blocked, or uneven, or where the same piece of work has to be redone. These things become much more important than worrying about whether John Smith spent exactly 6.25 hours on task #125.46(a) last Tuesday. They shift from Cost Accounting to Throughput Accounting, which gives them a much better idea of how well they are achieving the true goals of the enterprise. They start to rely on metrics that track the outcomes achieved by the process rather than how busy individual resources are: metrics like throughput, cycle time, lead time, and process cycle efficiency, illustrated in the sketch below. They begin to take advantage of the rarely tapped creativity of their co-workers and the power of genuine teamwork, rather than locking people into narrowly-defined functional silos and trying to march them in lock-step along a static path.
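To make those metrics concrete, here is one way they might be computed from work-item timestamps. The dates below are invented, and treating cycle time as a stand-in for value-adding time is a simplifying assumption:

    # Hypothetical flow metrics from work-item timestamps. Dates are invented.
    from datetime import date

    # (requested, started, finished) for each completed work item
    items = [
        (date(2024, 3, 1), date(2024, 3, 8),  date(2024, 3, 12)),
        (date(2024, 3, 2), date(2024, 3, 10), date(2024, 3, 15)),
        (date(2024, 3, 5), date(2024, 3, 11), date(2024, 3, 18)),
    ]

    lead_times  = [(done - requested).days for requested, _, done in items]
    cycle_times = [(done - started).days for _, started, done in items]

    # Process cycle efficiency: value-adding time as a share of total lead
    # time. Cycle time stands in for value-adding time, a simplification.
    pce = sum(cycle_times) / sum(lead_times)

    print(f"Average lead time:  {sum(lead_times) / len(items):.1f} days")
    print(f"Average cycle time: {sum(cycle_times) / len(items):.1f} days")
    print(f"Process cycle efficiency: {pce:.0%}")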

Utilization thinking makes all these things much more difficult to implement, because they never feel quite right. A critical success factor, then, is to break through the conceptual barrier to throughput thinking, leaving utilization thinking behind, back in the era of Frederick Taylor and P.T. Barnum where it belongs.
