“Simplicity – the art of maximizing the amount of work not done – is essential.”
(The Agile Manifesto)
Once upon a time I worked with a finance department where the mantra was: “First Time Right”. The CFO had no tolerance for half-baked work and wanted every task to be executed thoroughly and completely. Submitting work with (even slight) imperfections or inadequacies was a cardinal sin. But depending on the nature of the work, “first time right” (which I’ll refer to as FTR in the rest of this text) is either great advice or the worst possible advice.
FTR naturally appeals to many professionals in the finance field. Activities such as data entry, financial modelling or the timely publication of financial results require precision and attention to detail; it’s hard to see a financial career taking off for a young professional who doesn’t possess those traits. When the consequences of errors are severe, the FTR mentality is entirely appropriate. That’s also the case for highly standardized (and automated) processes: there is no fundamental reason why the output wouldn’t be correct and final at the first attempt, and it avoids the need for rework, one of the seven forms of waste in Lean. And speaking of Lean: finance people are not alone – many other knowledge workers have the mass manufacturing of physical goods as their mental model of how work is supposed to work. Production in a plant using big machines is how we think about work when we are kids. It’s the example our teachers use, or what we see on the evening news – actual moving equipment makes for better television than a person just sitting at a desk. By the time they enter the workplace, this is how most knowledge workers have been primed. Standardization, efficiency, low defect rates, benefits of scale (we’ll get back to that one!) all seem inherently worth pursuing. So all in all, it is understandable that many knowledge workers have an FTR bias, with the best of intentions. Sometimes different words, like “operational excellence”, are used. And yet, even though FTR has its place in knowledge work, in many cases it is a very detrimental mindset.
A first problem – a very mild one, to be sure – is that all that polishing and perfecting comes at a cost. That much is easy to understand for finance folks: rework is wasteful, but then again so are overengineered solutions. Still, that’s just a relatively minor productivity loss. The real trouble begins when the work is less standardized, the expected output less well defined, or the workflows complex or interdependent with other processes. Knowledge work spans an enormous spectrum of activities and outputs. There’s a world of difference between ‘resolve the issues with these 50 blocked supplier invoices’ and ‘suggest five promising candidate molecules to treat this medical condition’. For the standardizable stuff, a mostly linear progression of actions will work: state the problem, scope out the solution, put in the work, done. This is FTR territory. But there’s an entirely different range of activities without upfront clarity, where the work requires gradual discovery of the problems, exploration of the various options and solutions, and analysis of their trade-offs. Obviously business issues don’t pop up with a neat label specifying where they sit on this spectrum, and it’s usually the ones in the murky middle that cause the most confusion. Personally I’d treat an assignment to ‘figure out the best strategy to win this legal procedure’ as a case for discovery and exploration, but I’m sure there are more than a few people who would simply try to first-time-right it.
OK, let’s say we’re convinced a project needs discovery and exploration… And by the way: in terms of value creation, this is where most of the knowledge work opportunity is. It is also where software engineers shine: creating value in a highly uncertain, complex process is what they do, so which of their ideas can we steal and reapply in other areas? The software engineering logic works as follows. If there is uncertainty or ambiguity, the critical requirement is to learn as fast as possible. In order to learn as fast as possible, we need (customer and stakeholder) feedback as fast as possible. And to kick-start that feedback loop, we’ll put something “commentable” in front of them as early as we can. (The name software engineers have for this something is “minimum viable product” or MVP. Don’t confuse that with an FTR product: it is not the end but just the beginning. It needs to be just good enough for the user to say something meaningful about it, nothing more!) Based on the MVP feedback, the team will iterate and crank out another version, again with the intent to harvest feedback. And so on. If this process of iteration works well, the team will learn about the problem. But more often than not, the end customer will also learn (and change their mind) about what it is they want as a solution. Contrast this with a more traditional approach… The customer spells out, often in considerable detail, the specification and features of what they think they want. The software team is asked to commit to deliver, on time and within budget. Only to hear from the customer at delivery that the outcome is not really what they want after all. This is so common it’s a business cliché – and yet, the vast majority of companies and managers will still insist on getting it… first time right.
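The build-harvest-iterate loop described above can be sketched in a few lines of Python. This is a toy of my own, not anything from the essay: `collect_feedback` stands in for actually putting an increment in front of customers, and its canned responses are invented for illustration.

```python
def collect_feedback(version: int) -> list[str]:
    """Stand-in for showing an increment to customers; returns change requests.
    The canned responses below are invented for illustration."""
    requests = {1: ["simplify the input form"], 2: ["add an export button"], 3: []}
    return requests.get(version, [])

def iterate_until_accepted(max_iterations: int = 10) -> int:
    """Ship a commentable increment, harvest feedback, rework, repeat."""
    version = 1
    while version <= max_iterations:
        feedback = collect_feedback(version)  # fast feedback, not final sign-off
        if not feedback:                      # nothing left to change: done
            return version
        version += 1                          # fold the feedback into the next increment
    return version

print(iterate_until_accepted())  # → 3
```

The point is not the code but the control flow: the loop terminates on customer feedback, not on a specification written up front.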
If you were reading the previous paragraphs with a growing suspicion this is just a rewrite of “Don’t grade your own homework”, which discusses the importance of feedback, you’re not wrong. The different point this essay tries to make is the importance of speed. Getting feedback to flow at all is good, but getting it fast and frequently is much, much better. Software engineers know a second trick of the trade to make this happen. That trick is small batch size. Batch size, in this context, means the amount of work shuttling through the workflow between different teams. To understand why small batch size is beneficial, we need to get back to discussing scale, as I promised you.
90% of the time, “scale” is used as part of the phrase “economies of scale”. We think of economies of scale as a good thing, again because we are primed to use physical manufacturing as our mental model for work. Producing at scale makes sense: the basic idea is to interrupt production of your widgets as little as possible. Other than equipment malfunction or maintenance, the only reason production needs to stop is to reconfigure the production assets to produce a product with a different specification. The assets earn nothing while they’re not running, so if you can minimize the reconfiguration time, unit economics will improve. Nice, long, uninterrupted production runs are good – they give us economies of scale. To a certain extent, this idea of continued, unchanged production applies to knowledge work too. The reconfiguration of knowledge workers from one task to another also leads to a loss in productivity. This is mostly due to context switching: it takes mental energy and adjustment time for the brain to get into a different task, a feeling I’m sure you’ve personally experienced. Too much context switching – multitasking – is not good, so it would appear knowledge workers need to organize their work in large, uniform batches.
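The unit economics of long production runs are easy to make concrete. Here is a toy calculation of my own – the production rate, changeover time and shift length are invented numbers, not from the text – assuming a machine that must stop for a fixed changeover between batches:

```python
def units_per_day(batch_size: int, rate_per_hour: float = 100.0,
                  changeover_hours: float = 2.0, hours_per_day: float = 16.0) -> float:
    """Daily throughput of a machine that stops `changeover_hours` between batches.
    Larger batches amortize the changeover over more units: economies of scale."""
    hours_per_batch = batch_size / rate_per_hour + changeover_hours
    batches_per_day = hours_per_day / hours_per_batch
    return batches_per_day * batch_size

print(round(units_per_day(100)), round(units_per_day(1000)))  # → 533 1333
```

Growing the batch ten-fold here more than doubles daily output, which is exactly the pull toward long, uninterrupted runs described above.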
But what is true for individual knowledge workers is not necessarily true for a complex workflow, and definitely not for gradual-discovery-and-reiteration work. Bigger batches take longer to create and move through the system, delaying the feedback loop. And once the feedback comes in, a bigger batch size also means more rework. Rework is an inherent part of the process: some of it is desirable – turning customer feedback into valued product components – and some of it is unavoidable – such as fixing bugs. Testing can also be seen in this light: it’s just another feedback loop, only from us to us instead of from customer to us. Bugs are harder to find in a bigger chunk of code, and developers won’t remember its purpose and context any more if it’s been a while since they worked on it. Furthermore, bigger batches consume more resources each time they are run through the testing protocols after rework – and more importantly, it will take more time before the product increments get into customer hands. These are incredibly powerful ideas, and they translate easily from software engineering to other types of complex and creative knowledge work, wherever feedback loops are critical to the end result. In all these cases, we need to design for economies of speed instead of economies of scale.
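The cost of delayed feedback can be made just as concrete as the economies of scale. Another deterministic toy model of my own (none of the numbers come from the essay): assume the customer’s needs shift every seven work items, and the team only finds out about a shift when a batch ships and feedback arrives.

```python
def wasted_work(total_items: int, batch_size: int, shift_every: int = 7) -> int:
    """Toy model: customer needs shift every `shift_every` items; the team only
    finds out at the next feedback point (end of a batch). Items built between
    the first unnoticed shift and that feedback point count as waste."""
    wasted = 0
    stale_since = None  # item number of the first shift the team hasn't heard about
    for item in range(1, total_items + 1):
        if item % shift_every == 0 and stale_since is None:
            stale_since = item        # direction changed; team doesn't know yet
        if item % batch_size == 0:    # batch ships, feedback corrects course
            if stale_since is not None:
                wasted += item - stale_since
            stale_since = None
    return wasted

print(wasted_work(70, 70), wasted_work(70, 10), wasted_work(70, 1))  # → 63 42 0
```

Same amount of work, same rate of change in customer needs; only the batch size differs, and with it the amount of rework.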
Proponents of these Agile principles are convinced they are the secret to breaking through the old paradigm that for a project, “out of cheap, fast, and high quality, you can get two but never all three.” The intuitive (but wrong) assumption behind that old saying is that the amount of work is largely fixed regardless of how you organize for it, and that the best way to get faster or better is to throw more resources at the problem. This has been thoroughly debunked in software engineering, enshrined slightly tongue-in-cheek as Brooks’ Law: “Adding manpower to a late software project makes it later.” For complex, creative knowledge work we can indeed conclude that setting up small-batch, high-velocity workflows is the key to achieving all three. Cheap, because it minimizes investment in low-value features and avoids large amounts of rework. Fast, because it puts product in the hands of the customer early on. And high quality, because mistakes and unnecessary features are detected early and frequently.
In conclusion, there’s certainly a place for first time right and economies of scale in knowledge work, but these approaches should be limited to highly standardized, low-variation workflows. In those cases, the analogy with manufacturing works. It is a recipe for disaster, however, in cases of high variability, substantial uncertainty, or innovation and creativity. For those we don’t need a production machine but a learning machine, consisting of fast and frequent feedback loops on small, incremental batches of work.