“For every complex problem there is an answer that is clear, simple, and wrong.”
(H.L. Mencken)
Project managers and surgeons call it a “post-mortem”. The Navy SEALs conduct “after-action reviews”. Software engineers have their “retrospectives”. The concept behind all of these terms is the same: take a hard look at what happened in the past to capture learnings for the future. Surgeons and project managers examine corpses to determine the mode of death or cause of failure. SEALs always reflect, regardless of outcome. Failure has no monopoly on learnings; analyzing success can be educational too. (They are especially hard-nosed about instances of luck determining the outcome… luck is definitely not their idea of a strategy. We’ll soon get back to this.) Software engineers embrace the same idea of conducting reviews in all circumstances. The topic is covered in the 12 principles of the Agile Manifesto:
“At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.”
If it works for surgeons, SEALs and software engineers, there is no reason it wouldn’t be useful in any other field of knowledge work. (And yes, I consider the SEAL-business knowledge work.) As the saying goes: “Good judgment comes from experience, and experience comes from bad judgment.” Capturing learnings from previous events makes obvious sense. For most people, the sense of urgency to do this will be higher when staring at the smoldering remains of a crash than when spraying champagne around on the highest step of the podium. But good times or bad, as simple as the retrospective sounds in theory, it can be surprisingly hard to execute with high quality in practice. In the rest of this essay we’ll analyze what makes it hard and how we can address these obstacles. It is worth the effort: excellence at retrospectives is an organizational superpower.
Generally speaking, the first step in a retrospective is identifying the thing (or things) you don’t like about a recent project or process. The second step is figuring out what would need to be true for those bad things to never happen again. And the third step is taking those actions. Each of these steps is simple, but none of them are easy.
Even just getting people to take the first step – identifying problems – takes deliberate managerial effort and energy. If a project ended successfully, nobody will have much enthusiasm to look for things to dislike. “Can’t we just relax and celebrate our success for once? All’s well that ends well, right?” Well… no. Not in a professional environment. It is incredibly important to distinguish good outcome from good process. Michael Mauboussin has the following wonderful table in his book “The Success Equation”.
|              | Good Outcome     | Bad Outcome    |
|--------------|------------------|----------------|
| Good Process | Deserved Success | Bad Break      |
| Bad Process  | Dumb Luck        | Poetic Justice |
I can’t think of a better illustration of the difference between professionals and amateurs. Professionals are not content with a single good outcome; they want to deliver an ongoing series of reproducible and – to the extent possible – predictable and controllable good outcomes. That requires the kind of methodical, long-term-oriented, marginal-gains-focused approach that nobody wants to hear or talk about. When outcomes are good, analyzing process will feel pointless and boring.
When the project ended in tears, people may have even less appetite to reflect. “Time to move on, we can’t change the past …” If analyzing success seems pointless, analyzing failure is painful and potentially embarrassing. This is less the case in organizations with an established retrospective routine, where it is just the way things are done and nobody questions it any more. We’ll see later on how an organization can get to that state, but until then it will take a determined and gritty leader with a lot of energy to start implementing a good retrospective practice.
Which brings us to another issue with the problem identification step. Unless the subject matter is rather trivial, writing down a precise problem statement and identifying the root cause (or causes) is hard. The difficulty lies not so much in the techniques to be followed. There are plenty of descriptions and trainings out there on, for example, the “5 whys”, originally developed at Toyota and popular with professionals in the quality field. By asking and answering “why” a number of times, the retrospective team progressively drills down to the deeper underlying root causes of an issue. It is not the technique, but the nature of the problem itself that often makes the retrospective difficult. This is from an InfoQ interview[1] with University of North Texas professor John Turner:
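The mechanics of the “5 whys” are indeed simple enough to capture in a few lines. As a toy sketch (the incident and the answers below are invented for illustration, not taken from any real retrospective), each successive “why” pairs the previous answer with a deeper cause, and the last answer is treated as the candidate root cause:

```python
def five_whys(incident, answers):
    """Record a '5 whys' drill-down as a chain of (question, answer) pairs.

    Each answer becomes the subject of the next 'why?'. The final answer
    is the candidate root cause to be validated by the retrospective team.
    """
    chain = []
    cause = incident
    for answer in answers:
        chain.append((f"Why: {cause}", answer))
        cause = answer
    return chain

# Invented example of a drill-down:
chain = five_whys(
    "The release was shipped two weeks late",
    [
        "Integration testing started late",
        "The test environment was unavailable",
        "Nobody owned the environment's maintenance",
        "Ownership was never assigned after the team reorg",
        "The reorg plan did not cover shared infrastructure",
    ],
)
root_cause = chain[-1][1]
```

Note how the candidate root cause – a gap in the reorg plan – sits far from the visible symptom of a late release; that distance is exactly what the repeated “why” is meant to cover.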
“The literature has identified three types of problems: simple, complex, and wicked. Churchman initially identified these types of problems in 1967. Simple problems have a consensus on the problem and solution; complex problems involve agreement on the problem, but not on the technique for solving the problem. Wicked problems include those in which there is no consensus on the problem or the solution.”
Most real-life retrospective situations will involve a complex or wicked problem. In “The bottleneck” we already encountered examples of unexpected effects emerging from the interaction between the different parts of a workflow, even a very simple one. Local problems, falling cleanly within the boundaries of one team, are usually simple problems: easy to spot and resolve. As a result, these are rarely the topic of a thorough retrospective. Retrospectives on complex problems won’t work unless we analyze the entire end-to-end system, so the retrospecting group needs representation from all teams in the system or workflow. This group should then define the problem statement in terms of the goals and output of the entire system. But we know understanding the issues at system level is very hard, and therefore defining a good problem statement is also hard – a friend of mine says having a good problem statement is 80% of the job. That may sound strange, but each individual team will come in with a different view of the problem: that of its local team context, not the system perspective. Combining these into the system view will gradually create the consensus on the problem statement which Turner refers to in the quote. And once there is agreement on the problem, the team can (finally) begin to drill down to the underlying root cause or causes.
As we saw in Turner’s quote, this consensus is far from guaranteed. When the team has a so-called wicked problem on its hands, that is not just an extra-special, even harder version of a complex problem. The earlier reference to Churchman relates to an editorial he wrote for Management Science:
“(…) that the term ‘wicked problem’ refer to that class of social system problems which are ill-formulated, where the information is confusing, where there are many clients and decision makers with conflicting values, and where the ramifications in the whole system are thoroughly confusing.”
With wicked problems, something always stands in the way of a complete and final solution. The social sciences are a rich source of examples, issues such as obesity, drug trafficking, climate change, smoking or abortion. It’s impossible to get everyone agreeing on a precise description of the problem, let alone find the root cause or come up with solutions. If there is some solution at all, at best it will be temporary. The dynamic between participants is not a matter of limited local context, but of different value systems and opinions. These differences can’t be resolved by taking a system-wide view. Some will argue that smoking is a harmful activity potentially forcing a health care burden onto society, others will argue any individual should be free to make their own choices. Such discussions can never be resolved in a common problem description, because the participants reason from different conceptual frameworks. One participant’s proposed solution may be another participant’s problem definition!
But even if a wicked problem can’t be fully solved, that doesn’t mean a retrospective is without value. If an organization feels the need for a retrospective, that usually implies there is a crisis of some sort. Going into the retrospective analysis, that organization won’t necessarily realize yet the problem is wicked in the first place. Attempts to create consensus on a joint problem statement will gradually expose the different perceptions and opinions. That can generate broad and shared awareness, not of the system-wide view but of the different mental frameworks. Such understanding is valuable in its own right. And even if the wicked problem can’t be solved once and for all, the retrospecting team may agree on some actions to tame it a little and make modest advances[2].
These inherent difficulties we’ve seen – the nature of the problem itself and the need to build a system-wide view – are enough to make retrospectives a challenging undertaking. And we haven’t even touched on the final, make-or-break factor yet: trust. Participants have to feel safe, personally and in their position within the organization, to speak their mind freely and discuss. Frankly speaking, the required level of psychological safety is a bar the majority of organizations won’t clear. Companies with a policy of punishing mistakes or adversity can save themselves the trouble of organizing retrospectives; their incentive structure is incompatible with a culture of shared learning. And even if company culture is sufficiently mature, it will still require a mature and strong leader to keep the two no-nos – finger-pointing and the other side of the coin, silent or defensive behavior – out of the retrospective.
That is hard, but it is not impossible[3]. And it is organizational muscle that can be trained: the more “no blame retrospectives” are conducted, the easier they will get. When people experience that they aren’t singled out or punished for things that, let’s be honest, happen everywhere and even to the best of us, they will feel increasingly comfortable candidly owning up to what they could have done better. For the facilitator of a retrospective, a good starting point is reminding all participants that nobody comes to the office with the intention to do a bad job or piss off their colleagues. As we’ve already seen, many problems or apparent mistakes are issues emerging at the system level. So even in cases where the root cause appears to be human error, further digging will usually reveal deeper issues. Was the team understaffed, untrained, unaware of important information, forced to cut corners to meet a deadline…? Just identifying who’s at fault is probably wrong (by which I mean intellectually wrong, not morally wrong), or at a minimum a little simplistic, and in any case not a very useful learning. In successful retrospectives, the focus is constantly on the learnings, not the individual specifics. I’ve worked with a team that celebrated the most “glorious failure” at every quarterly retrospective of their Objectives and Key Results (OKRs)[4]. These people were not only veritable retrospective machines, they were also unafraid to try new things. Try getting this mentality – we’ll deliver something great or we’ll learn something really interesting – from an organization with a blame-seeking culture.
So yes, obviously none of this means there won’t be consequences when people structurally fall short of standards, aren’t willing to embrace and act on learnings, or otherwise demonstrate structurally inadequate behavior. We’re not all flower-power here, and nothing says the actions coming out of the retrospective can’t include retraining, reassignment or career reorientation. But for your garden variety incident or incremental improvement opportunity, the wiser approach is to give your knowledge workers the benefit of the doubt and lead them in teasing out some system-wide confluence of factors. The blameless retrospective can help address complex problems with a solution that is imprecise, strenuous – and useful.
[1] https://www.infoq.com/articles/flow-system-leadership-complex-problems/
[2] Terms introduced by business thinker Gary Hamel
[3] But it is hard, even for grizzled veterans… see for example https://dzone.com/articles/the-myth-of-the-blameless-retrospective
[4] https://en.wikipedia.org/wiki/OKR