The Law of Unintended Consequences

There is clearly a benefit from improving organisational alignment, so it is reasonable for us to want to do something about it. But we also know that more often than not, what we do doesn’t seem to have the desired effect – or the desired effect is short-lived. It seems as if we understand that outcomes are unpredictable, but we tend to act as if this were not true. So we seem to be saying one thing, but our actions suggest otherwise – a great example of the difficulty involved in double-loop learning!

As part of our exploration, we have identified some characteristics of organisations that make it difficult to achieve lasting change: complicatedness; complexity; openness; human interpretation and human intent.

The concept itself isn’t new. The idea of unpredictability in social environments was formally described almost a century ago by the sociologist Robert K. Merton, then at Harvard University, who popularised the Law of Unanticipated Consequences. Merton identified five causes of, or factors resulting in, unanticipated consequences. The first is ignorance, or lack of sufficient information. We don’t know everything about the organisational context in which we wish to intervene, not only because we haven’t invested the time and effort, but because there is much that is unknowable. We are nevertheless required to act based on the information we have, i.e. out of bounded rationality. The danger is that we stop short of gathering the information that is necessary and available given the resources we have, and thus act out of avoidable ignorance. This often happens because we think we know more than we actually do, or because we believe information is valid when it isn’t.

Next, Merton identified errors as a possible factor: in our assessment of the situation; in determining the appropriate end-state; in identifying the most appropriate intervention; and in executing the intervention. A common default is to assume that what has worked in the past, or elsewhere in our experience, or indeed in someone else’s, will also work in our situation. The third factor he listed is the ‘imperious immediacy of interest’, or the urgency to act. There are two risks associated with hasty action: we may act in our own interest rather than that of the broader group, and it is easier under these circumstances to fall prey to emotional biases. Fourth, he considered acting on our values, i.e. doing something we know to be aligned with what we consider important to us, even if rationality suggests otherwise. An example of this is US President Kennedy’s moon-shot goal, which has continued to deliver benefits that no one anticipated at the time it was articulated. One wonders what might have happened had the President instead chosen the abolition of fossil fuels.

The fifth factor looks very much like values-based decision-making, but involves self-fulfilling prophecies. What we say influences our perception of reality. Once we have articulated a goal, or a concern about a certain consequence, that articulation can influence the likelihood of the event occurring. It is then no longer possible to determine what might have happened had we remained silent. And this serves as a good segue into an example that illustrates unanticipated consequences and the difficulty of separating cause and effect. In 1973 Black, Scholes and Merton (the other Merton’s son) proposed a model for calculating the price of options, work that later earned Scholes and Merton the Nobel Memorial Prize in Economics (Black had died by then). Within a couple of years, the model was able to explain the pricing of the vast majority of options traded on the exchange. What is not clear is whether the model did such a great job of explaining what was happening in the market, or whether everyone in the industry simply started using the model to calculate option prices.
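For the curious, the model’s closed-form price for a European call option can be sketched in a few lines of Python. This is a minimal illustration with made-up inputs (the 100/0.05/0.2 figures are arbitrary, chosen only for the example), assuming constant volatility and interest rate – not investment-grade code:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S: float, K: float, r: float, sigma: float, T: float) -> float:
    """Black-Scholes price of a European call option.

    S: current price of the underlying asset
    K: strike price
    r: continuously compounded risk-free rate (per year)
    sigma: volatility of the underlying (per year)
    T: time to expiry (in years)
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative inputs: an at-the-money call with one year to expiry
price = black_scholes_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0)
print(round(price, 2))  # roughly 10.45
```

The point of the example is how mechanical the calculation is: once traders have such a formula, quoted prices can converge on it, which is exactly why cause and effect become hard to separate.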

Another example relates to the unanticipated, dysfunctional effects of pay-for-performance. We have previously looked at what can happen when senior executives are measured, and remunerated, on the basis of market capitalisation. This practice arguably does organisations more harm than good. The effect is not limited to senior executives: salespeople remunerated on the basis of individual performance tend to be less cooperative with their colleagues than those rewarded for the performance of the group.

In 1990 the use of helmets by bicycle riders was mandated in the state of Victoria in Australia, based on analysis showing that helmets reduced the risk of head or brain injury by as much as two-thirds. The total number of head injuries did indeed go down, but so did the number of cyclists, largely because many younger riders seemed to find helmets unfashionable. The incidence of injuries per cyclist actually increased, possibly because of another unintended consequence – risk compensation, i.e. cyclists taking more risks because they now felt better protected than before.

In closing, a quick aside relating to the gap between our knowledge of unpredictability in organisations and the way we behave, i.e. between what we know and what we do. How do we manage this gap? One way is to make light of the knowledge. Possibly in that spirit, the Law of Unintended Consequences is often placed in the same category as Murphy’s Law, which of course states that if anything can go wrong, it will. Perhaps it makes it easier for us to ignore the former if we treat it with the same levity as the latter.

Let me know what you think.


If you are interested in learning more about organisational alignment, how misalignment can arise and what you can do about it, join the community. Along the way, I’ll share some tools and frameworks that might help you improve alignment in your organisation.