For thirty years, companies have been optimising themselves to run perfectly. The result is organisations that fail completely.
In February 2021, Winter Storm Uri swept across the southern United States and killed over 240 people, the majority of them in Texas. The primary cause was not the storm itself, which was severe but not unprecedented. It was the failure of the Texas power grid — a failure that left millions without heat for days, at temperatures that dropped below -20°C in some areas.
The Texas grid had been optimised for efficiency. Winterisation — the process of hardening power generation equipment against cold temperatures — costs money. Texas winters are, historically, mild. The expected value calculation, run by market participants operating under the state’s deregulated energy framework, consistently returned the same answer: winterisation is not worth the investment. The grid ran lean. It failed catastrophically.
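The shape of that calculation can be made concrete. The sketch below is a deliberately simplified expected-value comparison; every probability and dollar figure in it is an invented illustration, not an estimate of actual Texas grid economics:

```python
# Hypothetical expected-value comparison for winterisation.
# All probabilities and dollar figures are invented for illustration.

def expected_annual_cost(p_freeze: float, outage_loss: float,
                         hardening_cost: float) -> tuple[float, float]:
    """Return (expected annual cost without hardening, cost with hardening),
    assuming hardening fully prevents the outage loss."""
    return (p_freeze * outage_loss, hardening_cost)

# At a 1-in-50-year severe freeze, a $5m/year hardening spend looks wasteful:
lean, hardened = expected_annual_cost(p_freeze=0.02, outage_loss=100e6,
                                      hardening_cost=5e6)
print(lean < hardened)   # the lean choice "wins" on expected value

# If severe freezes are really 1-in-10-year events, the ranking flips:
lean, hardened = expected_annual_cost(p_freeze=0.10, outage_loss=100e6,
                                      hardening_cost=5e6)
print(lean > hardened)   # hardening now "wins"
```

The point of the toy model is that the answer depends entirely on a tail probability nobody can observe directly; the arithmetic is trivial, and the inputs are where the doctrine hides.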
This is not a story about Texas. It is a story about a doctrine that has quietly come to govern how most large organisations in the developed world are managed — and what happens when that doctrine meets a world that has become, structurally, less predictable.
THE EFFICIENCY DOCTRINE
The intellectual foundations of modern operational management were laid, largely, in the 1980s and 1990s. Just-in-time manufacturing, pioneered at Toyota and adopted globally, eliminated inventory slack by synchronising production precisely with demand. Lean management systematically identified and removed waste. Shareholder value maximisation, articulated by Milton Friedman and operationalised by a generation of business school graduates, made efficiency not just a virtue but an obligation.
The results were, for a long time, extraordinary. The three decades between roughly 1990 and 2020 saw global supply chains achieve a level of optimisation that would have seemed implausible to an earlier generation of managers. A consumer electronics company could design a product in California, source components from twelve countries across Asia, assemble it in China, and deliver it to a customer in Europe within days — all at a price point that made the product broadly accessible. The system was, by any narrow definition of the word, efficient.
Efficiency is a property of a system that is working. Resilience is a property of a system that survives when it stops.
Then, in early 2020, a respiratory virus began disrupting freight flows out of Wuhan. Within weeks, the global supply chain revealed itself to be not a robust system with some inefficiencies, but a fragile system that had been mistaking an absence of disruption for strength.
THE REDUNDANCY PROBLEM
Redundancy is the enemy of efficiency. A warehouse full of safety stock is capital that is not working. A supplier relationship maintained as a backup when a cheaper primary supplier exists is money left on the table. A process with built-in slack is, by definition, not running at capacity.
These are the calculations that drove thirty years of supply chain optimisation. They are not wrong calculations. They are correct calculations under a specific set of assumptions: that the operating environment is stable, that disruptions are rare and short, and that the cost of failure — when it comes — can be absorbed.
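How assumption-dependent those calculations are is easiest to see by simulating them. The toy model below, with invented parameters throughout, compares a lean supply line carrying no safety stock against a buffered one, under occasional multi-period supplier outages:

```python
import random

def service_level(safety_stock: int, p_disrupt: float, outage_len: int,
                  periods: int = 10_000, seed: int = 0) -> float:
    """Fraction of periods in which a demand of one unit is met.

    Each period the supplier either replenishes stock up to
    safety_stock + 1, or (with probability p_disrupt) goes down
    for outage_len consecutive periods. Parameters are illustrative.
    """
    rng = random.Random(seed)
    stock = safety_stock + 1
    down = 0
    met = 0
    for _ in range(periods):
        if down > 0:
            down -= 1                 # supplier still down: no replenishment
        elif rng.random() < p_disrupt:
            down = outage_len - 1     # an outage starts this period
        else:
            stock = safety_stock + 1  # supplier up: top the buffer back up
        if stock > 0:                 # serve this period's one unit of demand
            stock -= 1
            met += 1
    return met / periods
```

When disruptions are rare and short, the lean line serves almost all demand and the buffer looks like dead capital. Make outages more frequent or longer than the buffer, and the lean line's service level collapses while the buffered one holds — the same code, different assumptions, opposite verdicts.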
The last decade has challenged each of those assumptions in sequence. The 2011 Tōhoku earthquake and tsunami disrupted the global automotive supply chain for six months, revealing that single-source dependencies on specific Japanese manufacturers had been embedded invisibly throughout the industry. The 2017 NotPetya cyberattack cost the shipping company Maersk an estimated $300 million in ten days, after malware spread through a software update mechanism to systems that had been connected in the name of operational efficiency. The COVID-19 pandemic disrupted almost every node in global supply chains simultaneously, and for a sustained period. The 2021 blockage of the Suez Canal by a single container ship, the Ever Given, held up an estimated $9.6 billion of trade a day for six days.
The system had been engineered to be efficient. Nobody had engineered it to recover.
WHAT RESILIENCE ACTUALLY REQUIRES
The concept of organisational resilience has attracted significant academic and consultancy attention in recent years, often without much precision. Resilience has become, like agility and innovation before it, a word that is used frequently enough to mean almost nothing.
But the underlying concept is precise, and the research behind it is specific. Resilience — the ability to absorb disruption and return to function — requires, at a structural level, things that efficiency explicitly removes. Redundancy: duplicate systems, backup suppliers, reserve capacity. Modularity: the ability to isolate failures so they do not propagate. Diversity: multiple approaches to the same function, so that a weakness in one is not a weakness in all. Slack: time, money, and attention not fully allocated to immediate production.
These are not the properties of an optimised system. They are properties of a system designed to survive when optimisation breaks down.
The most efficient version of a system and the most resilient version of a system are not the same system. We have spent thirty years building the former and calling it good management.
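The arithmetic behind two of those properties, redundancy and diversity, is simple enough to write down. In the toy model below (invented parameters again), k backup suppliers only help against independent failures; a common shock, a weakness shared by every supplier at once, bypasses all of them, which is why diversity matters as much as duplication:

```python
# Toy failure model with invented parameters. With k suppliers that fail
# independently with probability p_each, the function is lost only if
# every one fails. A common shock (the same software, region, or
# sub-supplier behind all of them) defeats redundancy regardless of k.

def p_total_failure(p_each: float, k: int, p_common: float = 0.0) -> float:
    """P(the function is lost entirely in a given crisis)."""
    return p_common + (1.0 - p_common) * p_each ** k

# Pure redundancy: three independent 10%-risk suppliers, 0.1% joint risk.
independent = p_total_failure(p_each=0.10, k=3)

# Redundancy without diversity: a 5% common shock puts a hard floor
# under the joint risk, no matter how many suppliers are added.
correlated = p_total_failure(p_each=0.10, k=3, p_common=0.05)
```

The second number barely falls as k grows. That is the quantitative version of the claim above: a weakness shared by all is not cured by duplicates of any.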
Some organisations are beginning to learn this. After the pandemic disruptions, Apple — which had been a paradigmatic example of globally optimised, geographically concentrated supply chain management — began quietly diversifying its manufacturing footprint, shifting some production to India and Vietnam. TSMC, the Taiwanese semiconductor manufacturer whose concentration of global chip production had become a subject of geopolitical anxiety, announced major new fabrication facilities in Arizona, Japan, and Germany. These are expensive decisions. They make the supply chains involved measurably less efficient.
They are also, by any honest assessment, the correct decisions.
THE MEASUREMENT PROBLEM
The deeper difficulty is that efficiency is easy to measure and resilience is not.
Return on assets, inventory turnover, operating margin, cost per unit — these metrics are precise, comparable, and legible to shareholders and boards. They improve when redundancy is removed, when slack is eliminated, when systems are tightened. They are the metrics that drove three decades of optimisation.
The cost of fragility, by contrast, is invisible until it is catastrophic. It does not appear on the balance sheet. It shows up, eventually, as a $300 million malware bill, or a winter storm that breaks a power grid, or a pandemic that exposes a supply chain as a single point of failure. By then, the executives who made the efficiency decisions have generally moved on.
This is not a new insight. The risk analyst and essayist Nassim Taleb has been making a version of this argument for twenty years, in progressively more exasperated books. The problem is not that the insight is wrong. It is that it is structurally incompatible with how most large organisations are governed — quarterly reporting cycles, efficiency metrics, shareholder primacy, executive compensation tied to short-term performance.
You cannot build a resilient organisation by accident. You cannot stumble into it by optimising correctly. Resilience requires a deliberate decision to make the organisation less efficient than it could be — and to hold that decision against the constant pressure of a system that rewards the opposite.
That is a harder thing to sell than a lean supply chain. But the alternative, as Texas discovered, is a grid that works perfectly until the moment it needs to work most.