DUSK Loop


The Problem with Current Models

Current risk management models have three major shortcomings. First, commonly used risk categories assume that probability and magnitude are always distributed normally. Second, most risk models do a tremendously poor job of accounting for unpredictable events. Third, risk management can give a false sense of security (indeed, the main benefit of actively considering risk is the mindset it confers). In 1970, the average tenure of a company listed on the S&P 500 Index was 35 years; today it is less than 20. In an increasingly complex world, leaders must embrace a new understanding of risk and knowledge. The risk management models of yesterday are leaving value on the table. No model is true, and some are dangerous. Organizations are long overdue for a model that usefully illuminates as many risks and opportunities as possible, one that suits an age of exponential technologies, super-social networks, and deep interconnectedness. 


Understanding Risk

The outcome of betting $2 on a coin toss is far less complex than that of investing in the development of a new product. The former is a simple outcome: either you win or you lose. The latter is a complex outcome, with effects that spread far beyond the initial stake. If a product launch fails, not only have you lost a fair chunk of money; your investors might lose faith, your team might lose morale, and you might lose great team members. The negative effects ripple through your company. So, risks can be simple or complex.


That covers magnitude, and many business decisions today fall in the complex category. Risks also have probability. Some are distributed normally (the dotted line in Figure 1). Some have “fat tails” (the solid line). Normally distributed risks include coin flips and product launches at large companies. Fat-tailed risks include power grid failures, natural disasters, and economic busts. To drastically oversimplify:


When a risk is fat-tailed, it means that the day-to-day probability of a high-impact (tail) event is high, relative to risks that are normally distributed.
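The difference in tail probability can be made concrete with a small simulation. This is a minimal sketch, not part of the original text: it assumes a standard normal distribution for the thin-tailed case, a Student's t distribution with 2 degrees of freedom as a stand-in for a fat-tailed one, and an arbitrary "tail event" threshold of 5 standard units.

```python
import random
import math

random.seed(42)
N = 200_000
THRESHOLD = 5.0  # illustrative cutoff for a "tail" (high-impact) event

# Thin-tailed draws: standard normal.
normal_tail = sum(1 for _ in range(N)
                  if abs(random.gauss(0, 1)) > THRESHOLD)

def student_t(df=2):
    """One draw from Student's t: normal over sqrt(chi-square / df)."""
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)

# Fat-tailed draws: Student's t with 2 degrees of freedom.
fat_tail = sum(1 for _ in range(N) if abs(student_t()) > THRESHOLD)

print(f"Normal:     {normal_tail} events beyond {THRESHOLD} sigma")
print(f"Fat-tailed: {fat_tail} events beyond {THRESHOLD} sigma")
```

Under these assumptions the normal distribution produces essentially zero 5-sigma events in 200,000 draws, while the fat-tailed distribution produces thousands, which is the point of the oversimplified claim above.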


These are the two dimensions of risk that current methods do assess, albeit wrongly. But risk is a three-dimensional beast, and its darkest, most dangerous dimension is cloaked in shadow.


Intertwined: Risk and Knowledge

Our ability to predict stems from our knowledge of the world, and that ability is limited. In other words, we can see the future, but merely a glimpse, and we can widen that view through inference and deduction. Yet we are still met with surprises. Isaac Asimov did not foresee the internet. Google’s invention was not predictable. 

Figure 1. A bell curve (dotted line) vs. a fat-tailed curve. X is magnitude, Y is probability. Notice the “tails” of the fat-tailed curve have a much higher probability.


The tragedy of September 11, 2001 was also a surprise. All three of these events issued deep ripples of change across the world. While people today might point to indicators that hinted at their occurrence, hindsight is 20/20; if people had truly seen these things coming, they would not have impacted us so deeply. This is our third dimension of risk: knowness. To once again oversimplify (for that is both the usefulness and the fault of any model), there are three levels of knowness from the perspective of an organization. First, knowledge: what the people of an organization have observed in the world. Second, the hypothetical: using what they know about the world, leaders constantly imagine possible scenarios (which have occurred only in their minds), including possible risks. Third, the unknown: simply, what you don’t know that you don’t know. But the more you know, and the more you hypothesize, the more you reduce the space of black swans that can surprise you. Think of the diagram below as a tree or a path:

Figure 2. After Hofstadter. The relationship between known possibilities and impossibilities (axioms and negative axioms), hypothetical ones (theorems), and the unknown.


The quantity and quality of your knowledge about the world determine how many possible scenarios you can foresee. People use analogies, synthesize stories, and so forth to do so. The trunk of your tree (or the path of your knowledge) is what drives the growth of its branches (or subpaths). Since we can imagine far more events than we will ever experience in our lifetimes, the space of hypotheticals is much larger than the space of knowledge. One might now wonder whether the secret to conquering the unknown is to grow one's base of knowledge to the point of imagining all possible outcomes. Intuitively, it makes some sense. Still, no matter how much one comes to know about the world, a state of omniscience, of all knowing, is impossible; quantum physics shows us as much. The dark space around the tree in Figure 2 might shrink as our tree of knowledge grows, but it will never fully disappear.


Intertwined II: Risk and Technology

There is a close relationship between knowledge and risk, and there is also a close relationship between technology and risk. When our tree grows, so too does the space around it. This is due to changes in our environment, and can be attributed mainly to one special cause: technology. 


“Prediction requires knowing about technologies that will be discovered in the future. But that very knowledge would almost automatically allow us to start developing those technologies right away. Ergo, we do not know what we will know.”

-Nassim Taleb


Technology is the lattice upon which human consciousness crawls ever upwards. Technology is doubly an exploration of the unknown and a creator of the unpredictable. We use technology to mitigate risks, and our reliance on technology can itself be a risk. Technologies might have unforeseen consequences; indeed, technologies such as the Internet, passenger planes, wind turbines, and machine learning have all had surprising impacts. The world has witnessed a technological Cambrian Explosion since the industrial age. That unparalleled exponential growth has created an immensely complex, interconnected environment. If you were an epistemological cartographer watching our knowledge map in real time, you would see the unknown growing faster than our imagination, let alone our knowledge of the world.


Discovering Risk

Risks are discovered in only two ways. First, they happen: we know (discover) something is a risk because it has happened before. For example, the inability of Texas’ power grid to handle a large increase in demand left millions without power for days; this could easily happen elsewhere. Second, they are imagined: we know (discover) something is a risk because it is physically possible. For example, your local wine cellar is struck by a pink Jaguar Land Rover, destroying a huge amount of goods. The second method of discovery is far cheaper: the crash never actually happened, but it is a plausible enough (if silly) scenario to merit a real solution. However, no one person can imagine all possible scenarios. A common fallacy is attributing an indicator to an unforeseen event, post facto. The only people who were not surprised by 9/11 were those involved; yet some of us insist that there were signs that were ignored. The fact of the matter is, sometimes we do not see things coming. Some risks can be imagined; some are “Black Swans” and will remain unforeseen.


Solving Known and Hypothesized Risks

The first major shortcoming of current risk management frameworks is that they blatantly mischaracterize risks. It is not always a matter of picking a number between 1 and 10 for both probability and impact. This is the case for only one category of risk, Normal x Simple (Quadrant 1A). 

Figure 3. A stereotypical “risk management” framework, with oversimplified categorizations.


In reality (which is what we care about here), the current, simple way of looking at risk (Figure 3) gets us into trouble. We apply it to all types of risks, giving us a false sense of confidence about the predictability of things. The proper starting point is to focus on truly important events, which are complex. Events in quadrant 1B are normally distributed, but their effects can ripple throughout an entire system. Quadrant 2A is home to events that have strong impacts on a more frequent basis, but only locally. Quadrant 2B is, as Simba’s dad would put it, “The Badlands”. We would like not to go there, but it comes to us. Events from this quadrant inevitably arise, massively affecting the entire system in which they occur. There is no managing events in this category, only reducing vulnerability. 

Figure 4. Taleb’s real-life method for categorizing risks.
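The two-by-two categorization can be sketched in a few lines of code. This is a hypothetical illustration, assuming the quadrant labels used above (1 = normal, 2 = fat-tailed; A = simple, B = complex); the `Risk` class and the example entries are scaffolding, not part of the original framework.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    fat_tailed: bool        # probability dimension: fat-tailed vs. normal
    complex_outcome: bool   # magnitude dimension: complex vs. simple

def quadrant(risk: Risk) -> str:
    """Map a risk to its quadrant label, e.g. '2B'."""
    row = "2" if risk.fat_tailed else "1"
    col = "B" if risk.complex_outcome else "A"
    return row + col

risks = [
    Risk("coin toss", fat_tailed=False, complex_outcome=False),      # 1A
    Risk("product launch", fat_tailed=False, complex_outcome=True),  # 1B
    Risk("local flood", fat_tailed=True, complex_outcome=False),     # 2A
    Risk("grid failure", fat_tailed=True, complex_outcome=True),     # 2B
]
for r in risks:
    print(f"{r.name}: quadrant {quadrant(r)}")
```

The point of the mapping is that only quadrant 1A fits the "pick a number between 1 and 10" treatment; 2B risks call for reducing vulnerability, not management.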


Humility in the Face of the Unknown

If businesses desire to be more stable, they must adopt a method that systematically reduces their vulnerability to the unknown. Since Black Swans are hidden by nature, so too are their impacts. Organizations must always keep in mind that there are risks that could swiftly cripple them should they operate on too thin a margin, with too much leverage, with no surge capacity, with no reserves, within an overly specialized supply chain, and so on. The most important way to prepare for unknown risks is to focus on reducing vulnerability to fat-tailed risks with complex outcomes; in other words, becoming more robust. We can look to our own biology for inspiration. Humans have two of many things: eyes, lungs (which together have five lobes), kidneys, brain hemispheres, ears. This built-in redundancy is essential to our resilience as a species. We also have organs that perform overlapping functions: both your eyes and your skin play a role in detecting light (in the eye, via specialized ganglion cells), which helps maintain our vital circadian rhythm. Furthermore, our bodies can withstand a large amount of variance. We can survive days without water and weeks without food (imagine if we died after four hours without eating). Simply put: you cannot manage unknown risks. Depend on preparation, not wishful thinking.


The DUSK Loop

Perhaps the greatest benefit of understanding the nature of risks is the mindset it confers. The concept is simple and should not be made needlessly complicated. However, a formal process might allow the mindset to scale more quickly across a team. More eyes = better discovery. More perspectives = better understanding. More minds = better solving. More memories = better knowing. It also offers an opportunity for several people to learn about the nature of risk experientially. The DUSK Loop has four steps:


Discover → Understand → Solve → Know

Discover

possibilities, both positive and negative. Observe the world. Talk to people. Take the time to think. Hypothesize possible risks based on what you know. 

Understand

the nature of discovered risks. Is their probability normal or fat-tailed? Is their impact simple or complex? Be humble that there will always be risks that are unknown to you.

Solve

based on the nature of your known risks. Plan accordingly. Ensure that the problem drives the solution, not the other way around. Prepare a buffer against the unknown. Protect.

Know

if your plans were effective. Was your preparation sufficient? Translate what you’ve learned into knowledge. Pass it along. Teach.


Using DUSK is like moving a warm lamp over a map drawn in invisible ink, helping you chart a better path forward. Like John Boyd’s OODA Loop (Observe, Orient, Decide, Act), DUSK is a loop: it begins with Discover, each step feeds the next, and what you come to Know feeds back into Discover. 
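One pass of the loop can be sketched as data flowing through the four steps. This is a minimal sketch only: the `RiskRecord` fields and the hardcoded example values are hypothetical scaffolding, and only the four step names come from the text.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RiskRecord:
    description: str                         # Discover: a hypothesized risk
    fat_tailed: Optional[bool] = None        # Understand: probability dimension
    complex_outcome: Optional[bool] = None   # Understand: magnitude dimension
    mitigation: Optional[str] = None         # Solve: problem-driven response
    lessons: List[str] = field(default_factory=list)  # Know: what was learned

def dusk_cycle(record: RiskRecord) -> RiskRecord:
    # Understand: characterize the nature of the discovered risk.
    record.fat_tailed = True
    record.complex_outcome = True
    # Solve: let the problem drive the solution; buffer against the unknown.
    record.mitigation = "hold reserves; add supplier redundancy"
    # Know: check whether preparation was sufficient, then pass it along.
    record.lessons.append("reserves covered the shortfall")
    return record  # lessons feed the next Discover pass, closing the loop

# Example: one pass of the loop over a single discovered risk.
result = dusk_cycle(RiskRecord("key supplier fails during peak season"))
```

In practice each step would involve people rather than hardcoded values; the sketch only shows how the output of Know becomes input to the next Discover.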


Evaluating DUSK

This framework is designed to be lightweight and therefore has next to no cost (asymmetric upside). Still, in the context of an organization it can be important to justify any activity. But it is rather difficult to assess the effectiveness of a system from within that system; in fact, it’s impossible. To determine whether the DUSK Loop is effective at improving organizational robustness, we need a standalone method of cost-benefit analysis. Determining cost is easy: one can account for it in hours spent. Determining the benefit is the hard part, because the benefit can take a long time to materialize. One way to look at it is that organizational robustness translates into survivability, a binary metric that reveals itself over a long period of time: either a company continues to operate, or it does not. Another lens is employee count. Budget deficits could be another. Generally, massive negative volatility could be considered an indicator of poor organizational robustness. On the flip side, strong, lasting rises in core metrics could be a sign of an organization’s ability to take risk effectively. I won’t pretend to have the answer, but these are useful starting points.


Personal Use

It merits reiterating that the main benefit of understanding the nature of risk and knowledge is the mindset it confers. An individual can, with a little effort, improve how she makes big decisions by using the DUSK Loop. It starts with the individual, as a framework for evaluating the fragility of her organization. It’s designed to scale, to benefit from multiple perspectives. So, as the catalyst, you should favor decisions that improve robustness and keep an eye out for opportunities with asymmetric upside. Transitioning from risk management to risk taking might seem daunting, but it’s the way. Adopt a sense of humility about how truly complex our world is, and your organization will benefit. When you make decisions, be sure that you’re held accountable. Hold others to a similar standard and yourself to a higher one. Government executives, especially, should take note of the outdatedness of their current risk management models, which have proliferated across the sector. The longer organizations rely on overly simplistic models, the more fragile the critical infrastructure they develop will become. 



References

Boyd, John (1976). Destruction and Creation.

Gödel, Kurt (1962). On Formally Undecidable Propositions of Principia Mathematica and Related Systems.

Hofstadter, Douglas (1979). Gödel, Escher, Bach: An Eternal Golden Braid.

Mandelbrot, Benoit (1982). The Fractal Geometry of Nature.

Mandelbrot, Benoit (2004). The (Mis)behavior of Markets.

Taleb, Nassim (2007). The Black Swan.