Antifragile
Nassim Nicholas Taleb
Taleb is a very opinionated author on risk assessment and decision making. One of his main contentions is that people understand risk very poorly and do far too little to protect themselves from bad “luck” or expose themselves to good “luck”. In particular, he says that it is neither possible nor practical to predict unlikely events, so what you must do is prepare yourself to take advantage of unlikely good outcomes and limit your exposure to unlikely bad ones. Of the two, limiting your downside exposure is much more important. While this might seem to be a discussion about financial risk, Taleb argues that this approach applies to many of the decisions we must make in life and society.
This book takes the argument one step further by describing outcomes in terms of fragility and its opposite, anti-fragility. Fragile has its usual meaning: things which break easily are fragile. Most people think of robust or resilient as the opposite of fragile, but Taleb observes that robust things simply resist breakage; robustness is indifferent to shock. The true opposite of fragile is anti-fragile. Anti-fragile things get stronger as a consequence of shock or stress. At first this might seem odd, but it is exactly what happens when you temper steel (a 700–900°F shock), have a heavy workout (tearing muscle micro-fibers), or break a bone. These systems respond by getting stronger and more resistant to further shock. This would be an interesting but unimportant distinction, except that Taleb observes that many things in the world are becoming more fragile without people recognizing the fragility, or even worse, while people incorrectly believe that their interventions are decreasing fragility. If one accepts Taleb’s framework, then the goal is to eliminate exposure to circumstances where fragility can hurt you and increase exposure to circumstances where shocks can work in your favor. For the more daring, you might bet that fragile things will break, taking advantage of the majority’s ignorance of their fragility. It is a waste of energy to try to predict when these events will occur, or to believe that unlikely events do not matter. Perhaps the most important events in life are “unlikely”.
Taleb has a unique style of writing that makes writing this summary challenging. With the main themes laid out above, the following observations struck me as the most interesting explanations and applications of the concept of fragility and anti-fragility.
Evidence and prediction
Society has become more (apparently) numeric, and people devote more energy to using these numbers to make predictions. For some kinds of events, this is appropriate and practical. But when these methods are applied to uncertain circumstances, people mislead themselves. The effect is greatest when the predictions apply to unusual events or to events that are part of complex systems. For example, small earthquakes are very common and practical predictions about the “next” one can be made, but such small earthquakes really do not matter; there is no consequence for a bad prediction. Large earthquakes are not common, and it is difficult to predict when the next one might occur in any practical way – and this inability matters a lot. It seems certain that the New Madrid fault will slip in the future and damage cities located in the region, but it is not possible to practically predict when this might occur. When do we tell the residents of St. Louis to evacuate the area and the authorities to shut off natural gas and electricity? Making an actionable prediction on timing is not practical, but it is possible to act to decrease the damage from the inevitable. Because increasing the robustness of a city to an earthquake can be expensive and the return on the expense seems tiny, people often use a probability argument against taking action. This can be summarized as “there is no evidence of a looming disaster, so there must not be a disaster looming”. In other words, people interpret the absence of evidence as evidence of absence. This is a significant logical fallacy. In recent years, real estate professionals reported that since house prices had been increasing for close to 60 years, there really was no possibility of future prices being lower. The absence of widespread price decreases was taken as evidence that widespread price decreases were impossible. Obviously, widespread price decreases were possible and had a high impact. Taleb popularized the phrase “black swan” to describe exactly this sort of outcome. This is not a problem that can be solved with more data. In business and economic decision making, reliance on data causes severe side effects – data is now plentiful thanks to connectivity, and the proportion of spuriousness in the data increases as one gets more immersed in it. A very rarely discussed property of data: it is toxic in large quantities – even in moderate quantities.*
A second way that prediction gets undermined is through complex interactions. Consider the value of a house. It depends on the house itself, the neighborhood and city it is part of, interest rates, the local economy, and more. These relationships are not simple, and a house price may be insensitive to one or more of these over some range. But conditions may arise where prices become very sensitive, and in recent years that sensitizer (fragilizer) was debt. When people bought more house than they could afford, assuming that demand would continue to increase prices, they became fragile to falling prices (which increased their loan-to-value ratio) and to rising interest rates. When their loan suddenly exceeded the value of their home and they had to refinance at higher interest rates, what had seemed a reasonable investment became a personal disaster. And because it happened simultaneously to millions of people, the market was flooded with houses, which drove down prices even further, even for people who were not over-exposed. These non-linear effects make it difficult to uncover the real relationships between causes and effects, and to make useful predictions about their likelihood. Being easy to analyze and explain after the fact is not the same as being predictable beforehand.
A third problem relates to agency. The people in the best position to understand and educate about these risks often have an interest in the risk being ignored. It was not in the interest of real estate agents to say that houses were over-priced or represented risky investments. It was not in the interest of banks to deny marginal borrowers loans. It was not in the interest of rating agencies to downgrade the debt of their customers. When you combine the difficulty of predicting rare events and the complexity of the relationships causing an event with the difficulty of getting unbiased information, prediction begins to look like a sucker’s game. And this is one of the main contentions of this book – prediction is for suckers. Conversely, when you know that suckers are making investments, that may be just the evidence you need to bet against them.
Hormesis and Iatrogenics
A complex system, contrary to what people believe, does not require complicated systems and regulations and intricate policies. The simpler, the better. Complications lead to multiplicative chains of unanticipated effects. Because of opacity, an intervention leads to unforeseen consequences, followed by apologies about the “unforeseen” aspects of the consequences, then to another intervention to correct the secondary effects…
Humans (mostly) like to avoid risky experiences. Few people will deliberately take poison or jump off of bridges without a really good reason. They correctly fear that these experiences could kill them. Humans also like to intervene. They may intervene to “improve” something, for example by innovation. They may intervene to correct something, for example by regulation. Some of these interventions are successful in that they achieve the desired effects and have no (negative) side effects. Hormesis and iatrogenics represent two phenomena that suggest why you may want to accept more risk and avoid more intervention.
Hormesis is the response of a system to an “insult”. Taking a small amount of poison is an insult to the body, but the body develops a bit of resistance to the poison. Up to a point, hormesis leads to tolerance of that poison and thus more robustness in its presence (hormesis thus leads to some anti-fragility). One of the consequences of eating lots of vegetables is the hormetic development of resistance to many vegetable toxins. Heavy, but not light, exercise is hormetic. Being insulted frequently helps develop a thick skin, and losing small amounts of money on investments helps develop risk tolerance. In recent decades, there has been increasing effort to eliminate risk from our lives. Hand sanitizers decrease our exposure to germs and decrease our immune system’s preparedness. Air conditioning decreases our ability to tolerate heat. Home and yard appliances replace our muscles with machines and decrease our ability to control our bodies under load. Hormesis decreases our fragility. Another way of thinking about hormesis is as the consequence of random experience. When we find ourselves in unexpected circumstances, we have a chance to experience the unexpected and prosper from the experience. We may learn something new, gain an inspiration, or learn how to tolerate something unpleasant. We may also have a wonderfully enjoyable time. Seeking out random experiences ultimately decreases fragility.
Iatrogenics means “caused by the healer” and is a fancy way of saying side effects. Interventions are intended to have a particular effect, but many interventions have additional effects. These effects can be predicted (decreasing interest rates eventually increases inflation) or unpredicted (Vioxx users suffering heart attacks). Side effects are so common that we often forget about them. Many kinds of surgery are safe, if you do not consider the frequency of death due to anesthesia, the risk of subsequent infection, and the long-term development of antibiotic-resistant microbes. This is Taleb’s real point about iatrogenics: people often seek small, certain benefits at the risk of unlikely but very significant damage. The failure to account for these high-impact, low-probability events when making predictions increases fragility. It is the asymmetry between small certain benefits and large uncertain costs that signals fragility.
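That asymmetry can be made concrete with a little arithmetic. The following is a minimal sketch with invented numbers (they are not from the book) showing how a frequent small benefit can be swamped in expectation by a rare large harm:

```python
# A minimal sketch (invented numbers, not from the book) of the asymmetry
# Taleb describes: a small, near-certain benefit paired with a rare, severe
# harm can still be a net-negative proposition.

def expected_value(p_benefit, benefit, p_harm, harm):
    """Expected payoff of an intervention with one benefit and one side effect."""
    return p_benefit * benefit + p_harm * harm

# 99% chance of a small gain (+1), 1% chance of a large loss (-200).
ev = expected_value(p_benefit=0.99, benefit=1.0, p_harm=0.01, harm=-200.0)
print(f"Expected value per intervention: {ev:+.2f}")  # -1.01: a net harm
```

The individual experiences the +1 almost every time, which is why the intervention feels safe; the fragility only shows up when the rare term is included.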
Taleb observes that few fields produce valid theories – those are mostly in the hard sciences. In most social sciences, theories are really more like popular opinions, the product not of experimental proof but of compelling narratives. He goes on to point out that for most of human history, the explanations for how things worked were narrative. Thus the Earth was at the center of the universe, the elements were fire, air, water, and earth, and astrology linked the fate of individuals to the universe. All of this made sense to people, but today we live with different narratives. Narratives work best when they leave out the details and complications that would spoil them. Uncritical acceptance of these narratives supports naïve intervention.
“In theory there is no difference between theory and practice; in practice there is.”
Yogi Berra
A good narrative can provoke people into intervening in society or an organization without considering all of the consequences. The prevalence of iatrogenics suggests that most interventions will fail. Interestingly, most genetic mutations are deleterious, most product launches fail, most change programs in organizations fail, most projects are late and over budget, and most social programs do not achieve their goals – all of which suggests a shortage of understanding during the design of these interventions. Taleb does not suggest that people need more information or planning before initiating these efforts. Actually, he suggests just the opposite: making predictions about these interventions may be futile because they are dominated by unknown factors.
Chance and selection
There are systems that depend on low-probability events for their structure and success. Biological evolution and venture capital are two examples. In biological evolution, a species can be stable for a very long period. Essentially, during this period the environment is stable, so random variations provide animals no advantage and tend to be lost. But when the environment changes, animals carrying certain changes will suddenly find that they are advantaged and will reproduce better than animals without those changes. This method does not act through survival of individuals as much as through survival of genes into successive generations. Taleb makes the point that sometimes hormesis acts at a species or organizational level. It is not so much ‘what doesn’t kill me makes me stronger’ as it is ‘what doesn’t kill me, but kills everyone else, makes the species stronger’. He describes restaurants as an example of an anti-fragile social evolutionary selection system. In most cities restaurants are a thriving industry, but many individual restaurants go out of business. As restaurant owners look at others’ failures, they make changes to improve their own survival. The changing tastes of people keep constant selection pressure on the industry. Under these conditions, restaurants (as a class) are always optimizing their fitness by keeping useful changes and closing when they fail. HOWEVER, if something were to artificially keep failing restaurants from closing, they would soon lose their focus on their patrons’ needs (fitness) and would become collectively more fragile. Patrons would discover that a bad restaurant is worse than no restaurant and eat at home. The industry would die because people would come to believe that all restaurants were bad. Taleb asserts that a system’s anti-fragility is often the consequence of individuals’ fragility. The human species may be stronger because “weak” individuals die without reproducing as much as “strong” ones. Companies may become stronger because they can replace people with “better” people. Weak companies are replaced by better companies, and the economy becomes less fragile.
Venture capital (VC) is another domain where selection pressure is high. Most start-ups fail, but successful VCs do not appear to spend as much effort picking winners and losers as they spend getting start-ups into the market so that customers can pick the winners. Taleb asserts that VCs don’t pay as much attention to business plans (a comment on this is below) as they do to people. VCs understand that the plan is unlikely to work as described, but good people will be able to adapt to reality, and that is the best hope for success. Even with good people, most start-ups fail, and those involved learn from the experience and try again (hormesis in action).
Consequences and barbell investment
Taleb proposes an approach to “investing” based on the fragility framework he lays out. Essentially, he suggests a portfolio with two types of investment. The largest part of the portfolio is invested in the safest possible way (cash, US Treasuries, etc.) and a much smaller part is invested in a set of very high-payback, low-probability situations (e.g., shorting Fannie Mae in 2006). He uses the image of a barbell: heavily weighted at both ends, with essentially nothing in the middle. In this spirit, Taleb would refuse to invest in situations with medium risk and reward. He would only take risks with known loss potential; he might lose the entire investment, but no more. As is common with such strategies, most investments lose their value, but a few more than pay for the losses. Central to this approach is that Taleb looks for fragile circumstances to bet against, but makes no predictions about how or when the fragility will be exposed.
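A toy simulation can show the shape of this payoff. The sketch below is my own illustration, not Taleb’s numbers: the allocation, rates, and odds are invented, and it is not investment advice. The point is only that the downside is capped at the speculative sleeve while the upside is open-ended.

```python
# Hedged sketch of the barbell described above: ~90% in near-riskless assets,
# ~10% spread across long-shot bets. All parameters are invented for
# illustration; they are not Taleb's figures and not investment advice.
import random

def barbell_year(capital, safe_frac=0.9, n_bets=10,
                 safe_rate=0.02, p_win=0.05, win_multiple=30.0):
    """One-year outcome: the safe sleeve earns a small rate; each speculative
    bet either loses its stake entirely or returns a large multiple."""
    safe = capital * safe_frac * (1 + safe_rate)
    stake = capital * (1 - safe_frac) / n_bets
    risky = sum(stake * win_multiple if random.random() < p_win else 0.0
                for _ in range(n_bets))
    return safe + risky

random.seed(1)
outcomes = [barbell_year(100.0) for _ in range(10_000)]
# Worst case: every bet misses and only the safe sleeve remains (~91.8).
# Best case: several long shots hit and the portfolio multiplies.
print(f"worst {min(outcomes):.1f}, best {max(outcomes):.1f}, "
      f"mean {sum(outcomes) / len(outcomes):.1f}")
```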
Fragilization
Taleb has a lot to say about the transfer of fragility. In many cases, he thinks that this transfer is either unethical or immoral. Banks that socialize their financial risk but privatize their rewards come in for specific comment. But the phenomenon is more pervasive than this simple example. For example, government debt can be a source of fragility that transfers risk from one generation to another, or from one group (government clients) to another (taxpayers). Many of the debates about regulation have overtones of fragility transfer. For example, investing less in earthquake-resistant buildings today transfers the benefits of low cost to current builders and the risk of damage to future tenants. To prevent fragilization from this sort of transfer, Taleb thinks it is vital to require the people who take these risks to have “skin in the game”. He takes his argument one step further: people who advise others on actions (like public policies) should have their own skin in the game. They should be forced to invest their own money and reputation in the advice they provide. They will benefit from good advice and suffer from bad advice. In Taleb’s view, this would weed out many well-intentioned but ill-informed providers of advice, and society would benefit from fewer unanticipated negative side effects and the disturbances they create. Taleb (ironically, since he is advising here) suggests the following: Never ask anyone for their opinion, forecast, or recommendations. Just ask them what they have – or don’t have – in their portfolio.
Redundancy and anti-fragility
Many natural things are robust or anti-fragile. Many human systems have become robust and some anti-fragile. Common features of these systems are redundancy and optionality. For example, experience has taught that airplanes are safer when they have multiple redundant control systems. When one system fails, another can substitute. Planes have pilots and co-pilots for the same reason. The redundancy is inefficient, but the consequences of failure are catastrophic. In the drive for greater efficiency (and presumably lower cost), much redundancy has been removed from systems. In some cases, the consequences of decreased redundancy are trivial and the gains exceed the costs across time. But in some systems, the consequences of a redundancy decrease are unknowable because the systems are too complex.
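The arithmetic behind redundancy is worth making explicit. If each backup fails independently (a strong assumption, since a common shock such as a shared power supply can break it), the chance that every copy fails at once shrinks geometrically with the number of copies. A quick sketch with an invented failure rate:

```python
# Back-of-the-envelope sketch of why redundancy buys robustness: if each
# system fails independently with probability p, all n redundant copies fail
# together with probability p**n. Independence is an assumption; a common
# shock (shared power, shared software bug) can defeat it.
p = 0.01  # illustrative per-system failure probability
for n in (1, 2, 3):
    print(f"{n} system(s): probability of total failure = {p**n:.0e}")
```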
An important aspect of redundancy is optionality. Redundant systems are rarely exact copies of each other, so they offer slightly different approaches to getting something done. Greater dispersion in the options leads to a broader range of possible outcomes, so the choices a person makes can result in quite different outcomes. To take a trivial example, it might be efficient to have a single type of shirt, say a white t-shirt. This is a great choice for warm, sunny summer days when you do not plan to change the oil in your car. But this shirt might be less than optimal in December, or when you do wish to change the oil. Having the option of various types of light and heavy, short- and long-sleeved, light and dark shirts provides a dispersion of outcomes (warm and cold, easy or hard to clean) and provides choice to the wearer. As the environment of a system shows a greater range of conditions, redundancy and diversity become more important. Having the wrong shirt might be a matter of comfort in mild temperatures, but in extreme cold or heat it could be dangerous.
The important part of any system or population may not be the center, but the edge. It is the extremes of temperature that create risk – not the center. A lot of attention is paid to averages, but much less to the tails of the distribution. The whole book is really a discussion of extreme events and their consequences, with the important point that these extreme events are not just the sources of the worst adverse outcomes, but also of the most desirable ones. No one at present dares to state the obvious: growth in society may not come from raising the average the Asian way, but from increasing the number of people in the “tails,” that small, very small number of risk takers crazy enough to have ideas of their own…
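How much the tails can dominate is easy to demonstrate. The sketch below draws from a heavy-tailed (Pareto) distribution whose shape parameter is an arbitrary choice of mine; the specific output will vary, but the pattern holds: a tiny fraction of observations carries a large share of the total.

```python
# Illustration of tails versus averages: in a heavy-tailed sample, a handful
# of extreme observations accounts for much of the total. The Pareto shape
# parameter here is an arbitrary assumption for illustration.
import random

random.seed(0)
sample = [random.paretovariate(1.2) for _ in range(10_000)]  # heavy tail
sample.sort(reverse=True)
top_share = sum(sample[:100]) / sum(sample)  # top 1% of observations
print(f"share of the total carried by the top 1%: {top_share:.0%}")
```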
People desiring anti-fragility in a system should consider adding some redundancy and variety. The additions decrease efficiency, but increase the system’s ability to cope with stress and shock.
This summary does not do justice to the experience of reading this book. It is a wide-ranging, almost stream-of-consciousness monologue about variation, behavior, uncertainty, and society. The book reads like a conversation with the author might sound; it is not always an organized narrative proceeding from point to point. The author complains about public figures’ behavior and is not shy about naming and shaming. He castigates whole groups of people that he thinks have harmed society. But when it is all boiled down, there is this one point: be prepared for the best and worst outcomes – don’t bother trying to predict their arrival.
My comments and interpretation:
- One of the implied rebukes in this book relates to inappropriate applications of statistical modeling methods to circumstances where they do not fit. Statistics, as commonly taught, applies to a number of important circumstances in which certain conditions hold. Within those constraints, the methods are valid and useful. However, a person can apply the methods to any set of numbers, even when the conditions limiting their valid use do not apply. For example, using statistical methods to make projections of future cash flow from an innovative product is not a valid use of statistical methodology. I wonder if people even hope that applying a statistical modeling technique will overcome the fact that the model is driven by a set of assumptions about the future rather than a set of measurements of the present.
- The book makes a number of statements about the merits of betting on “facts” and “people”. In most areas of society, it is much better to bet on (or against) people. In choosing which projects/business plans to fund, bet on the people. Apparently, this is what venture capitalists do. Taleb’s special interest is sudden changes in fortune, whether crashes or windfalls. Many windfalls, like those of John Paulson or George Soros, came from betting against the crowd. When they saw people crowding into sub-prime mortgage bonds, they recognized foolish behavior and bet against the crowd. In many situations, it is hard to get good facts. Many efforts stall because good facts will never be available until somebody “creates” the facts. In an uncertain domain, maybe decision makers really need to focus on people.
- Taleb is adamant that many of the problems we face from iatrogenics and black swans are due to agency. People with a vested interest can’t provide impartial information. He is fairly harsh about this in many cases, describing such people as amoral legalists, and there are circumstances that support that perspective. I wonder how many people are blind to the impact of their own interests. It is possible that the mental models of entire professions make professionals blind to the fragility of their situation and the shakiness of their advice. For example, doctors are trained to treat people, and people who pay a doctor expect treatment. Under this circumstance, it could be very difficult to evaluate a person and then do nothing. Many professions are essentially agencies; their entire purpose is to advocate for their employer. It reminds me that one of the classic cautions in considering information is “know your source”: what is their interest and how is it expressed? Can their bias be tested?
- In my last academic job, I was surprised to learn that my mentor never used statistics. He once said something to the effect that ‘if it wasn’t obviously the case, then it wasn’t the case’ and he described some rules-of-thumb from the leading edge of protein chemistry. Much of my previous work required statistics because the systems were complex and any hint of what was going on was valuable. But his real point was – design an experiment that would permit an unambiguous answer. Perhaps we need to spend more time thinking about what experiments to do and how to do them in order to get answers that don’t require incredibly complicated analysis to explain.
- There is an interesting juxtaposition between hormesis and evolutionary selection. Consider two stereotypes about the US and Europe. The US is entrepreneurial while Europe is not. People in the US have much lower job stability and a weaker safety net than Europeans. When recessions begin, some Americans lose their jobs. Companies rapidly shed people and go through a selection cycle. Some companies die and some do not. The individuals let go suffer a shock, but most are re-employed by new companies and there is a remixing of skills and ideas. Individually, they may have learned new skills and gotten tougher. The most recent American recovery might not be strong, but the economy is recovering. In Europe, by contrast, companies are slow to respond to economic downturns and are constrained in releasing people. The strong safety net does not draw people back into the labor market as intensively. The difficulty of laying people off retards hiring, and thus companies do not get quite the same infusion of new skills and ideas. European workers don’t build their individual skills as dynamically. Most European countries are struggling to end the most recent recession, much less recover. If you accept the stereotype and this analysis, then you might expect Americans to experience more hormesis than Europeans and, over time, become more capable and dynamic.
- Taleb asserts that start-up business plans are not that important to VCs, but I wonder if a good business plan is used as a screening tool. A non-specific plan indicates a lack of focus or concentration. Excess optimism reveals a lack of practicality. One can imagine a number of ways that a plan could be used as a proxy for screening the people. I suspect that business plans are quite important, but not for their specific content.
- The book discusses the method of investing in large-upside situations and provides a number of examples. It would be easy to deduce that the investor is figuring out which opportunity is the right one to bet on. Though much less emphasized, it is clear that the concept is to make many small bets rather than one big one (see the sketch after these notes). This is closer to the concept of “little bets” or “probes of the future” than to a “bet the company” innovation. Because the core concept is that the future is unpredictable, predicting which bet will pay off is foolish. To receive a meaningful benefit, you must have many options.
- The earthquake in northern Japan revealed an example of a system with low redundancy and optionality. Toyota’s tightly linked supply chain depended in some cases on a single parts supplier. While one supplier’s factory was not damaged, the loss of electricity disabled its production and stopped Toyota’s production. A work-around was developed in a relatively short time, but the exceptional efficiency of the system was shown to have fragility as a side effect.
- Iatrogenics is not limited to medicine. If social support induces people to leave the work force, this is iatrogenics. If tax breaks induce companies to invest in areas with low-productivity work forces, this is iatrogenics too. Because people dislike complex theories, there is a tendency to simplify them. For the people developing the theories, it can be difficult to get enough good data, so they simplify too. All this simplification leads to theories and models that have discarded all the reality that might reveal the side effects. Assumptions replace facts, and intervention leads to “unexpected” problems. Taleb calls this “naïve interventionism” and thinks it is a major contributor to fragility. Each intervention, with its unforeseeable consequences, makes society more complex, more interdependent (less independent), and more vulnerable to shocks elsewhere. This is not an argument against all intervention, but a suggestion that the full complexity of the world be acknowledged and included in design. It also suggests that when the benefits of an intervention are small, the best option may be not to intervene at all.
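To make the “many small bets” point from the investing note above concrete, here is a minimal sketch. It assumes, purely for illustration, that each opportunity has the same long-shot chance of paying off and that the bets are independent; under those assumptions, splitting capital across many options makes hitting at least one winner far more likely than concentrating on a single bet.

```python
# Sketch of "many small bets" versus one big bet on an unpredictable,
# long-shot payoff. The win probability and bet count are illustrative
# assumptions, not figures from the book.
p_win = 0.05  # assumed chance that any single bet pays off

one_big_bet = p_win                  # all capital on one option
many_small = 1 - (1 - p_win) ** 20   # 20 independent options

print(f"chance of at least one winner, one big bet:   {one_big_bet:.1%}")
print(f"chance of at least one winner, 20 small bets: {many_small:.1%}")
```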
*text in italics is taken directly from the book