Misbehaving
Richard Thaler
This book is primarily a personal account of the creation of the field of behavioral economics. Thaler was an economics graduate student when he noticed a number of regular human behaviors that were contrary to the behavior economics assumed of people. Conventional economics was based on the idea that ordinary people make decisions to optimize their own interests (acquire the best value for the available money). They did this within a system that achieves equilibrium (in other words, supply equals demand, with price moving to bring supply and demand together). As such, economics is the most rigorous of the social sciences, with a well-developed mathematics.
During this period of graduate study, he had occasion to meet Daniel Kahneman and Amos Tversky, who had just published a paper on loss aversion. Loss aversion is exactly the kind of decision making that economic theory says should not exist. This book tells the story of the struggle between the defenders of theory and their behavioral challengers.
The book's value-function chart is key to understanding this revolution in Thaler's thinking. What it shows is that we feel much less benefit from gains than we feel pain from losses. If we gain 1 unit of value, we may feel 2.5 units of utility; if we lose one of those same units of value, we may feel 9 units of lost utility. Utility is a broad term that includes all sorts of things: money, happiness, ease, comfort, and so forth. What is more, the response of utility to gain or loss is not linear; we feel much more utility from the first gain of 0.1 than from the last gain of 0.1. Economic theory says that both should produce the same utility.
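These two features can be sketched with a textbook prospect-theory value function. The functional form and parameter estimates below come from Kahneman and Tversky's published work, not from this summary, so treat the specific numbers as illustrative:

```python
# A textbook prospect-theory value function. The parameter values are
# Kahneman and Tversky's commonly cited estimates, not figures taken
# from "Misbehaving" itself.

ALPHA = 0.88   # diminishing sensitivity for gains
BETA = 0.88    # diminishing sensitivity for losses
LAMBDA = 2.25  # loss aversion: losses loom ~2.25x larger than gains

def value(x):
    """Subjective value (utility) of a gain (x > 0) or loss (x < 0)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (-x) ** BETA

# A loss hurts more than an equal gain feels good:
print(value(1), value(-1))   # 1.0 vs -2.25

# And the first unit gained adds more utility than the tenth:
print(value(1) - value(0), value(10) - value(9))
```

The two `print` lines correspond to the two observations in the paragraph above: the asymmetry between gains and losses, and the non-linear (diminishing) response to each additional unit.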
This is easily observed in a variety of ways. Consider discounts on two items. Two stores are each offering $100 off an item. Assuming that you are considering both purchases, you will be much more influenced by $100 off a $500 stove than by $100 off a $30,000 car. Though the first is clearly a much bigger discount in relative terms, economic theory holds that they are identical benefits. The purchase price should be irrelevant; the discounts are the same.
To facilitate discussion, Thaler and his collaborators created the idea of a world populated by "Econs" and "Humans". Econs act according to economic theory; Humans act like humans. In reality, people display both Econ and Human behavior (according to the behavioral economists). The real problem is that almost all government policies from about 1950 until 2000 were based on the idea that all citizens are Econs. If people do not behave like Econs, many government policies could fail to achieve their goals and might even drive the opposite outcomes.
This summary will not cover the whole realm of behavioral economics or all of Thaler's discussion. The book is very well written and an interesting recounting of how paradigms change over time. You should read the book. Instead, this summary will focus on bits of information that caught my attention, both minor and major ideas. In contrast to most summaries that I write, this one will have some commentary and interpretation written right into the main text (though I will have a final section of additional commentary).
As mentioned earlier, economic theory is a logically rigorous approach to how we make decisions about value. When applied to real systems, the situation is never as simple as the theory imagines. In the physical sciences, experiments are run by attempting to hold conditions constant, by randomizing conditions to spread the noise evenly, or by ignoring the rest of the system. Economic theory chose the last option by asserting that an Econ would ignore many factors as irrelevant. Thaler calls these factors "Supposedly Irrelevant Factors", or SIFs. This is a fine put-down of troublesome realities that disrupt the application of the theory. Theory says that we should ignore certain factors, and if we do not, then economists will teach us to ignore them.
Yet psychology recognized the proportional-discount phenomenon described above as the Weber-Fechner Law: the perceived magnitude of a difference is proportional to the size of the base; in other words, we think proportionally. We would drive 20 miles to save 20% ($100 off $500), but we would not drive far to save 1.6% ($500 off $30,000), even though the latter is a greater savings. This psychology could make better predictions of human decision making than economics. The Weber-Fechner law also underlies the just noticeable difference. The book offers this example from the show Car Talk. A woman called in to say that her two front headlights had failed at the same time; what was going on? The explanation was that her lights had not failed at the same time. She had probably been driving around with one light for a while, and losing one light was not enough to notice. Losing the second light made the absence noticeable. In this case, a 50% change was not noticeable.
Opportunity cost
Suppose you have a ticket to some event that could be sold for $1000. It does not matter what was paid for the ticket; what matters is what you could do with it. If attending is worth at least $1000 to you, that is what you should do. But if you had the $1000 instead of the ticket, would you buy the ticket or buy something else (new clothes, some furniture, repaying some debt)? Selling the ticket gives you the opportunity to have new clothes or furniture. Keeping the ticket gives you the opportunity to see a game or a performer. You must choose one over the other. In this sense, the only purpose of money is the opportunity that it affords you.
Endowment effect
We tend to value the things we have more than things we don't have. So if I have a mug worth $5 and somebody offers me $5 for it, I will probably decline the offer and keep the mug. The same is true if I have the $5 instead: I keep the money. This same effect will prevent me from selling a stock that has made a profit. It will make me hold onto things in my garage that I will probably never use, when I could have a garage sale and receive some money that I could use for something else.
What we get when we buy stuff
When we go to buy stuff, we should only buy things where the utility of the thing is greater than the money we pay. When it is not, we either don't buy (the Econ choice) or buy and feel cheated (the Human experience). People have some interesting thoughts and behaviors in this regard. Suppose you and your friends are on vacation at the beach. It is sunny and warm, which makes you thirsty. You'd like a beer. Nearby is a shack selling beer for $5 per bottle. You know that back in town the same beer is $2 per bottle. Presented with this scenario, people were asked if they would buy the beer. Most say no. When the same basic scenario is presented to another group of people, except the shack is replaced with a luxury resort, the majority will buy the beer. This makes no sense: the beer is the same in both cases and the price (and the overcharge) is the same, so the enjoyment of drinking the beer ought to be the same. For an Econ, the decision would be to reject the overpriced beer in both cases, because the Econ only cares about the acquisition utility of the beer. In other words, the value is in drinking the beer. A Human gets that value but also experiences a transaction utility, the value from the transaction itself. So these Humans got $3 of value from buying from the resort but not from the shack. Humans will pay for the experience of spending money, while Econs won't. To a great extent, this is part of why some retail businesses always have sales or coupons. Think of Kohl's, where many items are always 20 or 30% off. By anchoring the price at some reference and then offering a discount, they create potential transaction utility for the buyer. Coupons and rebates do something similar when people choose to go to the store to use the coupon.
Buckets of money
Buckets sounds neater than budgets, but it is the same concept. People partition their money into different buckets and then impose discipline around what that money is spent on. Sometimes this is explicit (for example, some households literally use envelopes to keep the different budgets separate) and sometimes it is purely mental. A group of students were asked if they would like to buy a $50 ticket to a play the following weekend. Some of the students were told that they had attended a game earlier in the week (ticket = $50); others were told they had received a parking ticket (fine = $50). Significantly more of the students who had received the parking ticket elected to buy the theater tickets, presumably because parking fines and theater tickets are in different mental buckets.
These mental buckets can be very useful and powerful tools for protecting resources like retirement money or mortgage payments. They can also lead to odd behavior. In the US, most gas stations offer regular and premium gas, with a higher price for premium. There is almost no practical reason to buy premium gas. During 2008, the price of gas in the US dropped nearly 50%. Two economists were able to get data from a club-store chain where purchases were tracked by individual. You can imagine that a household spending $200 per month on gasoline at the beginning of the year suddenly had an extra $100 per month late in the year. What did they do with the money? Some people responded to low prices by driving more, spending an extra $5-10 per month on regular gas. The shoppers could spend their money on anything, but only three things seemed to show an increase in spending: milk, orange juice, and premium gasoline. After the price fall, premium gas sales increased far beyond what would have been predicted. Apparently, people had allocated $200 to the gas bucket and, noticing an excess in the bucket, bought some premium gas "as a treat". This was transactional utility, because there was no extra functional utility.
Mental buckets can also lead to serious harm when important buckets get reframed. In the late 1990s, some companies began to persuade people that the accumulating equity in their homes was a potential source of cash. Since house prices "only went up", people should borrow against the equity for other spending. In other words, these companies created a new bucket of value and asserted that spending money from that bucket was safe. When house prices fell, many people had their equity cushion erased and could not repay the equity loans. They lost their houses, which stood as collateral. The mental bucket "house", which people could not take money out of, had been a semi-safe store of value. Making a new bucket called "equity" created a real risk.
How do you control yourself?
Mental buckets can serve a planning function, but they also serve a self-control function. By putting money into a specific bucket, you can protect it from foolish spending (unless it is the foolish-spending bucket). Self-control was one of the anomalies that first got Thaler interested in behavior. He noticed that if a bowl of nuts was set out as an appetizer, people would eat some and then put the bowl away before dinner arrived. They did not want to eat too much and spoil their appetite for dinner, but they did not trust themselves to stop eating if the nuts were in sight.
Self-control in this context relates to the value that you assign to present utility versus future utility. Is the pleasure of eating the nuts right now greater than the pleasure I will have from my dinner in 15 minutes? The comparison can encompass many possible pairs: is the enjoyment of eating the cake right now greater than the enjoyment I will have fitting into my pants next month?
This creates the notion that we have a current mind and a future mind; we are trying to decide what decision our future mind would prefer our current mind to make. Our minds must resolve this conflict. But even this is insufficient, because we have a current planning mind that is focused on the future mind's perspective and a current just-do-it mind. In economics, the future value of something is usually discounted in order to allow a comparison, where the discount reflects the "cost" of waiting. It seems that people use some kind of hyperbolic discounting of the future. The book's example (metaphor) is a guy camping alone whose food is eaten by a bear. He is left with 10 nutrition bars, which is coincidentally the number of days before a plane is supposed to pick him up. Assuming that the guy can't effectively forage, how should he allocate the 10 bars? The planner (Econ) will assert that each day belongs to a future person, that each will value a bar equally (assuming hunger is constant), and will place each bar in a mental bucket. The planner would be even happier if there were 10 time-locked safes, with a bar allocated to each safe to prevent cheating. The current/Human mind would wonder how many bars it takes to satisfy its hunger and eat three bars today. A couple of days later, there is one bar left to last the remaining days. Because most camping trips don't involve time-locked safes, the only tool available to the planner mind is guilt. Guilt decreases pleasure, so we avoid it. One of the common tools of economists is "as if": you probably do not have two or more actual minds, but we can think of this "as if" it were true.
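The difference between the Econ's constant-rate discounting and the Human's hyperbolic discounting can be sketched in a few lines. This is my own illustration, not the book's; the parameter values are arbitrary assumptions chosen to make the contrast visible:

```python
# Illustrative sketch: exponential vs hyperbolic discounting of a
# future reward. Parameter values are arbitrary assumptions.

def exponential_discount(value, days, daily_factor=0.99):
    """Econ-style: every day of delay scales value by the same factor,
    so 'today vs tomorrow' looks just like 'day 10 vs day 11'."""
    return value * daily_factor ** days

def hyperbolic_discount(value, days, k=0.25):
    """Human-style: value falls steeply for short delays, then
    flattens, so the near future is discounted disproportionately."""
    return value / (1 + k * days)

# One day of delay costs little under exponential discounting but a
# lot under hyperbolic discounting -- the camper eats the bars today.
for days in (0, 1, 10, 11):
    print(days,
          round(exponential_discount(100, days), 1),
          round(hyperbolic_discount(100, days), 1))
```

The hyperbolic curve is why the current mind overrules the planner for near-term rewards while happily endorsing the planner's allocations for distant days: the relative penalty for one day of delay shrinks as the delay grows.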
Few people have perfect self-control, and almost everybody knows that they could improve self-control. Converting a mental bucket into some kind of “safe” helps people act on behalf of their future. Think of the 401k (or other similar schemes) as a safe to enhance self-control.
Fairness
The idea of transactional value has many applications; one possible application is in the area of fairness. Imagine a store that has been out of a coveted toy for a month before Christmas and suddenly discovers one hidden in a corner the week before Christmas. The store manager announces that he will hold an auction, with the toy going to the highest bidder. Fair or not fair? About 75% of people think it unfair. But if the manager announces that the proceeds will go to charity, then about 80% think it fair. Here the context of what will happen to the money changes whether people find the situation fair, and probably guides whether they will participate or not. It may also cause the price to go higher. From an Econ's perspective, who gets the money is unimportant and should not change anybody's behavior.
In another example, a car dealer can't keep up with demand for a popular model. There is a waiting list, and the dealer raises the price by $200. Fair or unfair? 71% say unfair. Another dealer faces the same issue; this dealer typically sells cars $200 under list price and announces that it will charge list price until the shortage is over. 58% of people say this is fair. It is the same $200 increase for both sets of customers, but the perception of fairness differs.
Fairness might seem like an abstract concept – after all, who said life is fair? However, there is plenty of evidence that when people perceive unfair behavior, they punish it.
Managerial decisions
One of the routine tasks that managers do involves allocating resources like time or money. They often do this by means of planning, where people are asked how much time or money a task will take. For completely routine tasks that are done constantly, the estimates are fine and the plans work. But for tasks that are less routine, the estimates are typically far too optimistic. At least one issue is that the estimators use something called narrow framing: they look at the task specifically and without awareness of other circumstances. They forget that other people will have conflicts, that holidays will intervene, and so on. This narrow framing of the problem is a consequence of their insider view, which prevents them from seeing the bigger picture. Somebody who is uninvolved will look at the plan and see these problems more clearly because they have an outsider view. The best broad framing comes when you can use base rates to plan. Frequent tasks have good base rates, but for many managers some tasks occur so infrequently, or seem so unique, that it is hard to develop a sense of the base rate. Consequently, planning does not really work.
When the desired outcome of a task or project is uncertain, the planning problem becomes harder. This is an issue in projects related to innovation, organizational change, or strategy. Even the best plans do not guarantee the desired outcome. But most such decisions that involve uncertainty involve potential "losses". At a minimum, the present manager will experience costs, while the benefits (if any) will accrue to a future manager (maybe themselves). This is like the self-control concept of a current and future mind: what decision would our future mind want our current mind to make? Because we fear loss much more than we embrace gain, we may be biased to think our future self will wish we had not made the change. If you consider the observation that most innovation and cultural-change efforts are said to "fail", then the current mind will deduce that doing nothing is more likely to be satisfactory to the future mind. So you would expect managers to always choose the status quo over change, and most employees think their managers are risk averse, consistent with this expectation. And yet managers do choose change. Even more interesting, the incentives for success and failure are quite asymmetric. If a manager makes a decision that works out, they are likely to gain a modest reward, perhaps a promotion or a bonus. If the decision does not work out, they could lose their job and their reputation for competence.
Thaler offers an example from a company he consulted with. The company had 23 divisions, and he asked their senior managers the following: suppose you could invest in a project that had a 50% chance of delivering $2MM and a 50% chance of losing $1MM (expected value = $0.5MM). Twenty of the twenty-three managers rejected the opportunity. The CEO would have taken all 23 projects, because the company was almost certain to come out ahead overall even if some divisions lost money. But individual managers did not think about the global effect, only their local consequences. If the individual managers were Econs, they would make the investment, but the Humans in those roles would not. This makes the question of incentives very interesting. What incentives help the manager weigh the benefits and costs, and the big and small picture, when the decision must be made with current information?
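A back-of-the-envelope calculation (my own arithmetic, not the book's) shows why the CEO's view is so different from each division manager's: a single project loses money half the time, but a portfolio of 23 independent projects almost never ends up negative overall.

```python
import math

# Each project gains $2MM or loses $1MM with probability 1/2, so its
# expected value is 0.5 * 2 + 0.5 * (-1) = $0.5MM. The question is how
# often a portfolio of n such projects ends up with a net loss.

def prob_portfolio_loses(n=23, gain=2.0, loss=1.0):
    """P(total outcome < 0) when each of n independent projects gains
    `gain` or loses `loss` with probability 1/2 each (values in $MM)."""
    return sum(math.comb(n, wins)
               for wins in range(n + 1)
               if wins * gain - (n - wins) * loss < 0) / 2 ** n

print(round(prob_portfolio_loses(n=1), 2))   # 0.5: one project loses half the time
print(round(prob_portfolio_loses(n=23), 3))  # 0.047: the portfolio rarely loses
```

With all 23 projects, the company has roughly a 95% chance of coming out ahead, which is exactly the broad frame the individual managers, each seeing only one bet, could not adopt.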
The key, and the problem, is that incentives for decisions must be based on the use of the information available at the time of the decision. In other words, if the only incentives (rewards, and especially punishments) are based on results, then loss aversion will predominate. The problem of hindsight bias affects everyone. Five years from now, the boss will forget that they approved this failure and said it was a good idea at the time. The boss may also remember that they "always believed in" a success, when they had panned the investment at the time. One of the lessons of behavioral economics is that when "agents" make bad decisions, it is often because principals have created a "bad" environment. In the example where 20 of 23 managers rejected the investment that the CEO would accept, the CEO had created the wrong environment.
The combination of the last two paragraphs illustrates a contrast between Econs and Humans. A famous economist offered the definition of a coward as a person who would not take either side of a 2-to-1 bet. He turned to a colleague and offered him a bet as proof: they would flip a coin, and if the colleague called the flip right he would receive $200, but if he called it wrong he would pay $100. The colleague declined, because to him a $100 loss loomed larger than a $200 gain. But he went on to say that he would take 100 of those bets. The famous economist was Paul Samuelson, and this response struck him as inconsistent with economic theory as lived by an Econ. His logic helps explain some of the difference between the two mindsets, and it goes like this: suppose you had agreed to 100 flips, had done 99, and had a chance to decline the last one. Would you decline? The same logic that leads you to reject a single coin flip should make you reject the last coin flip. If you keep asking, one by one, whether you would stop after some number of flips, you eventually get back to the first coin flip. Taken one at a time, you don't flip coins. To quote Samuelson, "If it does not pay to do an act once, it will not pay to do it twice, thrice,…or at all." Ordinary people reject this logic: the offer of one isolated bet is different from the offer of 100 bets, and 100 isolated bets are not the same as a group of 100 bets. The problem for managers is that they are rarely presented with an option to make 100 bets; they get them one at a time. One at a time, managers are more subject to loss aversion. In other words, by framing the investment decision narrowly (one decision at a time), the manager is more likely to be influenced by loss aversion.
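My own quick check (not from the book) of why the aggregated offer feels so different: the chance of ending up behind shrinks dramatically as the number of +$200/−$100 coin flips grows.

```python
import math

# Each flip wins $200 or loses $100 with equal probability. How often
# do you end a sequence of n flips with a net loss?

def prob_behind(n_flips):
    """P(net result < 0) after n independent fair flips of the bet."""
    return sum(math.comb(n_flips, w)
               for w in range(n_flips + 1)
               if 200 * w - 100 * (n_flips - w) < 0) / 2 ** n_flips

for n in (1, 10, 100):
    print(n, prob_behind(n))
# A single bet loses half the time; 100 bets end behind well under
# 0.1% of the time.
```

Samuelson's backward-induction argument treats all of these as the same decision repeated, but the risk profile the Human actually faces is very different at n = 1 and at n = 100, which is why broad framing tames loss aversion.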
The book basically describes this problem but suggests that no solution is known yet. It is complicated, because some of the potential losses are not interchangeable: being responsible for a failure exacts an emotional cost that money does not exactly soothe.
Society
If economic theory were just the province of economists, the limited ability of classical economics to predict behavior would be a curiosity. However, classical theory has become a dominant force in government policy and politics (which are slightly different). For example, there is one general political philosophy based on the idea that the "market" consists of rational actors who work in their own self-interest, guided by Adam Smith's invisible hand. The associated assumption is that everybody has the same information and the same skill in understanding it. If you believe this is correct, then economic theory should be used to create policies that basically leave people to sort out their own choices (a relatively libertarian perspective). Alternately, if you believe that information is not equally available and that people lack the needed skills, you opt for a more paternalistic view: policies devised by experts may be more effective. Economics as practiced by Econs assumes that people are highly rational, self-interested at every time scale, and possessed of great willpower. Behavioral economics has shown that people can be irrational, have problems with self-control, and don't always act in their own best interests. The contest between these two views has influenced politics and policy for the last few decades. Generally, the contest has been framed and won by the perspective of the Econs. Thaler asks the question: if people are not Econs, do "Econ" policies cause big problems? Examples of non-Econ behavior include insufficient retirement savings (assuming that people want to retain an equivalent lifestyle after retirement), insufficient exercise and fiber consumption (assuming that a long and healthy life has utility), and withdrawal of home equity to support current consumption.
Thaler frequently mentions that there are many cases where classical economic theory is highly effective as a guide to policy, but that it is easily proven that everybody experiences Human moments. Whether in government or non-governmental organizations setting policy, there needs to be an understanding of how Humans act in order to nudge people in the right direction. In the libertarian-paternalistic frame, policy needs to be slightly paternalistic to nudge us towards the preferred economic outcome. Policy should nudge people to choose greater utility, but it should not force them to make the “right” decision.
Big deals
This book has a subtext that deals with scale. In brief, the vast majority of decisions that we make are small, with small consequences, good and bad. In our lives, we make few "big" decisions. It is likely that the accumulation of our small decisions has more impact than any of our big decisions. It is possible to learn to understand and make "better" decisions. But it may be best to think about how to make small nudges to our decisions, because the effect of small decisions adds up over time. Though the book mentions the following only in general terms, I will make it a bit more concrete. People know that if they eat too much (more than they use through activity), they will gain weight. In fact, longitudinal studies show that adults gain about 0.8 pounds per year over a 20-year study period (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3151731/). Work out the math, assuming the calories-in/calories-out concept, and the average daily over-consumption is about 7.5 Calories (or almost 0.4% of daily intake). This tiny imbalance, over time, has a big effect. Saving for retirement has a similar dynamic, involving modest amounts of investment over a long time. Some issues that develop in companies, non-profits, and society are the result of small decisions made long ago. These might be hard to uncover, because they are small. But the reward for thinking small like this may be that the required change is not dramatic. The retirement problem is actually fairly easy to solve. In contrast, the weight-gain problem is quite difficult to solve, because our physiology is not sensitive enough to detect a 1% deviation in calorie uptake (it may not be able to detect a 20% deviation).
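The arithmetic behind the roughly 7.5-Calorie figure can be worked out in a few lines. The 3500-Calories-per-pound rule of thumb and the 2000-Calorie daily intake are my own assumptions for this sketch, not numbers from the book or the cited study:

```python
# Back-of-envelope arithmetic for the daily calorie surplus implied by
# slow adult weight gain. The conversion factor and daily intake are
# assumed rule-of-thumb values, not figures from the book.

POUNDS_PER_YEAR = 0.8    # average adult gain from the cited 20-year study
KCAL_PER_POUND = 3500    # common rule of thumb for body fat
DAILY_INTAKE = 2000      # assumed average daily intake, in Calories

daily_surplus = POUNDS_PER_YEAR * KCAL_PER_POUND / 365
print(round(daily_surplus, 1))                        # 7.7 Calories/day
print(round(100 * daily_surplus / DAILY_INTAKE, 2))   # 0.38% of daily intake
```

Under these assumptions the daily imbalance comes out a little under 8 Calories, consistent with the "about 7.5 Calories, almost 0.4% of intake" figure in the text.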
Too often, leaders feel the need to promote a change by suggesting that the benefits will be immediate and easily felt (and the costs barely felt). The most important changes may be just the opposite: the immediate effects might seem quite onerous and the benefits imperceptible. An Econ would have no problem maintaining discipline while the benefits accrue. Humans don't tend to be that dedicated.
Comment and interpretation:
- It isn't hard to avoid an explicit discussion of what people feel is at risk from a bad decision; nobody wants to dwell on that consciously. The book spent very little time on cognitive dissonance, but it made me wonder whether this is how managers ever choose to take a risk. It is easy to take a risky decision if you won't need to explain the problems later. In organizations where managers change roles every 3-4 years, it is easy to make a decision midway through your tenure, confident that you will be somewhere else when the results need explaining. You may not get the rewards of a good outcome, but you avoid the risk. How many big decisions are made by managers who know they are moving to new roles?
- There is an interesting tension connected to the concept of accountability. We instinctively want people to be accountable for the consequences of their decisions. In a business context, we think that it is unfair for somebody to make a decision that does not work out which causes others to lose their jobs, but the decision maker does not. We want the decision maker to face the consequences. Accountability, in this context, means facing the same consequences. But this increases loss aversion, which makes it more likely that managers won’t make decisions until they are forced to, which may be too late. The opportunity cost of failing to innovate or change can be greater than the actual cost of maintaining the status quo.
- A lot of resistance to behavioral studies is based on the idea that most experiments examine the behavior of a single person making a decision; if those people were put into a market situation, they would make the decisions theory says they should. Thaler recounts how Amos Tversky rebutted such an argument by pointing out that economists' wives, students, and acquaintances provide economists with many examples of bad decision making, yet economic models predict that these same people will make the right decisions. The economist replied that the "invisible hand of the market" would teach them the right behavior. Thaler calls this the invisible handwave argument: why hadn't the market already taught the wives, students, etc. the appropriate approaches? The reason I included this anecdote was not Adam Smith's metaphor but the "invisible handwave" itself. The handwave is widely applied in many complex subjects. A specific problem arises for which a common solution is suggested. When someone observes that the solution has a significant defect, the response is often some version of, "You don't understand, [insert invisible-handwave generalization]." The appeal to the generalization is supposed to justify the misapplication of the generalization. People fall for this all the time. So add this to your mental models as a way to diagnose something that has not been thought through: if they can't give a specific justification, there may not be one.
- Mental accounting is an interesting topic because economic theory says that it should not exist. For example, many people create two (or more) savings accounts: one for general expenses and another for a special expense (vacation, car, etc.). Money in the special account won't be used for anything else. To an Econ, money is money, and having two accounts is inefficient. To a Human, the two accounts represent entirely different things. To a great extent, this is the whole point of budgeting: to create a number of mental buckets for different kinds of expenses. What turns out to be interesting is that people will use different logic to make decisions about different buckets.
- While I was reading this book, another book became available: The Undoing Project by Michael Lewis. That book examines the partnership between Daniel Kahneman and Amos Tversky, who laid much of the psychological foundation for behavioral economics by showing that people are neither natural statisticians nor unemotional decision makers. The Lewis book really discusses their partnership more than their psychological insights. I decided that it would be almost impossible to summarize that book, but that I could comment on it from the perspective of partnership. The work jointly carried out by Tversky and Kahneman was revolutionary, but it is fairly clear that neither man would have created it independently, because of their differing interests. Tversky was an optimist with strength in mathematics; Kahneman was a pessimist with strength in observation. Each became established before they met, and their first meeting involved a seminar given by Tversky that Kahneman critiqued severely. For Tversky it was one of the first times he had ever been found lacking, and he reveled in the mind that could find these flaws. Kahneman was a perfectionist by orientation, Tversky the generalist. Kahneman wanted to stick to the world of psychology, and Tversky wanted to apply their work broadly. In many ways they were opposites in personality, but they were united by an interest in the flaws of human decision making. They studied their own reactions to certain decisions, noted their own mistakes, and created studies to show whether other people made the same mistakes. My interest in reading the book was in understanding the partnership they forged: what made it special, and what broke it up eventually? As for what created it, the key item seems to be their common interest in predictable (in the probability sense) error making. Because they were such different people, this common interest used their divergent perspectives to create a sense of "important progress".
They could feel that they were doing something important that used their full abilities. This created a powerful feedback loop that overcame their personality differences. But once they had worked through their common interest, they struggled to find meaning in working together. They had different ideas about what to do next, and the absence of a common goal allowed the contradictions in their partnership to dominate. Lessons for us include the idea that finding the right person and finding the right work are both essential, with the right work probably more important. A second lesson is that such partnerships have a limited life, because interests diverge over time. If we see such collaborations as transient (even if they last years), we will find their end less painful. In the business world, almost every organization seeks better collaboration between people. This book makes clear that a collaboration of this type is complex, unpredictable, and a force of nature in itself. It is very exclusive and opaque. The vast majority of their work was done by the two of them while absolutely alone. They would go into a room and shut the door. They rarely invited people into their work, and usually only one of the pair would talk to that person. They could not (or would not) explain what they did together; they just showed the work. Businesses rarely allow only two people to work together, to exclude others, or to redirect their own work. It is clearly the case that Kahneman and Tversky were special individuals. Tversky thought Kahneman was the greatest living psychologist. The joke went around that there was a new intelligence test: if you were in the same room as Tversky, the faster you realized that Tversky was the smartest person in the room, the smarter you were. But what made them a pair for all time was their submersion in the partnership around a topic, to the point that they could not say who had done what.
The primary thesis of “Misbehaving” is that Econs are poor models of real humans when it comes to economics. Kahneman and Tversky showed that it was not just economics where humans acted less than logically.
*text in italics is directly quoted from the book