Don’t believe everything you think
Thomas E. Kida
This is a book about being skeptical in the classic sense. A skeptic in this sense is not a cynical rejectionist, but someone who is comfortable with the idea that there is not yet enough good evidence to believe that something is true – the middle ground between strong belief and disbelief. One of the wonderful features of the book is that it lays out its case in the introduction and then elaborates over the remaining chapters (I did not finish the book).
In brief, we make six kinds of mistakes in our thinking, and becoming aware of them is a kind of immunization against them. The book does point out that making a good decision does not guarantee a good outcome: a 90% chance of success includes a 10% chance of failure. The six mistakes are:
- We prefer stories to statistics*
- We seek to confirm
- We rarely appreciate the role of chance and coincidence in life
- We can misperceive our world
- We oversimplify
- We have faulty memories
We prefer stories to statistics
Stories trigger our emotions and gain immediacy; this emotional reaction is both stronger and less conscious than our reaction to statistics. That would be a relatively minor issue if people were comfortable with numerical and statistical thinking, but many people dislike anything math-like, which does not help. Even those who are moderately comfortable with numbers may not grasp what statistics actually mean: Dwight Eisenhower was reportedly dismayed to find out that half of all school children were below average. We are also influenced by our own experiences: doctors who treat people with lung disease are more likely to quit smoking than doctors in general practice.
Increasingly, data are available to inform decisions, and statistical methods are applied to guide interpretation. But if a friend or family member has an experience that contradicts the data, we favor their story over the data. The same occurs in business.
We seek to confirm
We often form opinions on topics very early and then notice facts and stories that confirm them. We may not even be conscious of holding the opinion during this process, and so are less aware of the skewed data collection. This is one example of priming, where we notice things that correspond to thoughts we are already having. It is also part of our drive to demonstrate that we are right about the way the world works.
We rarely appreciate the role of chance and coincidence in life
Lots of events are driven by chance, but people are more comfortable with a mechanistic view of things: everything happens for a reason. If something good or bad happens, we look at the events immediately preceding it and attempt to find the causes. We then repeat or avoid the “cause”, depending on the result. This is the basis for superstition. It is also the basis for advertising related to mutual funds (e.g., “beats the average fund for the last 3 years”, which is not that unlikely an event if results are random – see the sketch below). This is a side effect of the very beneficial ability to find patterns in events and make predictions about the future.
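To make the mutual-fund point concrete, here is a minimal sketch (mine, not the book's) that assumes each fund independently has a 50% chance of beating the average in any given year. With 1,000 funds, about 125 should post a three-year winning streak on luck alone – plenty of material for an advertisement.

```python
import random

random.seed(42)  # reproducible run

n_funds, n_years = 1_000, 3

# Count funds that "beat the average" every year purely by chance,
# assuming an independent 50/50 outcome each year.
streaks = sum(
    all(random.random() < 0.5 for _ in range(n_years))
    for _ in range(n_funds)
)

# Expected value: 1000 * (1/2)**3 = 125 funds.
print(f"{streaks} of {n_funds} funds beat the average {n_years} years running")
```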
We can misperceive our world
We strongly believe that we “know what we saw”, but we often miss things that are right in front of our eyes. We may be inattentive or distracted, or may simply misunderstand what we see or hear. We see and hear what we want or expect, and fail to notice what we do not want or expect to find. At the extreme, we have good imaginations, and those imaginations help us notice things that do not exist.
We oversimplify
Faced with a huge amount of information almost constantly, we must simplify to make choices and act. We simplify using a number of approaches, including ignoring information and substituting simpler questions for more complex ones. Simplifying strategies may not give the best decision, but they are often “good enough.” In some circumstances, however, they lead us to genuinely bad decisions. This is an area where our intuitive statistical thinking can lead us badly astray.
We have faulty memories
We do not have reliable, detailed memories, though we think that we do. In fact, we are completely convinced of the reliability of the memories we do have. We selectively notice and remember events, we create ad hoc memories to supply missing details, we substitute assumptions for memories, and we are subject to suggestion. The book observes that memory is constructive, meaning we do not replay our memories – we construct them as we need them. The point is that we risk making bad decisions when we treat our memories as facts.
With this summary complete, the author turned his attention to pseudo-science (ESP, alien abduction, etc.) to illustrate these thinking mistakes. I decided that a summary of this was uninteresting, but there were numerous interesting quotes which I will include.
- It is not disbelief that is dangerous to our society, it is belief. – George Bernard Shaw
- Man is a credulous animal, and must believe something; in the absence of good grounds for belief, he will be satisfied with bad ones. - Bertrand Russell
- Keeping an open mind is a virtue, but not so open that your brains fall out. - Bertrand Russell & James Oberg
- It is the mark of an educated mind to be able to entertain a thought without accepting it. – Mark Twain
- Million to one odds happen eight times a day in New York. – Penn Jillette
- It is likely that unlikely things should happen. – Aristotle
- The eye sees only what the mind is prepared to comprehend. - Henri-Louis Bergson
- You can’t depend on your eyes when your imagination is out of focus. – Mark Twain
- Prediction is very difficult, especially if it is about the future. – various
- Some people think of the glass as half full. Some people think of the glass as half empty. I think of the glass as too big. – George Carlin
- I have a photographic memory, but once in a while I forget to take off the lens cap. – Milton Berle
- I never know how much of what I say is true. – Bette Midler
- Our knowledge can only be finite, while our ignorance must necessarily be infinite. – Karl Popper
- The fundamental cause of trouble in the world today is that the stupid are cocksure while the intelligent are full of doubt. – Bertrand Russell
Comment and interpretation
- Ask any adult whether they are below average in intelligence, looks or professional ability and you are unlikely to hear that they are. Numerous studies suggest that almost all of us think we are above average. A statistical perspective would insist that most people are average – or more precisely, that they can’t be distinguished from average with much confidence. But people possess an amazing array of skills, perspectives, intuitive talents and experience, so it is actually very likely that in some respect they are correct: they are above average at X. This is an important insight into any number of decisions that we make. Nobody exactly fits “their” stereotype, because of their unique attributes. Exceptional performance in one area does not predict uniformly exceptional performance.
- Intuitive statistics seem to be a mash-up of “common sense” and statistical terms without any actual statistical methods. Books present various puzzles related to false positives and negatives. For example, you are ill and a test comes back with a positive identification of a virus. “The test is 100 percent accurate in indicating a person has the virus when they actually have it, but it also says a person has the virus when they don’t have it 5 percent of the time.” You also learn that one in five hundred people have the virus. The first fact could cause you to panic, but the second should provide relief. Combining these facts mathematically (conditional probability, i.e. Bayes’ theorem) indicates that you have about a 4% chance of having the virus (see the sketch below). This is the sort of outcome that we are poor at intuiting, but it comes up frequently. In fact, this illustrates another form of misperception: we tend to treat each decision as unique rather than as one member of a family of similar decisions. We calculate the probability of success for a new product, but do not take into account the overall probability of new product success. Suppose you calculate the probability of success for a particular project to be 80%. How does it change your perspective to know that the historical probability of success for similar projects is 5%? I’m sure that most innovators do not want to know the answer.
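The 4% figure is easy to verify. A minimal sketch (mine, not the book's) applying Bayes' theorem to the numbers quoted above:

```python
# The book's numbers: perfect sensitivity, 5% false-positive rate, 1-in-500 prevalence.
sensitivity = 1.0           # P(positive | virus)
false_positive_rate = 0.05  # P(positive | no virus)
prevalence = 1 / 500        # P(virus)

# Total probability of a positive test, over both true and false positives.
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)

# Bayes' theorem: P(virus | positive).
p_virus_given_positive = sensitivity * prevalence / p_positive

print(f"P(virus | positive test) = {p_virus_given_positive:.1%}")  # ~3.9%
```

Almost all positive tests come from the 5% false-positive rate applied to the healthy 499 people out of 500, which is why the answer lands so far from the intuitive “100 percent accurate” reading.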
*text in italics is directly quoted from the book