The Great Mental Models: General Thinking Concepts, Shane Parrish (8/10)
A short introduction to 9 popular mental models and 3 supporting ideas, designed to improve our ability to understand the world and look at situations through different lenses

Rating: 8/10
Read More on Goodreads
🤔 Pre-Read Exercise: What Do I Know About This Topic/Book?
I've listened to Shane Parrish's podcast and followed his blog for years. He typically interviews subject matter experts about their mental models or writes about his own mental models. As a result, I'm familiar with the author's thinking going into the book, and know the broad theory behind mental models from previous reading too.
🚀 The Book in 3 Sentences
- This book is a short introduction to 9 popular mental models and 3 supporting ideas.
- The mental models presented are general thinking concepts, designed to improve our ability to understand the world and look at situations through different lenses.
- The book consistently draws on interesting real-world examples in order to bring theoretical concepts to life and help the reader apply these concepts to their own lives.
🎨 Impressions
The book is very short and concise. As a result, the mental models presented were quite basic and the insights weren't particularly profound. Moreover, I already had an overarching understanding of each of the models presented in the book, so I didn't find it especially valuable.
Nevertheless, I found the book interesting and engaging. The real-life examples made it fun, and the illustrations were helpful too. It felt more like a storybook than a non-fiction book on mental models. Very light and easy.
I'd have loved to see more drawings and visualisations, and for them to be more creative… like the ones we see in Eric Jorgenson's The Almanack of Naval Ravikant.
🥰 Who Would Like It?
It's a short and basic book so I wouldn't have any trouble recommending it to most people. Particularly considering the vast number of historical examples it draws upon - it's basically 50% a history book - there's something for everyone.
It's definitely a must-read if you're into universal multi-disciplinary thinking. And I'd recommend it to anyone questioning their decision-making process or looking to understand more about how we think and how we can improve our thinking. Self-improvement junkies and people in business and leadership roles in particular would enjoy this book.
✍️ How the Book Changed Me
It's helped me bring mental models I was aware of but had neglected back to the front of my mind. It's also reminded me of the importance of mental models: we should not only be more conscious of our decisions, but also track and improve our decision-making processes going forward.
💬 My Favourite Quotes
Thinking better isn't about being a genius. It is about the processes we use to uncover reality and the choices we make once we do.
"A man who has committed a mistake and doesn't correct it, is committing another mistake."
—Confucius
To the man with only a hammer, everything starts looking like a nail.
Learn from the mistakes of others. You can't live long enough to make them all yourself.
Trend is not destiny. Even if we can derive and understand certain laws of human biological nature, the trends of history itself are dependent on conditions, and conditions change.
Creativity is intelligence having fun.
"The test of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time and still retain the ability to function. One should, for example, be able to see that things are hopeless yet be determined to make them otherwise."
—F. Scott Fitzgerald
📒 Summary + Notes
The key to understanding the world is to build a latticework of mental models.
Mental models are chunks of knowledge from different disciplines that can be simplified and applied to better understand the world. They help identify what information is relevant in any given situation, and the most reasonable parameters to work in.
"You only think you know, as a matter of fact. And most of your actions are based on incomplete knowledge and you really don't know what it is all about, or what the purpose of the world is, or know a great deal of other things. It is possible to live and not know."
—Richard Feynman
Acquiring Wisdom
In life and business, the person with the fewest blind spots wins. Removing blind spots means we see, interact with, and move closer to understanding reality.
This book is about avoiding problems. This often comes down to understanding a problem accurately and seeing the secondary and subsequent consequences of any proposed action.
"I don't want to be a great problem solver. I want to avoid problems—prevent them from happening and doing it right from the beginning."
—Peter Bevelin
Thinking better isn't about being a genius. It's about the processes we use to uncover reality and the choices we make once we do.
Mental models describe the way the world works. They shape how we think, how we understand, and how we form beliefs. Largely subconscious, mental models operate below the surface. They are how we think, reason, infer causality, match patterns, and draw analogies.
A mental model is a representation. We can't keep all of the details of the world in our brains, so we use models to simplify the complex into understandable and organizable chunks.
The models presented in this book are general thinking concepts. They will improve your understanding of the world and improve your ability to look at a situation through different lenses, each of which reveals a different layer.
The ability to draw on a repertoire of mental models can help us minimize risk by understanding what forces are at play. Consequences don't have to be a mystery.
Not having the ability to shift perspective by applying knowledge from multiple disciplines makes us vulnerable. Mistakes can become catastrophes whose effects keep compounding, creating stress and limiting our choices.
Keeping your feet on the ground
When understanding is separated from reality, we lose our powers. Understanding must constantly be tested against reality and updated accordingly. This isn't a box we can tick, a task with a definite beginning and end, but a continuous process.
The only way you'll know the extent to which you understand reality is to put your ideas and understanding into action. If you don't test your ideas against the real world, how can you be sure you understand?
Getting in our own way
The biggest barrier to learning is ourselves. It's hard to understand a system that we're part of because we have blind spots, where we can't see what we aren't looking for, and don't notice what we don't notice.
Our failures to update from interacting with reality spring primarily from three things:
- Not having the right perspective
- Ego-induced denial
- Distance from the consequences of our decisions
The limits of our perception: We must be open to other perspectives if we want to understand the results of our actions. We rarely have all the information despite feeling that we do.
Ego: We tend to have too much invested in our opinions of ourselves to see the world's feedback—the feedback we require in order to update our beliefs about reality. This creates a profound ignorance. There are two notable reasons for this. First, we're so afraid of what others will say that we fail to publicly test our ideas and subject them to criticism. Second, if we do test our ideas and they are criticized, our ego steps in to protect us. We defend our ideas instead of upgrading them.
Distance: The further we are from the results of our decisions, the easier it is to keep our current views rather than update them.
"A man who has committed a mistake and doesn't correct it, is committing another mistake."
—Confucius
We tend to undervalue elementary ideas and overvalue complicated ones. The Great Mental Models draw on elementary principles, ideas from multiple disciplines that form a time-tested foundation.
"Most geniuses—especially those who lead others—prosper not by deconstructing intricate complexities but by exploiting unrecognized simplicities."
—Andy Benoit
Understanding is not enough
Understanding is only useful when we adjust our behavior and actions. The Great Models are actionable insights that can be used to effect positive change in your life.
In the real world, you will either understand and adapt to find success, or you will fail.
We must understand when ego serves or hinders us. Wrapping ego up in outcomes instead of in ourselves makes it easier to update our views.
We optimize for short-term ego protection over long-term happiness. Increasingly, our understanding of things becomes black and white rather than shades of grey. When things happen in accord with our view of the world, we naturally think they are good for us and others. When they conflict with our views, they are wrong and bad. But the world is smarter than we are and it will teach us all we need to know if we're open to its feedback.
But all models are flawed in some way. Some are reliable in some situations but useless in others. Some are too limited in their scope. Others havenât been tested and challenged. Some are wrong.
For every situation, we must determine which models are reliable and useful. We must also discard or update the unreliable or flawed ones, because these come with a cost.
The power of acquiring new models
While we want accurate models, we also want a wide variety of models to uncover whatâs really happening. The key is variety.
An engineer often thinks in systems. A psychologist thinks in incentives. A business person might think in opportunity cost and risk-reward. Through their disciplines, these people see part of the situation. None of them see the entire situation unless they think in a multidisciplinary way. They have blind spots. Big blind spots.
"To the man with only a hammer, everything starts looking like a nail."
What Can the Three Buckets of Knowledge Teach Us About History?
"Every statistician knows that a large, relevant sample size is their best friend. What are the three largest, most relevant sample sizes for identifying universal principles? Bucket one is inorganic systems, which are 13.7 billion years in size. It's all the laws of math and physics, the entire physical universe. Bucket two is organic systems, 3.5 billion years of biology on Earth. Bucket three is human history, you can pick your own number, I picked 20,000 years of recorded human behavior. Those are the three largest sample sizes we can access and the most relevant."
—Peter Kaufman
The larger and more relevant the sample size, the more reliable the model based on it is. But the key to sample sizes is to look for them not just over space, but over time.
Removing blind spots means thinking through the problem using different lenses or models. When we do this the blind spots slowly go away and we gain an understanding of the problem.
Expanding your latticework of mental models
A latticework is an excellent way to conceptualize mental models. It demonstrates the value of interconnecting knowledge.
The world is not categorised by discrete disciplines. We only break it down that way because it makes it easier to study it. But once we learn something, we must put it back into the complex system in which it occurs. We must see where it connects to other bits of knowledge, to build our understanding of the whole.
"I think it is undeniably true that the human brain must work in models. The trick is to have your brain work better than the other person's brain because it understands the most fundamental models: ones that will do most work per unit. If you get into the mental habit of relating what you're reading to the basic structure of the underlying ideas being demonstrated, you gradually accumulate some wisdom."
—Charlie Munger
You must be deliberate about choosing the models you will use in a situation. As you use them, a great practice is to record and reflect. This helps you improve at both choosing models and applying them. Take the time to notice how you applied them, what the process was like, and what the results were. Over time, you will develop your knowledge of which situations are best tackled through which models.
Don't give up on a model if it doesn't help you right away. Learn more about it, and figure out why it didn't work. Maybe you have to improve your understanding. Or there were aspects of the situation that you didn't consider. Or your focus was on the wrong variable.
Keep a journal. Write your experiences down.
When you identify a model at work in the world, write that down too. Then you can explore the applications you've observed, and start being more in control of the models you use every day. For instance, instead of falling victim to confirmation bias, you will be able to step back and see it at work in yourself and others. Once you get practice, you will start to naturally apply models as you go through your life, from reading the news to contemplating a career move.
General Thinking Concepts
1. The Map is not the Territory
The only way we can navigate the complexity of reality is through some abstraction like a map. When we read the news, we're consuming abstractions created by other people.
We can lose the specific and relevant details that were distilled into an abstraction. And, because we often consume these abstractions as gospel, without doing the mental work ourselves, it's tricky to see when the map no longer agrees with the territory. We inadvertently forget that the map is not reality.
Frequently, we don't remember that maps and models are abstractions. We thus fail to understand their limits. We forget there is a territory that exists separately from the map. We run into problems when our knowledge becomes of the map, rather than of the actual underlying territory it describes.
When we mistake the map for reality, we start to think we have all the answers. We create static rules or policies that deal with the map but forget that we exist in a constantly changing world. When we close off or ignore feedback loops, we don't see that the terrain has changed and we dramatically reduce our ability to adapt to a changing environment. Reality is messy and complicated, so our tendency to simplify it is understandable. However, if the aim becomes simplification rather than understanding, we start to make bad decisions.
We can't use maps as dogma. Maps and models are not meant to live forever as static references. The world is dynamic. As territories change, our tools to navigate them must be flexible enough to handle a wide variety of situations or adapt to the changing times. If the value of a map or model is related to its ability to predict or explain, then it needs to represent reality. If reality has changed, the map must change.
This is a double problem.
- Having a general map, we may assume that if a territory matches the map in a couple of respects it matches the map in all respects.
- We may think adherence to the map is more important than taking in new information about a territory.
One of the main values of using models as maps is in the thinking that is generated. They are tools for exploration, not doctrines to force conformity.
"Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful."
—George Box
To use a map or model as accurately as possible, we should take three important considerations into account:
- Reality is the ultimate update.
- Consider the cartographer.
- Maps can influence territories.
Reality is the ultimate update: When we enter new and unfamiliar territory it's nice to have a map on hand. But territories change, sometimes faster than the maps and models that describe them. We must update them based on our experiences in the territory. That's how good maps are built: feedback loops created by explorers. We can think of stereotypes as maps. Sometimes they are useful, but they can also be dangerous. We must not forget that people have far more territory than a stereotype can represent.
Consider the cartographer: Maps are not purely objective. They reflect their creator's values, standards, and limitations.
Conclusion
Maps are valuable tools to pass on knowledge. But, in using maps, abstractions, and models, we must be aware of their limitations. While they help us understand and relate to the world around us, they are flawed, and we must think beyond the map in order to think a few steps ahead.
2. Circle of Competence
"I'm no genius. I'm smart in spots—but I stay around those spots."
—Thomas Watson
We have blind spots when ego drives what we undertake. If you know what you understand, you know where you have an edge over others. When you are honest about where your knowledge is lacking you know where you are vulnerable and where you can improve. Understanding your circle of competence improves decision-making and outcomes.
If you don't have at least a few years and a few failures under your belt, you cannot consider yourself competent in a circle.
Within our circles of competence, we know exactly what we don't know. We can make decisions quickly and accurately. We possess detailed knowledge of additional information we might need to make a decision with full understanding, or even what information is unobtainable. We know what is knowable and what is unknowable and can distinguish between the two.
A circle of competence cannot be built quickly. It isn't the result of taking a few courses or working at something for a few months. It requires more than skimming the surface.
In Alexander Pope's poem "An Essay on Criticism," he writes:
"A little learning is a dangerous thing;
Drink deep, or taste not the Pierian spring:
There shallow draughts intoxicate the brain,
And drinking largely sobers us again."
There is no shortcut to understanding. Building a circle of competence takes years of experience, of making mistakes, and of actively seeking out better methods of practice and thought.
How do you build and maintain a circle of competence?
You must never take your circle of competence for granted. A circle of competence is not a static thing. The world is dynamic. Knowledge gets updated, and so too must your circle.
There are three key practices needed in order to build and maintain a circle of competence: curiosity and a desire to learn, monitoring, and feedback.
First, you must be willing to learn. Learning comes when experience meets reflection. You can learn from your own experiences. Or from the experience of others through books, articles, and conversations. Learning everything on your own is costly and slow. Learning from the experiences of others is much more productive. You must always approach your circle with curiosity, seeking information that can help you expand and strengthen it.
"Learn from the mistakes of others. You can't live long enough to make them all yourself."
—Anonymous
Second, you must monitor your track record in areas in which you have, or want to have, a circle of competence. And you must have the courage to monitor honestly so the feedback can be used to your advantage.
We have difficulty with overconfidence. Studies show most of us are much worse drivers, lovers, and managers (and many other things) than we think we are. We have a problem with honest self-reporting.
Keeping a journal of your own performance is the easiest and most private way to give yourself feedback. Journals allow you to step out of your automatic thinking and ask yourself: What went wrong? How could I do better? Monitoring your own performance allows you to see patterns that you simply couldn't see before. This type of analysis is painful for the ego, which is also why it helps build a circle of competence. You can't improve if you don't know what you're doing wrong.
Finally, you must occasionally solicit external feedback. This helps build a circle, but is also critical for maintaining one.
A lot of professionals have an ego problem: their view of themselves does not line up with the way other people see them. Before people can change they need to know these outside views. We need to go to people we trust, who can give us honest feedback about our traits. These people are in a position to observe us operating within our circles, and are thus able to offer relevant perspectives on our competence. Another option is to hire a coach.
It is extremely difficult to maintain a circle of competence without an outside perspective. We usually have too many biases to solely rely on our own observations. It takes courage to solicit external feedback, so if defensiveness starts to manifest, focus on the result you hope to achieve.
How do you operate outside a circle of competence?
Part of successfully using circles of competence includes knowing when we are outside them—when we are not well equipped to make decisions. We can't be inside a circle of competence in everything, so what do we do?
There are three parts to successfully operating outside a circle of competence:
- Learn the basics of the realm you're operating in, while acknowledging you're not in a circle of competence.
- Talk to someone whose circle of competence in the area is strong. Do a bit of research and define questions you need to ask, and what information you need, to make a good decision. Furthermore, when you need the advice of others, especially in higher stakes situations, ask questions to probe the limits of their circles. Then ask yourself how the situation might influence the information they choose to provide you.
- Use a broad understanding of the basic mental models of the world to augment your limited understanding of the field in which you find yourself outside a circle of competence.
1. Supporting Idea: Falsifiability
Karl Popper wrote, "A theory is part of empirical science if and only if it conflicts with possible experiences and is therefore in principle falsifiable by experience." The idea here is that if you can't prove something wrong, you can't really prove it right.
In a true science, as opposed to a pseudo-science, the following statement can be easily made: "If x happens, it would show demonstrably that theory y is not true." We can then design an experiment to figure out if x actually does happen.
"A theory is part of empirical science if and only if it conflicts with possible experiences and is therefore in principle falsifiable by experience."
—Karl Popper
Trend is not destiny. Even if we can derive and understand certain laws of human biological nature, the trends of history itself are dependent on conditions, and conditions change.
Therefore, we must prepare more for the extremes allowable by physics rather than what has happened until now.
Applying the filter of falsifiability helps us sort through which theories are more robust. If they can't ever be proven false because we have no way of testing them, then the best we can do is try to determine their probability of being true.
3. First Principles Thinking
First principles thinking is one of the best ways to reverse-engineer complicated situations and unleash creative possibility. It helps clarify complicated problems by separating the underlying ideas or facts from any assumptions based on them. What remains are the essentials. If you know the first principles of something, you can build the rest of your knowledge around them to produce something new.
We want to identify the principles in a situation to cut through the dogma and the shared belief. There are two techniques we can use: Socratic questioning and the Five Whys.
Socratic questioning generally follows this process:
- Clarifying your thinking and explaining the origins of your ideas. (Why do I think this? What exactly do I think?)
- Challenging assumptions. (How do I know this is true? What if I thought the opposite?)
- Looking for evidence. (How can I back this up? What are the sources?)
- Considering alternative perspectives. (What might others think? How do I know I am correct?)
- Examining consequences and implications. (What if I am wrong? What are the consequences if I am?)
- Questioning the original questions. (Why did I think that? Was I correct? What conclusions can I draw from the reasoning process?)
The Five Whys is a method rooted in the behavior of children who repeatedly ask "why?". We ask "Why?" five times. The Five Whys is about systematically delving further into a statement or concept so that you can separate reliable knowledge from assumption. If your "whys" result in a statement of falsifiable fact, you have hit a first principle. If they end up with a "because I said so" or "it just is", you know you have landed on an assumption that may be based on popular opinion, cultural myth, or dogma. These are not first principles.
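To make the drill-down concrete, here's a minimal sketch in Python (a hypothetical chain of my own, not from the book) that walks a claim down five whys and checks whether it bottoms out in a testable claim or in dogma:

```python
# A hypothetical Five Whys chain, represented as (question, answer) pairs.
# Keep asking "why?" until the answer is either a falsifiable fact
# (a first principle) or a dogmatic stopper (an assumption).
DOGMATIC_STOPPERS = {"because i said so", "it just is"}

chain = [
    ("Why do we ship releases on Fridays?", "Because the team has always done it that way."),
    ("Why has the team always done it that way?", "Because the old deploy tool only ran on Fridays."),
    ("Why did the tool only run on Fridays?", "Because weekend traffic is lowest."),
    ("Why does low traffic matter?", "Because fewer users are affected if a deploy fails."),
    ("Why would fewer affected users matter?", "Failure impact scales with active users - measurable, testable."),
]

for depth, (question, answer) in enumerate(chain, start=1):
    print(f"{depth}. {question}\n   -> {answer}")

final_answer = chain[-1][1].lower()
if any(stopper in final_answer for stopper in DOGMATIC_STOPPERS):
    print("Landed on dogma: an assumption, not a first principle.")
else:
    print("Landed on a testable claim: a candidate first principle.")
```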
4. Thought Experiment
Experimenting to discover the full spectrum of possible outcomes gives you a better appreciation for what you can influence and what you can reasonably expect to happen.
Thought experiments are tremendously useful in the following areas.
- Imagining physical impossibilities
- Re-imagining history
- Intuiting the non-intuitive
Imagining physical impossibilities: this tool helps us solve problems with intuition and logic that cannot be demonstrated physically. While Einstein used this tool, it doesn't only apply to physics. When we say "if money were no object" or "if you had all the time in the world," we are asking someone to conduct a thought experiment. Actually removing that variable (money or time) is physically impossible. But detailing the choices we'd make in these alternate realities is what leads to insights regarding what we value in life and where to focus.
Re-imagining history: If Y happened instead of X, what would the outcome have been? Would the outcome have been the same? As popular—and generally useful—as counter- and semi-factuals are, they are also the areas of thought experiment with which we need to use the most caution. Why? Because history is what we call a chaotic system. A small change in the beginning conditions can cause a very different outcome down the line. This is where the rigor of the scientific method is indispensable if we want to draw conclusions that are actually useful.
Reduce the Role of Chance
An example of this is the famous "veil of ignorance" proposed by philosopher John Rawls in his influential A Theory of Justice. In order to figure out the most fair and equitable way to structure society, he proposed that the designers of said society operate behind a veil of ignorance. This means that they could not know who they would be in the society they were creating. If they designed the society without knowing their economic status, their ethnic background, talents and interests, or even their gender, they would have to put in place a structure that was as fair as possible in order to guarantee the best possible outcome for themselves.
Our initial intuition of what is fair is likely to be challenged during the "veil of ignorance" thought experiment. When confronted with the question of how best to organize society, we have a general feeling that it should be fair. But what exactly does this mean? We can use this thought experiment to test the likely outcomes of different rules and structures to come up with an aggregate of "most fair."
We need not be constructing the legislation of entire nations for this type of thinking to be useful. Think, for example, of a companyâs human resources policies on hiring, office etiquette, or parental leave. What kind of policies would you design or support if you didnât know what your role in the company was? Or even anything about who you were?
Conclusion
Thought experiments tell you about the limits of what you know and the limits of what you should attempt. In order to improve our decision-making and increase our chances of success, we must be willing to probe all of the possibilities we can think of. Thought experiments are not daydreams. They require both rigor and work. But the more you use them, the more you understand actual cause and effect, and the more knowledge you have of what can really be accomplished.
2. Supporting Idea: Necessity and Sufficiency
We often make the mistake of assuming that having some necessary conditions in place means that we have all of the sufficient conditions in place for our desired event or effect to occur.
In mathematics they call these sets. The set of conditions necessary to become successful is a part of the set that is sufficient to become successful. But the sufficient set itself is far larger than the necessary set. Without that distinction, it's too easy for us to be misled by the wrong stories.
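As a rough illustration, here's a toy sketch in Python (my own example, not the book's): every sufficient bundle of conditions contains the necessary ones, but the necessary ones alone leave a gap.

```python
# Toy illustration: necessary vs. sufficient conditions for "success".
# Necessary: conditions present in every successful case.
# Sufficient: a full bundle of conditions that guarantees the outcome.
necessary = {"intelligence", "hard work"}
sufficient = {"intelligence", "hard work", "timing", "luck", "capital"}

# Every sufficient set contains the necessary conditions...
assert necessary.issubset(sufficient)

# ...but having only the necessary conditions is not the same as
# having a sufficient bundle: the gap is what success stories hide.
missing = sufficient - necessary
print(f"Necessary but not sufficient; still missing: {missing}")
```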
5. Second-Order Thinking
Almost everyone can anticipate the immediate results of their actions. This type of first-order thinking is easy and safe. But it also ensures you get the same results as everyone else.
Second-order thinking is thinking ahead and thinking holistically. It requires us to not only consider our actions and their immediate consequences, but the subsequent effects of those actions as well.
It is often easier to find examples of when second-order thinking didn't happen—when people did not consider the effects of the effects. When they tried to do something good, or even just benign, and instead brought calamity, we can safely assume the negative outcomes weren't factored into the original thinking. This is often referred to as the "Law of Unintended Consequences".
Any comprehensive thought process considers the effects of the effects as seriously as possible. You are going to have to deal with them anyway.
High degrees of connections make second-order thinking all the more critical, because denser webs of relationships make it easier for actions to have far-reaching consequences. You may be focused in one direction, not recognizing that the consequences are rippling out all around you. Things are not produced and consumed in a vacuum.
"When we try to pick out anything by itself, we find it hitched to everything else in the Universe."
—John Muir
If we're interested in understanding how the world really works, we must include second and subsequent effects. We must be as observant and honest as we can about the web of connections we are operating in.
Second-Order Problem
Warren Buffett used a very apt metaphor for the second-order problem: a crowd at a parade. Once a few people decide to stand on their tip-toes, everyone has to stand on their tip-toes. No one can see any better, but they're all worse off.
Two areas where second-order thinking can be used to great benefit:
- Prioritizing long-term interests over immediate gains
- Constructing effective arguments
Second-order thinking and realizing long-term interests: Useful for seeing past immediate gains to identify desired long-term effects. Second-order thinking involves asking ourselves if what we are doing now is going to get us the results we want. This is often a conflict for us, as we must choose to forgo immediate pleasure to improve long-term results. We must ask "is this what I want my life to look like in ten years?"
Finding historical examples of second-order thinking can be tricky. We don't want to evaluate solely on outcome: just because it turned out well doesn't mean someone thought through the consequences of their actions.
Being aware of second-order consequences and using them to guide your decision-making may mean the short term is less spectacular, but the long-term payoffs can be enormous. By delaying gratification, you will save time in the future. You won't have to clean up the mess you made on account of not thinking through the effects of your short-term desires.
Constructing effective arguments: Second-order thinking can help you avert problems and anticipate challenges that you can then address in advance. Arguments are more effective when we demonstrate that we have considered the second-order effects and put effort into verifying that these are desirable as well.
We must avoid the analysis paralysis that second-order thinking can lead to. Second-order thinking needs to evaluate the most likely effects and their most likely consequences, checking our understanding of what the typical results of our actions will be. If we worried about all the possible effects of our actions, we'd never do anything, and we'd be wrong. How you'll balance the need for higher-order thinking with practical, limiting judgment must be decided on a case-by-case basis.
Conclusion
We don't make decisions in a vacuum and we can't get something for nothing. When making choices, considering consequences can help us avoid future problems. We must ask ourselves the critical question: And then what?
Consequences come in many varieties, some more tangible than others. Thinking in terms of the system in which you're operating allows you to see that your consequences have consequences. Thinking through a problem as far as we can with the information we have allows us to consider time, scale, thresholds, and more. A little time spent thinking ahead can save us massive amounts of time later.
6. Probabilistic Thinking
In a world where each moment is determined by an infinitely complex set of factors, probabilistic thinking helps us identify the most likely outcomes. It helps us make more precise and effective decisions.
We know now that the future is inherently unpredictable because not all variables can be known and even the smallest error imaginable in our data very quickly throws off our predictions. The best we can do is estimate the future by generating realistic, useful probabilities.
Three important aspects of probability can be integrated into your thinking:
- Bayesian thinking
- Fat-tailed curves
- Asymmetries
Bayesian thinking: allows us to use all relevant prior information in making decisions. It is important to note that priors themselves are probability estimates. For each bit of prior knowledge, you are not putting it in a binary structure, saying it is true or not. You're assigning it a probability of being true. Therefore, you can't let your priors get in the way of processing new knowledge.
Any new information you encounter that challenges a prior simply means that the probability of that prior being true may be reduced. Eventually some priors are replaced completely. This is an ongoing cycle of challenging and validating what you believe you know. When making uncertain decisions, it's nearly always a mistake not to ask: What are the relevant priors? What might I already know that I can use to better understand the reality of the situation?
Conditional Probability
Conditional probability is similar to Bayesian thinking in practice. When you use historical events to predict the future, you should be mindful of the conditions surrounding that event. Using conditional probability means being careful to observe the conditions preceding an event you'd like to understand.
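Here's a minimal sketch of that update cycle in Python (the numbers are illustrative and my own, not the book's): each piece of evidence revises the prior via Bayes' rule, P(H|E) = P(E|H)P(H)/P(E).

```python
def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Revise the probability of a hypothesis after seeing one piece of evidence."""
    # P(E) = P(E|H)P(H) + P(E|~H)P(~H), then P(H|E) = P(E|H)P(H) / P(E)
    p_evidence = p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
    return p_evidence_if_true * prior / p_evidence

# Illustrative: prior belief that a new product launch will succeed.
belief = 0.30
# Each observation: (likelihood if launch succeeds, likelihood if it fails).
observations = [(0.8, 0.4),   # strong beta signups - more likely under success
                (0.6, 0.5),   # mixed press coverage - weak evidence
                (0.2, 0.7)]   # key partner pulls out - evidence against
for p_if_true, p_if_false in observations:
    belief = bayes_update(belief, p_if_true, p_if_false)
    print(f"updated belief: {belief:.2f}")
```

Note how no single observation flips the belief to 0 or 1; each one just shifts the probability, which is exactly the non-binary treatment of priors described above.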
Fat-tailed curves: are similar to the bell curve, where common outcomes cluster together. The difference is in the tails. In a bell curve, the extremes are predictable and there can only be so much deviation from the mean. In a fat-tailed curve, there is no limit on extreme events.
The more extreme events that are possible, the longer the tails of the curve get. Any one extreme event is still unlikely, but the sheer number of options means we can't rely on the most common outcomes as representing the average. The more extreme events that are possible, the higher the probability that one of them will occur. Crazy things are definitely going to happen, and we have no way of identifying when.
The important thing is not to sit down and imagine every possible scenario in the tail (by definition, it is impossible) but to deal with fat-tailed domains in the correct way: by positioning ourselves to survive or even benefit from the wildly unpredictable future, by being the only ones thinking correctly and planning for a world we don't fully understand.
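To feel the difference between the two worlds, here's a small illustrative simulation (my own, not from the book) comparing draws from a thin-tailed normal distribution with draws from a fat-tailed Pareto distribution:

```python
import random

random.seed(42)
N = 100_000

# Thin-tailed: normal distribution, mean 0, standard deviation 1.
normal_draws = [random.gauss(0, 1) for _ in range(N)]

# Fat-tailed: Pareto distribution (alpha=1.5), heavy right tail.
pareto_draws = [random.paretovariate(1.5) for _ in range(N)]

print("normal max:", round(max(normal_draws), 1))  # a few std devs at most
print("pareto max:", round(max(pareto_draws), 1))  # can be in the hundreds

# In the normal world the largest draw barely moves the average;
# in the fat-tailed world a single extreme can dominate the sum.
print("pareto top-1 share of total:",
      round(max(pareto_draws) / sum(pareto_draws), 3))
```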
Asymmetries: Finally, you need to think about something we might call "metaprobability"—the probability that your probability estimates themselves are any good.
Orders of Magnitude
Nassim Taleb puts his finger in the right place when he points out our naive use of probabilities. In The Black Swan, he argues that any small error in measuring the risk of an extreme event can mean we're not just slightly off, but way off—off by orders of magnitude, in fact. In other words, not just 10% wrong but ten times wrong, or 100 times wrong, or 1,000 times wrong. Something we thought could only happen every 1,000 years might be likely to happen in any given year! This is using false prior information and results in us underestimating the probability of the future distribution being different.
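A quick worked example (my own numbers, purely illustrative) shows how a misjudged tail probability becomes an orders-of-magnitude error in expected frequency:

```python
# Illustrative only: how a "small" error in a tail-risk estimate
# becomes an orders-of-magnitude error in expected frequency.
estimated_annual_prob = 0.001   # "a once-in-1,000-years event"
actual_annual_prob = 0.1        # the true rate, misjudged by a factor of 100

print(f"expected gap (estimate): {1 / estimated_annual_prob:,.0f} years")  # 1,000
print(f"expected gap (actual):   {1 / actual_annual_prob:,.0f} years")     # 10

# Probability of seeing at least one such event within a 30-year horizon:
for p in (estimated_annual_prob, actual_annual_prob):
    at_least_once = 1 - (1 - p) ** 30
    print(f"p={p}: chance within 30 years = {at_least_once:.1%}")  # ~3% vs ~96%
```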
Anti-fragility
How do we benefit from the uncertainty of a world we don't understand, one dominated by "fat tails"?
We can think about three categories of objects: ones that are harmed by volatility, ones that are neutral to volatility, and ones that benefit from it. The latter category is antifragile. Up to a point, certain things benefit from volatility, and that's how we want to be. Why? Because the world is fundamentally unpredictable and volatile, and large events—panics, crashes, wars, bubbles, and so on—tend to have a disproportionate impact on outcomes.
There are two ways to handle such a world: try to predict, or try to prepare. Prediction is tempting. For all of human history, seers and soothsayers have turned a comfortable trade. The problem is that nearly all studies of "expert" predictions in such complex real-world realms as the stock market, geopolitics, and global finance have proven again and again that, for the rare and impactful events in our world, predicting is impossible! It's more efficient to prepare.
What are some ways we can prepare—arm ourselves with antifragility—so we can benefit from the volatility of the world?
The first one is what Wall Street traders call "upside optionality": seeking situations that we expect have good odds of offering us opportunities. Take the example of attending a cocktail party where a lot of people you might like to know are in attendance. While nothing is guaranteed to happen, you give yourself the benefit of serendipity and randomness. The worst thing that can happen is...nothing. One thing you know for sure is that you'll never meet them sitting at home. By going to the party, you improve your odds of encountering opportunity.
The second thing we can do is to learn how to fail properly. Failing properly has two major components. First, never take a risk that will take you out of the game completely. Second, develop the personal resilience to learn from your failures and start again.
No one likes to fail. But failure carries learning. What those who are not afraid to fail learn makes them less vulnerable to the volatility of the world. They benefit from it, in true antifragile fashion.
The Antifragile mindset is a unique one. Whenever possible, try to create scenarios where randomness and uncertainty are your friends, not your enemies.
Another common asymmetry is people's ability to estimate the effect of traffic on travel time. How often do you leave "on time" and arrive 20% early? Almost never? How often do you leave "on time" and arrive 20% late? All the time? Exactly. Your estimation errors are asymmetric, skewing in a single direction. This is often the case with probabilistic decision-making.
Far more probability estimates are wrong on the "over-optimistic" side than the "under-optimistic" side.
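A tiny simulation (mine, illustrative) of the travel-time example shows how capped upside and long-tailed downside produce exactly this skew:

```python
import random

random.seed(7)
planned_minutes = 30
trips = 10_000

late = early = 0
for _ in range(trips):
    # Early arrivals are capped (you can only save a few minutes),
    # but delays have a long tail (accidents, congestion, weather).
    saved = random.uniform(0, 3)
    delay = random.expovariate(1 / 6)   # mean 6-minute delay, long tail
    actual = planned_minutes - saved + delay
    if actual > planned_minutes * 1.2:
        late += 1                        # more than 20% late
    elif actual < planned_minutes * 0.8:
        early += 1                       # more than 20% early

print(f">20% late:  {late / trips:.1%}")   # common
print(f">20% early: {early / trips:.1%}")  # essentially never
```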
Conclusion
Successfully thinking in shades of probability means roughly identifying what matters, coming up with a sense of the odds, doing a check on our assumptions, and then making a decision. We can act with a higher level of certainty in complex, unpredictable situations. We can never know the future with exact precision. Probabilistic thinking is an extremely useful tool to evaluate how the world will most likely look so that we can effectively strategize.
3. Supporting Idea: Causation vs. Correlation
Confusion between these two terms often leads to inaccurate assumptions. We notice two things happening at the same time (correlation) and mistakenly conclude that one causes the other (causation). We then act upon that erroneous conclusion, making decisions that can have immense influence across our lives. But without a good understanding of these terms, our decisions fail to capitalize on real dynamics and are instead only successful by luck.
Whenever correlation is imperfect, extremes will soften over time. The best will always appear to get worse and the worst will appear to get better, regardless of any additional action. This is "regression to the mean". It means we must be extra careful when diagnosing causation.
We often mistakenly attribute a specific policy or treatment as the cause of an effect, when the change in the extreme groups would have happened anyway. This presents a fundamental problem: how can we know if the effects are real or simply due to variability?
For example: depressed children are an extreme group, they are more depressed than most other children—and extreme groups regress to the mean over time. The correlation between depression scores on successive occasions of testing is less than perfect, so there will be regression to the mean: depressed children will get somewhat better over time even if they hug no cats and drink no Red Bull.
Luckily there is a way to tell the difference between a real improvement and something that would have happened anyway: the introduction of the so-called control group, which is expected to improve by regression alone. The aim of the research is to determine whether the treated group improves more than regression can explain.
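Here's a small illustrative simulation (my own sketch, not from the book) of that logic: select an extreme group on a noisy test, treat half, and watch the untreated control group "improve" by regression alone.

```python
import random

random.seed(1)

# Each child's observed score = stable true level + noisy day-to-day variation.
def observed(true_level: float) -> float:
    return true_level + random.gauss(0, 10)

population = [random.gauss(50, 10) for _ in range(10_000)]

# Select the extreme group: the worst observed scores on the first test.
first_test = [(observed(t), t) for t in population]
extreme = sorted(first_test)[:500]          # 500 lowest scorers

# Split into treated and control; treatment adds a genuine +3 points.
treated = [t for _, t in extreme[0::2]]
control = [t for _, t in extreme[1::2]]

pre = sum(score for score, _ in extreme) / len(extreme)
post_control = sum(observed(t) for t in control) / len(control)
post_treated = sum(observed(t) + 3 for t in treated) / len(treated)

print(f"extreme group, test 1: {pre:.1f}")
print(f"control group, test 2: {post_control:.1f}  (regression alone)")
print(f"treated group, test 2: {post_treated:.1f}  (regression + real effect)")
# The real effect is the treated improvement minus the control improvement.
```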
In real-life situations involving the performance of specific individuals or teams, where the only real benchmark is past performance and no control group can be introduced, the effects of regression can be difficult if not impossible to disentangle. We can compare against the industry average, peers in the cohort group, or historical rates of improvement, but none of these are perfect measures.
7. Inversion
"The test of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time and still retain the ability to function. One should, for example, be able to see that things are hopeless yet be determined to make them otherwise."
—F. Scott Fitzgerald
Inversion helps you identify and remove obstacles to success. Most of us tend to think one way about a problem: forward. Inversion allows us to flip the problem around and think backward. Sometimes it's good to start at the beginning, but it can be more useful to start at the end.
There are two approaches to applying inversion in your life.
- Start by assuming that what you're trying to prove is either true or false, then show what else would have to be true.
- Instead of aiming directly for your goal, think deeply about what you want to avoid and then see what options are left over.
Set your assumptions: Edward Bernays was a marketing executive in the 1920s at the American Tobacco Company. Instead of asking "How do I sell more cigarettes to women?", he wondered: if women bought and smoked cigarettes, what else would have to be true? What would have to change in the world to make smoking desirable to women and socially acceptable? Then—a step farther—once he knew what needed to change, how would he achieve that?
This inversion approach became a staple of Bernays's work. He used the descriptor "appeals of indirection", and each time he was hired to sell a product or service, he instead sold whole new ways of behaving, which appeared obscure but over time reaped huge rewards for his clients and redefined the very texture of American life.
What are you trying to avoid? Instead of thinking through the achievement of a positive outcome, we could ask ourselves how we might achieve a terrible outcome, and let that guide our decision-making.
Index funds are a great example of inversion promoted and brought to bear on the stock market. Instead of asking how to beat the market, John Bogle (Vanguard's founder) asked: how can we help investors minimize losses to fees and poor money manager selection? The result was one of the greatest ideas—index funds—and one of the greatest powerhouse firms in the history of finance.
The index fund operates on the idea that accruing wealth has a lot to do with minimizing loss. Think about your personal finances. Often we focus on positive goals, such as "I want to be rich," and use this to guide our approach. We make investing and career choices based on our desire to accumulate wealth. We chase after magical solutions, like attempting to outsmart the stock market. These inevitably get us nowhere, and we have usually taken some terrible risks in the process which actually leave us worse off.
One of the theoretical foundations for this type of thinking comes from psychologist Kurt Lewin. In the 1930s he came up with the idea of force field analysis, which essentially recognizes that in any situation where change is desired, successful management of that change requires applied inversion. Here is a brief explanation of his process:
- Identify the problem
- Define your objective
- Identify the forces that support change towards your objective
- Identify the forces that impede change towards the objective
- Strategize a solution! This may involve both augmenting or adding to the forces in step 3, and reducing or eliminating the forces in step 4.
The inversion happens between steps 3 and 4. Whatever angle you choose to approach your problem from, you need to then follow with consideration of the opposite angle. Think about not only what you could do to solve a problem, but what you could do to make it worse—and then avoid doing that, or eliminate the conditions that perpetuate it.
"He wins his battles by making no mistakes."
—Sun Tzu
Conclusion
Inversion shows us that we don't always need to be geniuses, nor do we need to limit its application to mathematical and scientific proofs. Simply invert, always invert, when you are stuck. If you take the results of your inversion seriously, you might make a great deal of progress on solving your problems.
8. Occam's Razor
"Anybody can make the simple complicated. Creativity is making the complicated simple."
—Charles Mingus
Simpler explanations are more likely to be true than complicated ones. Instead of wasting your time trying to disprove complex scenarios, you can make decisions more confidently by basing them on the explanation with the fewest moving parts.
We often spend lots of time creating complicated narratives to explain what's around us. It's a common human tendency that's served us well in some situations. But complexity takes work to unravel, manage, and understand.
Occam's Razor is a great tool to avoid unnecessary complexity. It helps you identify and commit to the simplest explanation possible.
The medieval logician William of Ockham wrote that "a plurality is not to be posited without necessity"—essentially that we should prefer the simplest explanation with the fewest moving parts. Simpler explanations are easier to falsify, easier to understand, and generally more likely to be correct.
Occam's Razor is not an iron law but a tendency and a mind-frame you can choose to use: if all else is equal, that is, if two competing models both have equal explanatory power, it's more likely that the simple solution suffices.
As scientist and writer Carl Sagan explains in The Demon-Haunted World:
A multitude of aspects of the natural world that were considered miraculous only a few generations ago are now thoroughly understood in terms of physics and chemistry. At least some of the mysteries of today will be comprehensively solved by our descendants. The fact that we cannot now produce a detailed understanding of, say, altered states of consciousness in terms of brain chemistry no more implies the existence of a âspirit worldâ than a sunflower following the Sun in its course across the sky was evidence of a literal miracle before we knew about phototropism and plant hormones.
The simpler explanation for a miracle is that there are principles of nature being exploited that we do not understand.
Sagan wrote that "extraordinary claims require extraordinary proof." He dedicated much ink to a rational investigation of extraordinary claims. He felt most, or nearly all, were susceptible to simpler and more parsimonious explanations. UFOs, paranormal activity, telepathy, and a hundred other seemingly mystifying occurrences could be better explained with a few simple real-world variables. And if they couldn't, it was a lot more likely that we needed to update our understanding of the world than that a miracle had occurred.
Simplicity can increase efficiency
With limited time and resources, it is not possible to track every theory with a plausible explanation of a complex, uncertain event. Without the filter of Occam's Razor, we waste time, resources, and energy.
The great thing about simplicity is that it can be so powerful. Sometimes unnecessary complexity just papers over the systemic flaws that will eventually choke us. Opting for the simple helps us make decisions based on how things really are.
A few caveats
One important counter to Occam's Razor is the difficult truth that some things are simply not that simple. The regular recurrence of fraudulent human organizations like pyramid schemes and Ponzi schemes is not a miracle, but neither is it obvious. No simple explanation suffices. They are the result of a complex set of behaviors, some happening almost by accident or luck, and some carefully designed with the intent to deceive. It isn't easy to spot the development of a fraud. If it were, frauds would be stamped out early.
Simple as we wish things were, irreducible complexity, like simplicity, is a part of our reality. Therefore, we can't use this Razor to create artificial simplicity. If something cannot be broken down any further, we must deal with it as it is.
Conclusion
Focusing on simplicity when all others are focused on complexity is a hallmark of genius, and it's easier said than done. But always remembering that a simpler explanation is more likely to be correct than a complicated one goes a long way towards helping us conserve our most precious resources of time and energy.
9. Hanlon's Razor
"I need to listen well so that I hear what is not said."
—Thuli Madonsela
Hanlon's Razor states we should not attribute to malice that which is more easily explained by stupidity. In a complex world, using this model helps us avoid paranoia and ideology. By not generally assuming that bad results are the fault of a bad actor, we look for options instead of missing opportunities.
This model demands that we ask if there is another reasonable explanation for the events that have occurred. The explanation most likely to be right is the one that contains the least amount of intent.
As Daniel Kahneman and Amos Tversky demonstrated, there is a tic in our mental machinery. We're deeply affected by vivid, available evidence, to such a degree that we're willing to make judgments that violate simple logic. We over-conclude based on the available information. We have no trouble packaging in unrelated factors if they happen to occur in proximity to what we already believe.
When we see something we don't like that seems wrong, we assume it's intentional. But it's more likely that it's completely unintentional. Most people doing wrong are not bad people trying to be malicious.
With such vividness, and the associated emotional response, comes a sort of malfunctioning in our minds when we're trying to diagnose the causes of a bad situation.
Failing to prioritize stupidity over malice causes things like paranoia. Always assuming malice puts you at the center of everyone else's world. This is an incredibly self-centered approach to life. In reality, for every act of malice, there is almost certainly far more ignorance, stupidity, and laziness.
"One is tempted to define man as a rational animal who always loses his temper when he is called upon to act in accordance with the dictates of reason."
—Oscar Wilde
Hanlon's Razor, when practiced diligently as a counter to confirmation bias, empowers us, and gives us far more realistic and effective options for remedying bad situations. When we assume someone is out to get us, our very natural instinct is to take actions to defend ourselves. It's harder to take advantage of, or even see, opportunities while in this defensive mode, because our priority is saving ourselves—which tends to reduce our vision to dealing with the perceived threat instead of examining the bigger picture.
The Devil Fallacy
You have attributed conditions to villainy that simply result from stupidity…. You think bankers are scoundrels. They are not. Nor are company officials, nor patrons, nor the governing classes back on earth. Men are constrained by necessity and build up rationalizations to account for their acts.
Conclusion
It's important not to overthink this model. Hanlon's Razor is meant to help us perceive stupidity or error, and their inadvertent consequences. It says that of all possible motives behind an action, the ones that require the least amount of energy to execute (such as ignorance or laziness) are more likely to occur than ones that require active malice.
Hanlon's Razor demonstrates there are fewer true villains than you might suppose—people are human, and all humans make mistakes and fall into traps of laziness, bad thinking, and bad incentives. Our lives are easier, better, and more effective when we recognize this truth and act accordingly.