Below is a lightly edited transcript. For the article that inspired this one, see here. For related reading, see here and here.
Welcome to the btrmt. Lectures. My name is Dr. Dorian Minors, and if there is one thing I’ve learnt as a brain scientist, it is that there is no instruction manual for this device in our heads. But there are patterns—patterns of thought, patterns of feeling, and patterns of action—because that’s what brains do. So let me teach you about them. One pattern, one podcast, and you see if it works for you.
Now, if you’ve been following along in this little lecture series of mine so far, you’ll know that one of my favourite things to do is to strip away the pop psychology nonsense around neuroscience and try and show you a better way of thinking about how the brain influences behaviour. I’ve done one on stress before and how, properly conceived, stress is actually a pretty good thing. And I’ll talk more about that in this lecture, too. And I’ve done another one about how anyone who talks about the amygdala as the fear centre of the brain is distracting you from what’s really going on.
This lecture is in a similar vein, but I think it’s a little bit bigger than these more focused examples, because I’m not just talking here about one brain structure or one biological mechanism. I want to talk about how we think about thinking itself.
And you’ve probably heard something along the lines of what I want to speak about today. The idea that bias—cognitive bias—is a problem and we should avoid it at all costs in the way that we think. It’s an idea that I’ve had to wade through in my clinical work, something that dominates the business world, the world of management consulting, and it’s even something that we teach here at the Royal Military Academy Sandhurst. And the way that it is usually taught is a real problem for anyone trying to make fewer errors in their thinking.
Now, normally when I mention Sandhurst in these lectures, I like to do a little disclaimer that this is my own perspective, my own little podcast, nothing to do with Sandhurst. But in actual fact, fortunately, this is something that I can change, and have been changing, as Associate Professor here. So let me tell you how bias should be taught, how it is taught now at a premier leadership institution like Sandhurst, and frankly, how I think it should be taught everywhere.
So. Bias. Let’s get into it.
The pop psychology version
If you’ve ever done some kind of psychology course—online, EdX or YouTube, or a first-year university class—or if you’ve ever done any kind of professional development in the last twenty years or so, leadership training, a management course, anything with a corporate facilitator, then you’ve probably been told that bias is a bad thing. You’ve got these cognitive biases, these flaws in your thinking, and if you could just be a little bit more rational, you would make better decisions.
Now, when I write articles at btrmt., I like to use Wikipedia a lot because it’s normally pretty good for this sort of thing. But in this case, Wikipedia is also a victim of the pop psychologists I’m about to complain about. If you’re unfamiliar with the concept, you can go there and you’ll find a catalogue of something like 200 or so cognitive biases, complete with this beautiful image, the Cognitive Bias Codex, this wheel of all the ways your brain is supposedly failing you.
And a lot of this comes from a behavioural economist named Daniel Kahneman, who wrote a book in 2011 called Thinking, Fast and Slow. It’s probably one of the most popular psychology books ever written, if not the most popular. And in this book, he talks about System One and System Two. A fast, automatic way of processing information—System One—and this slower, more deliberate way of thinking—System Two.
If you haven’t heard of System One and System Two, you’ve probably heard the analogues. People say things like hot and cold thinking, or Kahneman calls it fast and slow thinking, Type One and Type Two thinking. People also describe it as “get out of the amygdala and into the frontal lobes,” or “get out of the sympathetic and into the parasympathetic nervous system,” or something something vagus nerve. All of this stuff is the same thing dressed up in different words.
And if you haven’t heard of these analogues, you’ll probably have an intuitive sense of the idea, and I’ll try and illustrate it for you. Let’s say I asked you: what is two plus two? You’d come up with the answer—four, I hope. You probably didn’t have to think about it. It’s automatic. But if I asked you what’s 136 by 365, you’d have to stop, right? You’d have to think about it for a little bit and slow down as you work through the process. One form of thinking is automatic, fast, and intuitive. The other is slow, deliberate, and more effortful.
And it’s this fast thinking that Kahneman and other dual process theorists would tell us produces cognitive shortcuts—ways of working out the world quickly and automatically according to rules of thumb. So, for example, telling where a noise came from quickly so that we can respond without thinking, or allowing us to read the text on a billboard while we’re driving past. These kinds of quick rules of thumb that we’d be useless without, because the slow thinking is super costly. Multiplying 136 by 365 costs effort in a way that two plus two doesn’t. Slow thinking engages a whole bunch of cognitive infrastructure that’s difficult to run. And probably more importantly, you wouldn’t want to have to work out everything from first principles all the time. You’d never get anything done.
So the idea is that we offload all this more difficult stuff to the fast thinking, the cheap thinking, where we can—where it’s predictable, where we can make a rule of thumb about it. And the reason that Kahneman is the most popular dual process theorist is because he was interested in a very specific thing: where these shortcuts go wrong. He called these moments biases—when the cognitive shortcuts, or as he called them, heuristics, are off target.
So, for example, you’ve got something called the availability heuristic. This is a type of fast thinking that you do when you make decisions based on the data available to you. What kills more people—shark attacks or taking selfies? Your fast thinking would have you answer: of course it’s shark attacks, right? But you’ve probably guessed, because I’m using it as an example, that it’s taking selfies. Deaths related to selfies outnumber shark attacks by, I think, an order of magnitude. You probably want to check me on that. But certainly deaths related to shark attacks are vanishingly small. The reason we get it wrong is because we don’t get nearly as much media around selfie-related deaths as we do about shark attacks. Shark attacks are more available—hence the availability heuristic.
Now, this is the kind of thing behavioural economists like Daniel Kahneman are interested in. But because they’re interested in them, behavioural economists have gone on to document this enormous quantity of biases. You have biases related to attitudes—your standards, equity and diversity concerns, prejudices based on stereotypes about cultures and social groups. But you also have biases related to your emotional attachments, or the limitations of your cognitive capacity and working memory. And like I said, we’re inching into this territory where we’re drowning in 200 or so biases that we have on hand to explain the halt, the lame, half-made creatures that we are.
And in the context of a system like this—fast thinking that leads to all these errors, these biases—then of course the assumption is always going to be something like, well, why don’t we hand things over to slow thinking? System Two. If our fast thinking is error-prone, then the more deliberate, effortful processes of logically working through the information are surely going to save us from these kinds of pitfalls.
In my lectures that talk about this concept, I like to use this excerpt from a Harvard Business Review article called something like “Outsmart Your Own Biases.” The quote goes: “It can be dangerous to rely too heavily on what experts call System One thinking—the automatic judgments that stem from associations stored in memory—instead of logically working through the information that’s available.”
And this is the idea. System One, fast thinking, is bad. Use System Two instead. And as I’m about to tell you, this is a complete misunderstanding.
If you can explain everything, you explain nothing
And it’s not just a pop psychology misunderstanding, either. Now it’s going out of fashion, but historically the entire field of behavioural economics has been built on this foundational idea that humans are rational actors who sometimes deviate from rational action. This is, you’ll be surprised to learn, called the rational actor model—the idea that when you make a decision, you’re optimising for your preferences, weighing up the costs and the benefits, coming up with the optimal decision. And the biases are the times that you deviate from this model, that you act irrationally, that you make choices that aren’t optimised even if you have the right information.
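If it helps to see what that model actually claims, here’s a minimal sketch, with the options and the numbers entirely made up: a rational actor scores every option by its expected payoff and takes the best one. On this view, a bias is just any moment where a real person fails to behave like this little program.

```python
# A toy "rational actor": score each option by expected utility, pick the best.
# The options and numbers below are invented purely for illustration.
options = {
    "index_fund": [(0.9, 200), (0.1, -50)],   # (probability, payoff) outcomes
    "cash":       [(1.0, 20)],
    "lottery":    [(0.001, 10_000), (0.999, -10)],
}

def expected_utility(outcomes):
    """Weigh every outcome by how likely it is and add them up."""
    return sum(p * payoff for p, payoff in outcomes)

for name, outcomes in options.items():
    print(f"{name:>10}: expected utility {expected_utility(outcomes):8.2f}")

best = max(options, key=lambda name: expected_utility(options[name]))
print(f"the 'rational' choice: {best}")
```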
So the behavioural economists have come up with this enormous list of deviations from rationality—these biases. And I guess the idea is that if we can catalogue all of them, we can sort of sticky-tape them onto our model of human behaviour and predict behaviour better. But the question is: if you have 200 deviations from your model, at what point do you start wondering whether the model itself is the wrong model?
And there’s this great quote from an economist, Jason Collins, who says something like: suppose you’re trying to help granny save for her retirement and you want to help her make a better decision about this. Which of these 200 or so biases are going to lead her to make a mistake? How can you help her avoid the biases? Is she going to be loss-averse, present-biased, regret-averse, ambiguity-averse, overconfident? Is she going to neglect the base rate? Is she hungry? Which of these biases are actually going to help you help her to make a better decision? It’s impossible. Nobody’s going to pan through this endless list to figure out what’s going to go wrong and disentangle one from the other. It’s just not a very useful concept.
And people are discovering this. There’s this enormous study—the technical term is a megastudy—that looked at almost 700,000 people and the kinds of behavioural interventions that would make them more likely to get vaccinated. I’m not going to get into the results because the results aren’t what matters. What’s more interesting about the study is that they asked behavioural economists to predict which interventions would work best, right? The people that came up with biases as a way of predicting human behaviour. And they couldn’t. They couldn’t predict which interventions would work best. And more to the point, random laypeople could. Slightly, but they could. Your average person off the street was better than the expert at predicting these things. So it almost feels like a knowledge of biases is a barrier to prediction, not an aid.
And again, you think back to granny saving for her retirement, you can kind of see why. As that economist Jason Collins said later in his article: if you can explain everything, you explain nothing. If somebody’s making a conservative choice, it’s loss aversion. If they make a risky choice, it’s overconfidence. If they chose fast, it’s anchoring. If they choose slow, it’s analysis paralysis. You’ve got a bias no matter what they do. That’s not a theory—that’s basically a horoscope.
Bias as a trade-off
So if a bias isn’t a deviation from rationality, the question becomes: what is it? And that’s where I think the interesting part is.
Maybe annoyingly, I want to take a little detour into statistics. Behavioural economists characterise biases as errors—deviations from the rational actor model where people make irrational decisions. But statisticians don’t see bias in the same way. They see bias as a trade-off. Essentially a trade-off in which you ignore noise in order to get better precision on the thing that you care about.
Let me try and put it into context for you. Say I’m trying to figure out who’s sleeping in one of my lecture theatres. I could do this in a couple of ways.
One thing I could do is pick people at random to figure out whether they’re asleep. I look at someone—are they sleeping, are they not? I look at another person—are they sleeping, are they not? And doing this, I’m probably going to catch one or two people sleeping. I’m doing a lecture about statistics, after all. But in a big lecture theatre, what are the chances that I’m going to catch somebody sleeping at the actual time that they’re sleeping? This is an example of me trying to figure out who’s sleepy in an unbiased way. I’m picking people at random, but it’s going to be pretty inaccurate because I’m not going to catch many of the people sleeping when they’re sleeping. This is a noisy way to figure out whether people are sleeping in my lecture theatre.
So maybe instead what I might try is to look at bunches of people all at the same time. I try and take in a cluster of the classroom with my eyes. This is probably going to be a little bit more accurate because I’m more likely to catch sleeping people than when I was looking at them one by one. I can see more of them at once. I can cover more of the classroom more quickly. But it’s still going to be pretty noisy. I can’t see everybody in the classroom at once in a 300-person room. And the closer people are to my peripheral vision, the less likely I’m able to make out the detail of their eyes. Still a pretty noisy way of figuring out who’s sleepy. Not really that biased—I’m still picking clusters at random—and it is a little bit more accurate.
But if I wanted to improve on this, what I’d probably do is bias my search. I might say something like: sleeping people are more likely to be at the back and the sides of the classroom, because people who come into the class planning to sleep aren’t going to sit right in front of me. And the people at the back are facing much less pressure from me screaming at them, trying to get my voice up the back of the room, right? So there’s less pressure on them to stay awake.
Here I’m biasing my search to look around the back and the sides of the room, ignoring the people in the centre. And now I’m much more likely to catch my sleeping pupils. Not all of them, of course, but many more of them than with the two more noisy ways of doing it. What I’m doing is optimising for precision, for accuracy. I’m ignoring the noise. I’m biasing what I’m doing in order to get a better result.
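If you like seeing this made concrete, here’s a minimal simulation sketch of that classroom example. Everything in it is made up for illustration: the room layout, the assumption that sleepers cluster towards the back, and the number of glances I get. The point is just that the same number of glances catches more sleepers when the search is biased towards the back rows than when it’s spread at random.

```python
import random

random.seed(42)

ROWS, COLS = 10, 30   # a 300-seat lecture theatre
CHECKS = 60           # how many seats I get to glance at per lecture

def make_classroom():
    """Seat sleepers with higher probability towards the back rows (an assumption)."""
    classroom = {}
    for row in range(ROWS):
        for col in range(COLS):
            p_sleep = 0.05 + 0.25 * (row / (ROWS - 1))  # ~5% front row, ~30% back row
            classroom[(row, col)] = random.random() < p_sleep
    return classroom

def unbiased_search(classroom):
    """Glance at seats picked uniformly at random."""
    seats = random.sample(list(classroom), CHECKS)
    return sum(classroom[seat] for seat in seats)

def biased_search(classroom):
    """Spend every glance on the back half of the room."""
    back_half = [seat for seat in classroom if seat[0] >= ROWS // 2]
    seats = random.sample(back_half, CHECKS)
    return sum(classroom[seat] for seat in seats)

trials = 2000
unbiased = sum(unbiased_search(make_classroom()) for _ in range(trials)) / trials
biased = sum(biased_search(make_classroom()) for _ in range(trials)) / trials
print(f"sleepers caught per lecture, unbiased search: {unbiased:.1f}")
print(f"sleepers caught per lecture, biased search:   {biased:.1f}")
```

The biased search misses the front half of the room entirely, of course. That’s the trade: you give up coverage of the noise to buy precision where you expect the signal to be.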
Of course, this could lead me to make certain kinds of errors. If I see somebody with their eyes closed up the back, I might think they’re sleeping, when maybe they’ve just got their eyes closed. They’re thinking about all the wisdom I’m sharing with them, or the sun is hitting them in the eyes. And it’s these cases, where the bias has led me to make a mistake, that behavioural economists are interested in when they talk about bias.
But the thing that most people take away is that all kinds of bias are wrong, even the ones that work, even the ones that help us get more precision, more accuracy. And I should be clear: this isn’t what behavioural economists believe. They call biases that work heuristics. But this nuance isn’t really the kind of thing people tend to come away with when they learn about bias. Instead, they come away with the idea that bias is uniformly an error. It’s a problem.
So I think this statistical way of thinking about bias is actually a much better way of doing it, because it’s closer to what the brain actually does. Just like I used my assumption about where sleepy people would go in my classroom to bias my search and catch more sleepers, the brain uses its assumptions, its expectations and its history of being in the world, to bias the way that you behave in order to produce more accurate behaviour. And it’s only when these expectations fail that it stops and recalculates and does something different. In the words of the behavioural economist, that’s when it switches on System Two, the slow thinking.
Bias and the stress curve
I’ll try and give you an example in real terms that follows on from my stress is good lecture. To remind you, the basic premise there is that where most people characterise stress as a bad thing, stress is actually just a motivating force. As stress goes up, your performance goes up. And that’s because stress is recruiting all the cognitive and attentional resources you need to complete the task at hand. If you have a project due next year that’s only going to take a week to complete, there’s no stress in the system. You’re not going to be motivated to do it. But if that project is due next week and you have a week to do it, then you’re probably going to have appropriate stress in the body to complete the task—you’re going to start performing. It’s only when stress goes up too much that your performance starts to decline. You start to get brain fog or the jitters. You start to get distractible and anxious. If your week-long project is due tomorrow, then you’re probably going to be less useful at performing the task. Too much stress.
Now that was the stress lecture. But we can think about this same thing in terms of bias. One of the reasons that you perform better at a task when the amount of stress in your body increases is because what the brain starts doing is limiting your attention to the task at hand. It’s biasing you to engage. You’re going to be much less likely to be concentrating on all the other things you could be doing, and instead you’re going to be focusing on the things you should be doing now.
In contrast, as there’s less stress in the body, you’re going to be paying more attention to the noise. You’re going to be exploring. You’re going to be tinkering with other projects. You’re going to be thinking about putting together a new theme for your slide deck. You’re going to be creative. Bias is the brain’s tool for ignoring noise in order to get more precision on the task at hand.
Fundamental beliefs, not 200 biases
All right, so all of that is hopefully at least a little bit interesting, but it still leaves us with a bit of a problem. The behavioural economists are still out there cataloguing their 200 or so biases, all these errors in behaviour, and we probably shouldn’t just ignore them. Most people aren’t really that enthusiastic about making errors. Luckily, I wouldn’t be doing this podcast if I didn’t have an answer for you. So let’s get into it.
Biases—both the behavioural economist term of art, but also the statistical term—depend on our expectations. They depend on our history of being in the world and what we expect to happen based on what’s going on now. Our assumptions, more or less. Which makes me think that rather than try and get distracted by some enormous number of biases, what might be a better thing to do is try and figure out what our assumptions are.
You might think that this is the same kind of mammoth task as cataloguing biases—cataloguing human assumptions—but it doesn’t need to be. And there’s this very interesting recent review that illustrates why this might be a better way of going about things. What they argue is that a huge number of the biases that behavioural economists catalogue boil down to basically just two things. Some fundamental belief, followed up by confirmation bias, or more precisely, belief-consistent information processing.
So the idea is that we don’t have 200 separate flaws. We have this handful of deep beliefs about how the world works, and then we process information consistently with those beliefs. Which, if I’ve been explaining myself right, is exactly what bias is. Consistency. Accuracy. Ignoring the noise for precision.
I’ll give you a couple of the clusters they describe. They say the first fundamental belief might be something like: my experience is a reasonable reference for the experience of everyone else. This one explains a bias known as the spotlight effect, where we overestimate how much others notice us. It explains the illusion of transparency, where we think our inner states are more visible than they are. It explains the false consensus effect, where we assume other people share our same perspective. And it explains the curse of knowledge—we can’t imagine not knowing what we know. All of these things are basically the same thing: starting from your own experience and projecting it onto others.
The second fundamental belief they use to illustrate is the idea that I make correct assessments. This belief gives us the bias blind spot—we see biases in other people but not in ourselves—or the hostile media bias, where partisans on both sides think the media is biased against them. If you believe that your assessments are correct, anybody who disagrees with you must be wrong or biased or both.
Now, they go on in quite some technical detail, but you can extend this idea yourself. One that sprang to mind for me is a belief that goes something like: things are caused by people. The teleological bias. Children think that rocks are pointy so that animals can scratch themselves. Adults think that toast falls butter side down because the universe hates them. We see agency and intention everywhere. And this kind of thing explains biases like the fundamental attribution error, where we explain people’s behaviour by their intentions and character rather than their circumstances, or the just-world bias, where we assume bad things happen to people because they deserve them. We’re wired to see agents behind events. That much is well documented. So this belief, with belief-consistent processing, could explain a huge number of biases.
Just there, with three beliefs, we get three clusters that are starting to account for a big chunk of this endless list. Instead of memorising 200 deviations from a model that even economists are moving away from, you can ask: what’s the fundamental belief here? And is that belief serving me right now?
The simplest strategy wins
I could go on, but I’ll close it up for time’s sake. And I’ll close with an example that I think makes this really concrete.
In the late ’70s, there was this political scientist called Robert Axelrod, and he ran this tournament for robots. It’s based on a thought experiment and a common behavioural experiment called the Prisoner’s Dilemma, where basically you and another player each choose to cooperate or defect. If you both cooperate, you both get a decent reward. If you defect while your partner cooperates, you get a bigger reward and they get nothing. And if you both defect, you both get next to nothing.
The dynamics of this are well studied. What Robert Axelrod did is get everybody to create bots to compete in tournaments to find out who could win the most and what strategies would be the best to win these Prisoner’s Dilemma games. And the bots could be as complicated as their creators liked. They could have the most sophisticated strategies, complex decision trees, the works.
And the bot that won every single tournament was the simplest one there. It was called the Tit for Tat bot. What it does is it cooperates on the first turn and then after that it just does whatever you did last. That’s it. It didn’t think. It didn’t calculate. It didn’t follow a decision tree. Pure bias. Whatever happened last determines what it does now. Ignore all the rest of the information.
This bot didn’t beat every opponent. Other bots could out-score it in a head-to-head match. But it racked up the highest total score across every tournament. The very simple biased strategy beat every sophisticated deliberative one.
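To show just how little machinery that takes, here’s a minimal sketch of the strategy and a toy round-robin in the spirit of Axelrod’s tournament. The opponents here are simple ones I’ve made up for illustration, not his actual entrants, but the flavour of the result carries over: Tit for Tat never out-scores any single opponent head to head, yet it should come out with the highest total.

```python
import itertools

# Payoff to *me* for (my move, their move): C = cooperate, D = defect
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
ROUNDS = 200

def tit_for_tat(me, them):
    """Cooperate on the first move, then just copy whatever they did last."""
    return "C" if not them else them[-1]

def suspicious_tit_for_tat(me, them):
    """Like Tit for Tat, but opens with a defection."""
    return "D" if not them else them[-1]

def grudger(me, them):
    """Cooperate until they defect once, then defect forever."""
    return "D" if "D" in them else "C"

def always_defect(me, them):
    return "D"

def play_match(strat_a, strat_b):
    """One iterated Prisoner's Dilemma match; returns each player's total score."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(ROUNDS):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Round-robin: every strategy plays every other once; rank by total score
strategies = [tit_for_tat, suspicious_tit_for_tat, grudger, always_defect]
totals = {s.__name__: 0 for s in strategies}
for strat_a, strat_b in itertools.combinations(strategies, 2):
    score_a, score_b = play_match(strat_a, strat_b)
    totals[strat_a.__name__] += score_a
    totals[strat_b.__name__] += score_b

for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name:>22}: {score}")
```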
And that’s the point. In a nutshell, the world is super noisy and bias makes the noise less distracting. Errors are bad, obviously, but errors come from both biased and unbiased thinking equally. And on balance, bias is a good thing. System One is a good thing. It’s just trying its best.
So don’t try and eliminate bias. You can’t, and you shouldn’t want to. But what you can do is notice when a bias isn’t serving you. And that means asking what the fundamental belief underneath the assumption is. Is “my experience is a reasonable reference” actually reasonable here? Is the idea that “I make correct assessments” actually true in this case? Triaging our beliefs like this seems like much more of a sensible strategy than trying to figure out which of your 200 biases might be leading you astray right now.
I’ll leave it there until next time.