Below is a lightly edited transcript. For the article that inspired this one, see Values Don’t Matter.
Welcome to the btrmt Lectures. My name is Dr Dorian Minors, and if there is one thing I’ve learned as a brain scientist, it’s that there is no instruction manual for this device in our head. But there are patterns—patterns of thought, patterns of feeling, patterns of action—because that’s what brains do. So let me teach you about them. One pattern, one podcast, and you see if it works for you.
Now, I want to play a little bit today. I teach ethics—or at least the behavioural science of ethics—here at the Royal Military Academy, Sandhurst. And it can get a little bit tricky, because a study of ethics should tell you how to be, what you should do, what you ought to do, what’s good. But often it seems like it raises more questions than it answers, particularly at work. Consider the people I teach: their job involves going out and trying to work out who is a lawful target—anyone from shooters to women and children. They do have an ethical framework that helps. The Law of Armed Conflict helps define what a lawful target is. But it’s not going to be particularly encouraging when you get the go-ahead to engage a child soldier. This is precisely how moral injuries happen, because this kind of ethical dilemma raises more questions than it answers.
Now, that is more of a work conversation, and we consider things very seriously there. But this isn’t Sandhurst—this is just Dorian’s little podcast. And like I said, I want to play a little bit today. So let’s move away from that heavy talk and concentrate on something that I’ve noticed in my time teaching the behavioural science of ethics here.
Everyone Loves Values
There is this one ethical framework I’ve noticed that everyone loves, either implicitly or explicitly. And once I tell you about it, I think you’re going to see it everywhere. And the thing that I like most about it is that, as it’s ordinarily deployed, it’s almost as useless as it is ubiquitous. What I’m talking about is the concept of values.
I noticed this when an old colleague of mine called me up for help. His startup was big enough now to start thinking about what their company values were. And as I thought about it, I realised that company values are actually something shared by every place where people collect seriously to do things—from institutions and organisations to sporting clubs, or even the house rules in a D&D game. Any time people come together to set out what kind of person people should be trying to be in a group, they’re establishing values. That’s the project they’re engaged in.
I’m going to give you a few examples of what that looks like. But the point is, across the board it seems like people really love values and they instinctively try to inculcate them in their groups. But repeatedly, it seems like people really struggle to implement them. They want them to work, but values don’t seem to work. And there aren’t a lot of easy answers as to why. Most people don’t have the kind of cash my colleague does to get someone like me to help them fix the problem. But interestingly, teaching ethics, I found out that ethics does have something to say—and it’s pretty low-hanging fruit.
So let me give you a few examples of values to really ground it, show you how they sort of fail, show you what ethics says, and then show you how, if you care how people coming together in a group behave, you can address that problem.
The British Army has a very good set of values, as you might imagine from an organisation like that. The British Army claims courage, discipline, respect for others, integrity, loyalty, and selfless commitment as their values. The Australian Army has an almost identical set, except that we add in excellence—you can take that as you will.
This is sort of a standard case. You won’t find that organisations stray too far from a set of values like this. My colleague will probably end up doing something like this. If it’s a particularly snappy new kind of organisation that notices organisational values end up mostly just being decoration, they might make them into verbs. So instead of courage and discipline, you’d end up with “be courageous” and “be disciplined.” The hope, if you’re the kind of leader who reads the Harvard Business Review or whatever, is that by making them doing words, you’re upgrading this historical project of value-making: you’re making the values easier for your people to actually do.
An interesting counterpoint is Tesco. I like Tesco because it was once owned by Bermudians, and as a Bermudian myself, I have some sort of loyalty there. It’s a grocery store here in the UK, and it uses values that sound more like this: “No one tries harder for customers.” “We treat people how they want to be treated.” “Every little help makes a big difference.” This is starting to illustrate what I mean about how everybody wants values, but everybody also worries that nobody’s going to actually do them. Tesco’s basically gone for SMART goals here—look how specific these things are.
When you’re doing values in your non-organisational groups—your rowing club, your chess group—you’re going to go more for the Tesco kind of thing. You’re probably going to go for something that’s closer to rules than values, if we’re being honest. But the intent is the same. The house rules in a D&D group are going to include stuff like “don’t be a rules lawyer” or “be a turn-taker.” You’re talking about how people should be. You’re talking about values.
People love values. They put them everywhere they collect in groups. So the question becomes: if we have this instinct towards values, why don’t they work? And that is what I want to talk about today.
Values Are Just Virtue Ethics
Values are really just virtue ethics in disguise. Let me tell you what that means.
Ethics is one of the main branches of philosophy. I have sort of an ethics primer that explains things a bit more substantively, but essentially it’s the philosophy of how to be good. We all want to be good, but what even is good? What does good mean? And as such, how should we go about things to achieve that goodness?
Virtue ethics is a particular approach to these questions. And I think that virtue ethics actually make the most sense, at least to me, when I explain the other kinds first, to help you understand what kind of problem they’re trying to solve.
I think what you’d find is that most people are intuitively consequentialists. We like to judge whether we’re being good or not by the consequences of our actions. You go about the place and you make decisions by deciding: is this going to hurt somebody? Is this going to help them? How do I do the least harm and the most good? Focus on the consequences. I think very close to our hearts, we hold this sort of consequential calculus.
But the problem with consequentialism is that consequences aren’t all commensurate. They don’t all have the same sort of cash value, so to speak. Take one of the examples we use in the lecture room—you’ll get this in an Ethics 101 course. Let’s say you’ve got a surgeon, and this surgeon just straight up murdered somebody, harvested their organs, and used those organs to save five other people. Numerically, this is a pretty good deal, right? One dead person, five people who would have died who are now alive. But we’re not really interested in the consequential calculus here. There’s something a little off about comparing the one to the five in that circumstance.
For a more realistic example, I like to think about the kind of effective altruism groups that pop up all over university campuses. There’s this way of thinking called longtermism that says if we just increase overall economic wealth, everybody’s going to be better off. Sort of like if you bring the average up, then you bring everybody up. You just concentrate on raising GDP, and then, just as standards of living are better now than they were in the Middle Ages, in the future everybody’s going to be much better off. And the problem with this is that there seems to be something very weird about preferring hypothetical future people with a really good economic profile over the suffering of real, current people: the people we’d have to hurt with policy decisions now in order to achieve that future state.
So this is consequentialism. I think it’s intuitive right up until it’s not. And then it’s really hard to figure out what consequences actually matter.
So you might not just rely on consequences—and you almost certainly don’t. The next one that people will bring up in an Ethics 101 class is something we could call principle-based ethics, or rules-based ethics. Take our surgeon from before. Killing people might save lives, one for five. But on the principle of things, it’s not really that sweet to kill people, so maybe we should just not kill people as a rule. Even in cases where killing somebody could save five people, we treat it as a blanket rule that killing is not appropriate. This is a principle-based ethical approach to behaviour.
Another example of this is laws. We don’t follow laws because they’re always perfect. We don’t follow laws because they’re always right. We follow them because, on principle, we believe in lawful societies. You refrain from speeding not because you think speeding on this highway surrounded by nobody is going to harm anybody, but because you believe in the principle of the law. You believe in the rules. This is called deontology, and it’s another of the main three approaches to ethics—along with consequentialism and virtue ethics—that people will teach you in a basic Ethics 101 course.
There are actually a lot of other approaches, and they all try to fill the gaps where the others fail. We’ve already talked about how consequences fail—not all consequences have the same value, so it’s hard to measure them against one another, particularly in edge cases. And littered throughout my example of principle-based ethics was the admission that laws may or may not be right, and that we follow them on principle anyway. So we know these things fall down too.
People have come up with other approaches to try and fill the gaps. The one that I like to use here at Sandhurst is called care ethics. I like to use care ethics because it’s a feminist ethics, and I get a sort of satisfaction teaching feminist ethics at an institution like Sandhurst. But I think it’s very poignant because care ethics talks about the ethics of care. If your grandma’s sick, you’re not going to be focused on speeding laws. You’re not going to be focused on the consequences of ditching your dinner date to go and be with your grandma. You’re going to drive as fast as you can to help her. And that’s a care ethic—because here you are prioritising your loved ones over everything else. That is a type of ethic. It’s a value that you hold close.
Virtue ethics, back to our main topic, are an attempt to shift the question away from what these other frameworks are trying to answer. Virtue ethics aren’t asking “what should I do?” What are the consequences of this? What do the laws say about what I should be doing? Do I love this person enough to take these actions? They’re shifting the question away from the moment-to-moment decisions and towards the kind of person you should be trying to be.
The idea here is that understanding what the right principles might be, or what the extent of the consequences of our actions might be—that’s hard, and we’re not likely to get it right all the time. So maybe we should try to focus on being good people instead. We like good people. We don’t mind when good people make mistakes, because we know that their hearts are in the right place, and we think they’re much more likely to do good than bad. So maybe it’s better to try to be one of these good people than to try to figure out what each of our actions should be.
So we’re not asking what we should do, like principle-based ethics or consequentialist ethics or even care ethics do. We’re asking: what kind of person should I be? And then hopefully you, as a logical consequence, are just going to do more good things. More or less. That’s virtue ethics. I think you get it.
And hopefully, if you get it, you’ve made the leap now from virtues towards values. Because like virtues, organisational values are often the desirable qualities of people. They’re aspirational about character development, just like virtues are. They assume that these things can be cultivated in the organisational culture, and they’re explicitly about what good means. They’re presented in this manner because they’re difficult to codify. You can have codes of conduct that tell people what they should be doing—rules. You can have rewards and punishments that help people pay attention to the consequences. But it’s very difficult to write a rule for every situation that would make someone act in the best interests of the customer. So maybe instead you should concentrate on trying to inculcate that as a value: the kind of people our employees should be, the kind of people our D&D players around the table should be trying to be.
Values are virtues, at least in this specific form. So there’s a little primer on ethics—virtue ethics trying to fill gaps, in fact trying to approach the whole problem of ethics from another angle. And this is the thing that we are so intuitively drawn to when we try and collect people together to do things. We want to put in values that help people understand what kind of people they should be in groups.
And this is a huge problem because there are two massive issues with virtue ethics.
The Indeterminacy Problem
The first problem with virtue ethics is what’s known as the indeterminacy problem. Even back when Aristotle was formulating virtue ethics as we know them today, he pointed out that virtues sit between two extremes.
What is courage? It’s kind of difficult to say what courage is, but we certainly recognise cowardice and we also recognise recklessness. So it’s not either of those. It’s somewhere in between the two. Similarly with discipline—well, discipline isn’t chaos, and it’s also not brittle rigidity. That’s not what we mean by discipline. It’s somewhere in the middle of these two things.
You get it, and if you get it, you might have already gotten the problem, which is: what is the point at which cowardice gets an upgrade into courage? What’s the line over which some act of bravery stops being brave and courageous and starts becoming negligent? It’s not very clear.
There was this recent—I mean, recent as in the last 50 years or so—very influential virtue ethicist, Alasdair MacIntyre, who talks about this specifically in organisations. I’ll link to the book in the show notes because I think it’s interesting. For MacIntyre, virtues aren’t just different in degree, they’re also different in kind. So the problem I just identified is a sort of continuum: courage is somewhere between cowardice and recklessness. MacIntyre is saying it’s even worse than that, because where a virtue sits on that continuum differs depending on what it is that you’re doing. Virtues have to be embedded in practices to make sense.
What courage means for a doctor, versus recklessness and cowardice, has almost nothing to do with what courage means for a soldier or for a teacher. When the British Army says it wants officers to be courageous—when I’m trying to teach them what that means—do we mean the courage of a frontline soldier, or do we mean the courage of a logistics officer? Or do we mean the courage of the officers that populate the recruitment team? Courage in these circumstances isn’t the same thing. And all of that is within even the same organisation. You have different ideas and different standards of excellence. You have these different standards of virtue based on the practices you’re engaged in.
So indeterminacy: a virtue is something that sits between two extremes, but those extremes and that middle differ depending on what it is that you’re doing. That’s the first problem, and it’s not the worst problem.
The Situationist Critique
I think the next problem is the worst problem, which is called the situationist critique of virtue ethics. This comes out of a broader area of behavioural science called the person-situation debate. I’ll put links to the Wikipedia in the show notes—I think this is one of the great Wikipedia reads.
Essentially, there are these people who pay particular attention to the fact that virtue ethics are all about character. What we want to do is embody virtues or values as traits of ourselves. And then they also notice that there’s this sort of troubling lack of evidence that traits are a thing. The vast majority of empirical evidence points to the fact that there’s very little, if anything, that is stable in the human. And rather, what seems to overwhelmingly drive human behaviour is the situation.
Now, that’s not to say that there are no traits. This could be an artefact of experimental design—how do you design a test that demonstrates how a person behaves across different circumstances without the change in circumstance itself driving the behaviour? It’s very similar to the nature versus nurture question. The argument’s basically the same. These things are so tightly intertwined that it’s very hard to tease them apart.
And we also know that there is some stability. Personality is kind of a stable thing. And IQ is a pretty stable thing—not entirely stable, they can change, but they are stable enough that we like to measure them. They wouldn’t be interesting if they weren’t at least a little bit stable. So we’re not out of options for stable human behavioural attributes, but outside of these few things, there isn’t much else. And as a consequence, it’s kind of hard to imagine how something like moral character might be found nested within these stable kinds of traits, like personality or like IQ.
And then, contrasted against that, there’s this handful of experiments starting in the 60s and 70s—though they extend to the present—that demonstrate that the situation can be made to annihilate the individual capacity for virtue. I’m thinking of Milgram’s electroshock experiments or the Stanford Prison Experiment. These are examples of catastrophic ethical leadership failings in which the situation led average people to shock somebody ostensibly to death in the name of science (the Milgram experiments), or led undergraduate students to brutalise each other while simulating a prison (the Stanford Prison Experiment).
And while most people—and I complain about this elsewhere—get the basic facts of these experiments wrong, it’s actually the details they simplify away that make very clear just how influential the situation can be when we try really hard to make it so.
So: a lack of stable traits in humans, set against evidence that the situation overwhelmingly drives human behaviour. While everybody is asking “what virtues comprise the best moral character?” or “what values should we be inculcating in our rowing club?”, the situationists are asking: is there even such a thing as character?
It’s not very heartening, and it should make you very worried if you’re the kind of person who’s trying to think about what values you want in your organisation. Thankfully, I wouldn’t be doing this little lecture if I didn’t have answers for you.
Design the Context
We haven’t painted a very flattering picture of the project of values or virtues or virtue ethics. I’ll summarise. We’ve known from the start that virtues themselves are a little vague—they sit somewhere between two extremes. But it’s not just that. They’re also located differently on that spectrum depending on what practice you’re engaging in. So it’s not this continuum from cowardice to recklessness. It’s this sort of moral landscape with many peaks and valleys where courage means different things depending on what you’re doing. And it’s very easy to get lost in this hilly terrain.
And it’s worse than that, because even if you manage to locate the peaks that you care about, people aren’t going to do anything about it. They’re not reliably going to display those peaks. The situation overwhelmingly drives their behaviour, no matter how committed they are to embodying the virtues you want them to embody. That’s what all the empirical evidence seems to suggest.
Now, like I said, this entire lecture was essentially prompted by an old colleague who called me asking how he could help his executive team with their new values initiative. And I spent about seven minutes describing what is looking like it will be 15 or 20 minutes for you, before realising that he—possibly like you—probably didn’t really care about the background. Nobody ever does. It’s leadership consulting after all, not brain science. And you’re probably listening to this on a drive to work. Instead, you want sexy-sounding solutions that can help you in your group enterprises.
So I’ll give you the sexy-sounding solution, which is: if the situation is all that matters, if the environmental context is all there is, then just design that. The situation. The context.
This isn’t a new idea. MacIntyre, who I was telling you about before, makes it clear that institutions need to create structural opportunities for action. We have other ethicists too. John Doris talks about empirically informed, context-sensitive approaches to ethical behaviour. Maria Merritt speaks to something similar, though she focuses more on how an attitude of humility can help you identify that context dependency.
I don’t actually think that you need to be so highfalutin as all that. You could read those people—you’ll get a lot of good ideas—but you could also just be very straightforward about it.
Come up with values. But if you want people to do them, first of all, you have to solve the indeterminacy problem. You have to come up with values that you can then articulate, to show people what they mean. It’s not enough to tell people to be courageous. It needs to be clear to them what that means.
Google, for example, famously dropped “don’t be evil” from their manifesto. That seemed like a real problem, but actually, having it wasn’t useful at all. If you’re a software engineer working at Google, you know that everybody’s sort of upset about all this attention-stealing algorithmic behaviour that’s going on, but it’s not really clear why it’s bad other than that people don’t like it. It doesn’t correspond to any particularly obvious trend in negative outcomes for people. So it seems like stealing your attention is evil. But equally, nobody likes the non-algorithmic alternatives. Nobody liked RSS, and that’s been around for about as long as Google has. I bet most of you listening right now don’t even know what RSS is. So what does it mean for a software engineer working for Google to not be evil? It doesn’t make any sense to include a value if it’s not clear what it means.
In contrast, surgical teams make this very clear. They have what are called “speak up” protocols. Anybody in the surgical theatre can call a halt, often with an exact phrase. And it’s often built into the procedure: at certain points in the operation, you’re supposed to speak up if you notice something’s wrong. So here, courage is actually operationalised in practice. You have to show people what a value means.
You can also do that by hijacking the human tendency for conformity. This is something that I teach at Sandhurst. People don’t like the idea of conformity, but it’s actually very useful because when we don’t know how to behave in groups, we conform to solve that problem. So just have influential people model those virtues.
There’s this sort of quote from an Aussie general that you hear around the halls of the military academy here at Sandhurst and also back home at Duntroon: “The standard you walk past is the standard you accept.” It sounds kind of trite, but it’s true. Because if senior people cut corners—if the leaders of a group cut corners—everybody else is going to as well. So figure out how you want people to behave and show people how to do it. They’ll conform to you if they don’t know how they’re supposed to behave.
That’s one thing you can do, but the other thing you can do is identify what situational factors need to be present to motivate people, and design the environment to encourage this. Now, this is classically called choice architecture, and I’m kind of critical of it elsewhere. But it does work in certain circumstances.
For example, if you put a cheap bottle of wine and an expensive bottle of wine on your wine menu next to the bottle of wine that you want people to buy, then they’re going to buy that—because most people don’t take any pride in being needlessly cheap and they don’t take any pride in being stupidly reckless with their money. So they’re motivated to buy the medium-priced bottle.
There are heaps of examples of this, and heaps of models of motivation that you can use to help you work out what will motivate people to behave a certain way in your group. Open-plan offices are a good example—a good example executed poorly, because I think there is evidence to suggest they’re not really very good for productivity or people’s wellbeing, but they certainly force a certain kind of behaviour. They force something other than private conversation. Or maybe a better example is putting hand sanitiser at eye level: it makes people use it heaps more. So you’ve got to put the structure in place for people to behave how you want them to behave.
And then the last thing you can do is just skip all of that and concentrate on what people believe. Because people really only pay attention to what they believe is important. This comes out of work that I used to do before coming to Sandhurst as a brain scientist. Attention is fundamentally belief-shaped. There’s a lot of evidence that we just don’t even notice things—and it’s not that we’re ignoring them, it’s that we don’t even perceive them if we don’t believe they’re important, if we’re not expecting them to be important.
There’s a fantastic example I’ll link in the show notes. It’s a game where you have to watch basketball players play basketball and count the number of times they pass the ball. I don’t want to spoil it, but go watch it and you’ll see what I mean. So you can concentrate on their beliefs and help them pay attention to the things that you care about.
A good example of this is checklist culture in aviation. Pilots really believe that checklists save lives, so they use them even when they’re feeling particularly cocky, or even when they’re tired, because they believe in it as an enterprise. Alternatively, having “customer first” values isn’t going to have any effect if the only messaging your salespeople get is that numbers are the priority. So you’ve got to help people understand and believe that the virtues on the table actually matter.
Design not so much the virtues but the context in which those virtues sit. Choose them. Choose your values. But don’t spend too long waiting for people to adopt them. You have to design the context to help people along. Otherwise your virtues, your values, they’re hardly going to matter.
I’ll leave it at that.