The following is the latest installment of The Hub’s new series The Business of Government, hosted by award-winning journalist and best-selling author Amanda Lang about how government works and, more importantly, why it sometimes doesn’t work. In this five-part series, Lang conducts in-depth interviews with experts and former policymakers and puts it all in perspective for the average Canadian.
This week’s featured guest is Michael Hallsworth, the managing director of BIT Americas, a leading behavioural insights consultancy that grew out of a unit of the U.K. government in 2010. He’s also the co-author of the book Behavioural Insights. He joins the show to discuss the politics and policies shaped by nudge thinking and the use of behavioural psychology to shape government and public sector outcomes.
Read Amanda’s accompanying column on this topic here.
You can listen to this episode of Hub Dialogues on Acast, Amazon, Apple, Google, Spotify, and YouTube. The episodes are generously supported by The Ira Gluskin And Maxine Granovsky Gluskin Charitable Foundation and The Linda Frum & Howard Sokolowski Charitable Foundation.
AMANDA LANG: This week, a look at the politics and policies shaped by nudge thinking, or the use of behavioural psychology to shape outcomes. Nudges have been used successfully for decades now, but are they being used as well as they could be? And in this increasingly polarized world, is a nudge a little too delicate to get the job done? Michael Hallsworth is managing director of BIT Americas, a leading behavioural insights consultancy that grew out of a unit of the U.K. government in 2010. He’s also the co-author of the book Behavioural Insights. It’s so good to have you for this, Michael. We really appreciate your time.
MICHAEL HALLSWORTH: My pleasure.
AMANDA LANG: Michael, I want to start with Nudge, which got a lot of attention, obviously, including some really important economics prizes, in terms of how it could be used by government. Where are we with it? We know it’s alive and well, and there’s definitely some recent evidence of its use, including in the pandemic. Characterize where it is on its journey.
MICHAEL HALLSWORTH: Yeah, sure. So the book Nudge came out in 2008, and it really sparked a lot of interest in this field. But you have to remember there’d been interest building in other senses as well. Governments had become increasingly receptive to using behavioural science, and the financial crisis had made them look for other ways of doing things, particularly ones that didn’t cost a lot of money. That meant there had been a broader trend toward interest in behavioural science. Nudge was really the catalyst that sparked the flame, and it got a lot of attention. But what’s been clear over the last 15 years is that there’s been a behavioural science, behavioural insights, movement applied in the public sector that is broader than the concept of nudge. And it’s worth being aware of this, because the idea of nudge is one where you basically set up options so people are more likely to choose one that benefits them, but they’re free to choose otherwise. It was created to appeal to both left and right. But it also has certain limitations. It says that legislation and taxation are off the agenda. We don’t look at those. Now, of course, what’s happened is that there’s been a broader application which says, “Well, behavioural science is a body of knowledge, a way of doing things, a way of understanding human behaviour that can be applied, for example, to improve the way taxation works, improve the way legislation works,” rather than saying, “It’s nothing to do with that.”
So where we are right now is we’ve had a tremendous amount of success. There’s been a lot of attention, a lot of investment. There’ve been 300 separate institutional teams set up around the world to draw on this body of knowledge. A kind of ecosystem has grown up. What I think now is we are moving into a new phase of maturity, where we know, roughly speaking, what size of improvement we can get from a certain type of intervention. We also know that we need to broaden out the way we’re thinking about behavioural science, rather than just thinking about nudges and the fact that people often react in non-conscious ways. There’s a broader agenda here around people’s goals, motivations, and so on that we can take into account. And so I think we are moving into that phase. We’ve made the case for behavioural science. People have shown it can make a difference. And now it’s a question of: How much difference does it make? What’s the right option in which circumstance? How much difference does it make for different populations? That additional level of sophistication and maturity is where we’re at now, after about 15 years.
AMANDA LANG: And I think that’s a really great distinction to make, especially because I think a lot of us have maybe retained a more simplistic view of how behavioural science can be used. Is it safe to say that there are now—and I’m going to over-simplify a little bit here, so forgive me—but is it safe to say that there are now almost two categories, and one would be behavioural science as it relates to the user or consumer, or recipient of information or policies, but also this growing body that I think you might be part of, of how you use behavioural science and the understanding of it to build complex systems, of which governments are one, right? Internally, departments are super, super complex. And which of those two do you think is the more—I hate to say important, but where is the fun work being done?
MICHAEL HALLSWORTH: There are a few different ways of cutting this: you can think about the emphasis on individuals versus systems, or you can think about the emphasis on policy versus government itself. The flexibility is quite helpful, I would say. So, a few different angles. I think there’s a massive, underappreciated opportunity here to improve the way government itself works. About five years ago, I did a whole report, a set of recommendations on how you might do this, called Behavioural Government. And it basically made the point that in behavioural science, behavioural policy, nudge, we’ve been very much talking about how government does things, if you like, to people, in a very simplistic sense. How does it set policies up? But actually, policymakers and government departments use the same mental shortcuts that we all do. They’re not immune to that.
So governments are over-optimistic in terms of their plans. Large projects go over budget. There is groupthink: when people are in small groups, they tend to reinforce what each other is thinking, and decisions become more extreme. Some people say, “Oh, that means that government should just get out of the business of trying to influence behaviour because it’s not perfect.” I disagree. I think there are ways you can build institutions differently, change the way they work, to make these kinds of, if you like, biases less likely to happen. It requires an institutional approach, though. For example, building in break points where you say, “Let’s reassess the assumptions.” There’s an idea called the pre-mortem: instead of a post-mortem, where you work out what went wrong, you try to work out what could go wrong in advance. You give people the license to think about those doubts in the back of their minds and bring them out.
AMANDA LANG: So one of the things—this is a little tangential, but I feel like it’s important to introduce this concept fairly, and that is: I was looking at some data on how journalists, the media, anybody who should be in a quasi-critical role regarding how policies are developed and implemented, have been interestingly tilted in favour of the use of behavioural techniques and psychology, nudge, if you will. And we could say, “Well, that’s because it’s all common sense,” but it can also be alarming: it feels like a smart thing for governments to do, and therefore we’re maybe a little less helpfully critical about how it might be abused, because, of course, it’s ripe for abuse. Do you think we’re in the right place in how we think about it, in terms of how benign we assume it will be?
MICHAEL HALLSWORTH: Really great question. So, a few ways of thinking about it. I obviously cannot control what everyone else does with behavioural science. The genie’s out of the bottle, in a way; people know that these kinds of ideas exist, and, like many tools, they can be used for better or worse purposes. I do know that the field of behavioural science has been thinking about ethics and what the appropriate ways of dealing with that are. My view is that you can’t deal with that kind of stuff in the abstract. You need a framework that allows you to work through the case in front of you and work out the costs and benefits involved and—do people have a strong intention in a particular instance that needs to be respected, for example? So I think that’s a real concern. I think part of the reason people haven’t been as critical as we wondered they might be at the start of all this is that they’ve actually seen that a lot of the applications by governments have been fairly uncontroversial. They’ve been things like helping people to pay their taxes on time. They’ve been more banal things that are really important in terms of making government work, but they haven’t strayed into those more controversial areas, because I think some of the guardrails have been there.
There are bigger concerns, you might say, around, for example, tech companies, which have a vast amount of data with a lot of precision. They can also employ what’s known as dark patterns: building your online environment to encourage you to spend, to keep you hooked, and so on. Now, there’s very little awareness of that, because the data and techniques are often held by private companies. And I think there is increasing interest in that, because people know that behavioural science can be powerful. It varies by context, but it can be, and it can be used for these purposes, which maybe people didn’t sign up for. So it’s a really complex area right now. I think there are ways it can be dangerous. I think in the past people have been positive about it because it’s been less controversial, but that doesn’t mean it won’t be controversial in the future.
AMANDA LANG: Well, I guess one place where we do see it, and it’s muted criticism relative to the temperature around this subject, is that we know the U.K. government, for instance, and others did use some behavioural psychology around vaccine mandates, around health mandates, in the pandemic. Do you see some of those as an appropriate use? Because the dovetail question I would ask is whether the danger of trying to use some of these understandings is oversimplification. Because in order to fit them neatly into a proposition, you do have to distill things down. Now, maybe the public needs things simplified, and we can’t actually lay complex problems before a broad citizenry and hope that they come to the right answer. I don’t know. But we certainly didn’t see a lot of that. We saw oversimplifications of big, complex problems during the pandemic. Is that the only way to do it? And does behavioural economics lend itself to that?
MICHAEL HALLSWORTH: So I’m not actually aware of behavioural economics being applied to vaccine mandates. A mandate really can be just a very simple use of the power of the state, if you like. It doesn’t require any behavioural economics input. What I do think is that over the course of 10, 15 years, there has been a drive by me and others to increase understanding of behavioural science in government, because we’ve taken the view that if you want evidence-based policy, there is evidence about human behaviour, and it makes sense for people to know the evidence that is actually accurate, as opposed to relying on assumptions. Now, in that process, you may have to simplify. Like if I say to you, most people are influenced by what others do in the same situation. This is the social norm point. Have you considered social norms? I am simplifying here. Sometimes people aren’t influenced that way. It depends on what group you’re referring them to. If that’s a group they admire or one they really don’t like, you can get opposite reactions. Now, if I go around saying that, what may come through is the quite basic understanding that you should do this. And that, if misapplied, even with good intentions, can lead to a worse outcome. So I think there’s something in that. The broader awareness has been achieved, but that doesn’t mean you are automatically going to get a good outcome. So I think that is a risk. I think overall you’ve got a positive net benefit, but those risks are there.
AMANDA LANG: One thing that did occur to me: as we see increased use of machine learning technologies and artificial intelligence married up with some of these tools (and they are tools, and some of them have been quite well developed), is there a new wrinkle in this puzzle? And, in fact, do people like you even worry about losing control, I guess, of how these things evolve?
MICHAEL HALLSWORTH: So I definitely think there is a question here. And I think it goes back maybe to the ethical appropriateness question. My concern is that a lot of the time, machine learning has been spoken about in a technocratic way. Look at the new things we might be able to do. We can combine behavioural science with machine learning to create targeted interventions that are really effective, or we can test to see how effective they are. But there is, I think, a prior question about what are the bounds of acceptability here. What do people want? How do you prevent it being creepy? How do you prevent it being inappropriate? Because we’ve tended—I mean, governments have tended to offer services at scale for everybody. What level of personalization are we comfortable with from government, which is funded by general taxation normally? I don’t think there is a technical answer to those questions. I think it’s a political one. But I don’t know who’s having that debate, to be honest with you. And I think it is an important one. For example, in many countries, men pay more than women for car insurance, and we accept that. But if you were to look at other demographics or characteristics paying more for something, you wouldn’t feel as comfortable. So this feels like a massive, unknown debate that should be had really before you start thinking about, “Let’s do these really targeted interventions.”
AMANDA LANG: Do you believe in a philosophical way that when we think about just using, I’m going to keep saying nudge, but I know that it’s bigger than nudge; it’s more complex than that, but using behavioural tools and psychology, it lends itself to a big government mentality? Is it antithetical to say, “I believe in a small, lean government that will leave me alone much of the time, but when I do relate to it, it may be using some of these policies?” How do you see that kind of tension?
MICHAEL HALLSWORTH: I don’t think that’s necessarily true. And I would say that rather than seeing behavioural science as a tool, necessarily, I think it’s better to see it as a lens through which you see actions. And I mean this because I think it can also lead you to reassess things. So it may lead you to say, “Well, actually, the solution here is not to have an intervention.” That may not be the best thing. Or it may lead you to realize, “Actually, an existing intervention we have is counterproductive now that we think about how it’s actually going to influence behaviour.” Or it may lead you to say, “Rather than trying to change behaviour, let’s design a policy around people’s existing behaviours.” I also think there is a strand of the way behavioural science is being applied which is about making sure that public money is used wisely and effectively. Because the way this has grown up has been around testing things. We can’t assume, given behaviour is complex, that something is going to have a particular effect. So we always try to test it and scrutinize it. And I think that’s a non-partisan approach as well, because it may lead you to conclude that something doesn’t work. And I have been in situations where I’ve said, “This thing we tried didn’t work.” That is not the same as saying, “You just keep doing more and more and make your government bigger.” It’s taking a reassessing approach to government. So I don’t think it’s as simple as saying it means bigger government at all, actually. It can be used in various ways.
AMANDA LANG: And I guess, to your point, it can be used to really improve the existing delivery of services. Is it being used that way? I mean, is this one of the tools governments are using worldwide, governments that are thinking this way about trying to be better?
MICHAEL HALLSWORTH: Yes. So, for example, in the U.S., certainly over the last eight years as an organization, we’ve spent a lot of time working with governments to improve their ability to run rapid trials and experiments. This is through the What Works Cities program from Bloomberg Philanthropies, which is now actually expanding into Canada. It works with cities to help them use data and experimentation to improve the way they deliver services and understand what effects they’re having. So I do think it can be really integral. I don’t see this as a nice-to-have. I think it can be really central to the way governments work, because most policy and most service provision is intended to have some kind of behavioural effect. You want people to put their bins out at the right time, if you’re a city. You want people to pay their taxes on time. Focusing that way, understanding what the effects are through testing, and even designing strategies that take this stuff into account, is the core thing that government should be doing, in my view.
AMANDA LANG: So one of the more depressing facts I learned about behavioural psychology when it came to personal finance is that education—teaching people more about personal finance, which has been the goal for a long time, on the theory that they’ll then get better at it—isn’t actually one of the most effective tools. Even when people know more about it, they still don’t behave better, which to me is quite depressing. Maybe, given everything you know about human beings, you take that in stride. But that kind of reality about our human limitations says to me, “If we take this to its extreme, if we continue to build out systems that respond well to how limited and fragile and vulnerable our behaviours and brains are, to the way our brains actually interact with our world and experiences, we will actually get further and further away from empowerment that comes from understanding why we’re doing things.” I know that’s a very extreme view, but do you see that as a continuum that leads people away from real agency?
MICHAEL HALLSWORTH: I actually don’t think that, because I don’t think the distinction is as clear, for two reasons. One, behavioural science can be used to improve education. So this is the idea of, if you like, a boost, which is framed as an alternative to nudge, although really it’s a range of options you choose from. And that’s where you teach people how to use or develop mental shortcuts so they achieve behaviours that fulfill their goals. It might be, for example, working with entrepreneurs who are just starting out, helping them use simple rules of thumb, mental shortcuts, for organizing their accounts. And that has been shown to work better than traditional education. It’s harnessing the processes that behavioural science uncovers. And the other thing I would say is that, if you expand on that, behavioural science can be used to empower people more generally. If people are aware of what actually produces a certain behaviour, they can invest in that. So if you find that, for example, redesigning your environment in small ways is much more effective than relying on willpower, I’m not disempowering you by telling you that fact. In fact, I would say the opposite: I’m helping you do something more effective by taking into account the fact that we tend to overestimate our ability to resist temptation when we’re confronted with different options. That, for me, is empowerment: helping people understand how their behaviour comes about, and then addressing that.
AMANDA LANG: I guess I’m thinking more of an example, and I think it’s a real-life example, maybe not, but it’s one that I’ve seen often: making the glasses smaller in the bar so that people unwittingly drink less. And you’re actually charging them less, so you’re not stealing or anything, but you are changing their behaviour without their permission or without their knowledge. There are some of us who would say, “Well, that’s great, because I didn’t notice, and so I guess I don’t need as much.” But there are others who will say, “I didn’t make that choice,” and they just rebel against that. And I feel as though that category of people is pretty big these days. Certainly in our country, and in the U.S. too, I think it’s safe to say, there are people who simply don’t want official agencies, whether that’s the government or other bodies of authority, telling them what to do. And they certainly don’t want it happening without their knowledge.
MICHAEL HALLSWORTH: Well, I think the first question is: Who is “we”? Because what has definitely happened is that portion sizes have increased over the last 40 years. Now the question is: Were people consulted? What form of consultation would you like? It’s more that the market decided people wanted to buy it, advertising stimulated demand, and so on. So there’s a real question about who is making the decision here, and what level of scrutiny we have on one side or the other. Because if you are saying that maybe bars are offering smaller glasses and charging less, I think that’s just the equivalent of them offering bigger glasses or a bigger burger. I found out recently that Wendy’s introduced the triple burger to get people to buy the double burger, and it was a massive success. In behavioural science, it’s called the decoy effect. So this kind of stuff is happening all the time. When do we react badly to that, and when do we just say, “Well, that’s just more choice, or a different kind of choice?” You are right, though, that reactance, as we call it, being really annoyed at being deprived of a choice, is a very real thing. And it’s not always predictable when people are going to get really annoyed about it. That’s why I think it’s really important to try to test some of this stuff in advance on a small scale and get people’s reactions, rather than just assuming you know, because we can’t always predict this. And you’re right, it’s a danger, and it’s a problem we need to be aware of. We need to see the world as it is.
AMANDA LANG: I guess on that front, I’ve wondered about political polarization—you alluded to this earlier, and I don’t know the right behavioural science term for it, but the fact that we identify with groups seems to be a problem that needs to be addressed. If we’re trying to help people get to certain places or make certain decisions when group identity politics is as strong as it is today, are there fixes for that? Are there things you think governments are actually doing and thinking about to try to help reverse that a little bit?
MICHAEL HALLSWORTH: Really great question. There’s no doubt that the post-rationalization of stuff is really powerful. There are experiments showing that you are less likely to solve a simple math question if the answer goes against your political preconceptions, like you’re just—
AMANDA LANG: Terrible.
MICHAEL HALLSWORTH: It’s very powerful. Now, how do you deal with some of those things? Well, there have been some successful attempts to bridge groups. There’s a really amazing study done by Betsy Paluck at Princeton around using the power of soccer/football to bring together Israeli-Palestinian children. By bridging those kinds of potential divides in an evidence-based way, you can create new groups that, although they’re very contingent, can actually be effective. You can also have things called deliberative forums, where people get in there and, rather than just arguing against each other, they have to engage with some kind of evidence. Over time, people’s opinions do begin to shift as they get further under the surface. And sometimes just asking people to explain how they think something works can lead them to pause and reconsider. Because it actually turns out that you and me and everyone, we can find it really difficult to actually explain something. There’s this thing called the illusion of explanatory depth, where we think we know how things work, and then when we actually try to work it through, we go, “Okay, interesting. I actually don’t, so let’s—” That could lead to some openness. So there are pointers, but it’s a really tough area.
AMANDA LANG: Well, I’m curious to know what you think, given how much is still going on in this field. I know you wrote a manifesto about where this science can go and things people should be thinking about, and to me there’s a lot of optimism we can bring to what this might do for us. Understanding our human frailties, our neurological frailties, seems like a really good place to focus when we’re trying to make things work better. Can we be optimistic that it will actually help? This is almost a political question, so forgive me. But can it help us out of the place we’re in? It sometimes seems like a very dire, negative place of polarization, of conflict, and of oppositional behaviours by groups. Will our own self-knowledge help us out of it?
MICHAEL HALLSWORTH: So my answer would be that it’s not just about understanding frailties; it’s also about understanding strengths. The rapid and non-conscious way we make judgments can also be very effective at helping us navigate our lives. That’s the first thing. The second thing is I think it can help us redirect our energies to the stuff that might actually influence behaviour. What I mean by that is the kind of organizations in which we make decisions, the processes that, without our even realizing it, take us in certain directions. That, I think, offers some hope. There’s a lot of work that’s been done around something called conversational receptiveness as well, by Julia Minson at Harvard. And that offers a route to having more effective conversations. So rather than my just giving my opinion, thinking you’re wrong, and trying to convince you by hammering it into your head, conversational receptiveness is about certain phrases that can open up the conversation and prevent the defensiveness that comes about when someone challenges my whole identity and I react by just putting up the defences. Now, I think that’s a bit of light. I think there are ways you can have a better conversation if you understand what leads to that defensiveness occurring. And it can be small things, which also means small things can point you in the opposite direction. So I think there is some hope from the behavioural sciences, but part of the solution has also got to be the hard political work of de-escalating some of the polarization.
AMANDA LANG: So just bringing it back then to governments and where they are in their thinking and evolution of this, that seems like a really useful place for the political side of government to be thinking. Of course, they may feel they benefit more from some of the polarization out there, some of the conflicts. Do you think governments are in a good place in terms of how they’re using behavioural sciences?
MICHAEL HALLSWORTH: I mean, that’s a really big question. I think it varies. It obviously varies. I think that what has definitely happened is the base level of awareness about some of the main insights has increased. So I think people will have more pause, rather than just putting together really poorly thought-out policy. I think one criticism is that maybe we still haven’t got into the really big policy issues and trying to influence those through behavioural science. And I think that’s fair because I think a lot of the time, we’ve focused on changing specific aspects of how things are done so we can show that they had an impact. Like, you change this, you get that impact. But of course, a lot of these big problems are embedded in really big complex systems. I think there is a potential here from behavioural science because you can focus on particular behaviours in a system that can then spread change in unpredictable ways throughout the system. And I think that’s a way forward, which I really would encourage governments to embrace. And I’ve set out how they might do that in really practical ways.
AMANDA LANG: Thank you so much for joining us.
MICHAEL HALLSWORTH: Thank you. It’s been a pleasure.