Hub Podcast

Are we better off without Facebook and Twitter?: Tech journalist Charles Arthur on the polarizing effects of social media

Senior campaigner from SumOfUs Flora Rebello Arduini adjusts an installation outside parliament in Westminster in London, Monday, Oct. 25, 2021. A 4-metre-high installation depicting Mark Zuckerberg surfing on a wave of cash was constructed outside parliament, as Facebook whistleblower Frances Haugen is due to testify to MPs on how the company puts profits ahead of public safety. The action comes after SumOfUs research revealed Instagram is still awash with posts promoting eating disorders, unproven diet supplements and skin-whitening products. Kirsty Wigglesworth/AP Photo.

This episode of Hub Dialogues features host Sean Speer in conversation with British author and long-time technology journalist, Charles Arthur, about his thought-provoking book, Social Warming: How Social Media Polarises Us All.

They discuss the divisive effects of social media, the problem of content moderation, and whether or not social media has been good or bad for society on the whole.

You can listen to this episode of Hub Dialogues on Acast, Amazon, Apple, Google, Spotify, or YouTube. A transcript of the episode is available below.

SEAN SPEER: Welcome to Hub Dialogues. I’m your host Sean Speer, editor-at-large at The Hub. I’m honoured to be joined today by author and long-time technology journalist, Charles Arthur, who is the author of the thought-provoking book, Social Warming: How Social Media Polarises Us All. I’m grateful to speak with him about the book and its key arguments, as well as his recommendations to address the excesses of the social media age. Charles, thanks for joining us on Hub Dialogues, and congratulations on the book.

CHARLES ARTHUR: It’s a pleasure. Thanks so much.

SEAN SPEER: Let me start with the book’s evocative title. What’s social media’s analogy to global warming? What are the similarities that you see?

CHARLES ARTHUR: I started thinking about this in the context of the 2016 U.S. election, where the Trump campaign made very effective use of Facebook targeting particular states, and where the vote was tipped by tiny fractions, something like 0.1 of a percent. It seemed to me that you were talking about a very small change having a big effect, and that is rather like what we see with global warming, where small changes in temperature can cause big changes in all sorts of behaviour, all sorts of other effects. It’s rather like when ice turns to water. It’s only a small change in temperature, but the actual phase change is very dramatic.

I started to wonder about the extent to which social media could have similar effects, whether you were seeing something where people who use social media a lot would change their behaviour, whether they would become more annoyed with each other, perhaps, whether they would become more angry, more radicalized. At the time, I was doing some work at Cambridge University looking at the effects of being online. I did some polling which found that the people who spent the most time online were the ones who held the most radical political positions. This seemed to me an interesting fact.

When you then ally it with the way that social media puts those sorts of people in front of you, then it seemed to me that you have an effect that is very broad, where everyone is in effect contributing to it. As with global warming where you drive your car around and you think, “Well, it’s not having much of an effect, is it? I’m only driving to the supermarket,” but all these little incremental effects have an additive effect. In the same way, the use of social media is one of these things where every little bit of aggravation that you feel, or perhaps that you cause by some random tweet or Facebook comment that you throw out, all these things just add up and increase the social temperature in this way.

SEAN SPEER: A major argument in the book is that the social media business model has preferenced growth and scale and not concerned itself with externalities. What are its externalities? What, in your view, Charles, have been the costs that have accompanied social media’s exponential growth?

CHARLES ARTHUR: Wow, you can pick so many things, but the externalities, as I see them, are people being more and more annoyed with things. People really taking issue, taking umbrage, with things that other people say, which is just something that’s going to happen: get a lot of people online and you’re going to be exposed to different views, and some of those views you’re not going to like. That’s obviously different from living in a city, where you don’t know most of the people in the street, or even a town, where you do know lots of people but just don’t mix with the people you disagree with.

For social media, there’s actually a business model which is predicated around keeping people engaged. The thing that engages us—there’s a scientific paper which I quote in the book which finds that the thing we pay the most attention to is stuff that outrages us. It’s things that fire up our tribalism and make us feel that what we’re seeing is something that should be rejected, that makes us feel more strongly part of a tribe, whatever that tribe might be. It might be a political tribe or it might be a behaviour tribe where we say, “Well, this is terrible behaviour, whatever this person is doing.” Or it might be a sports tribe, you like your sports team, you don’t like the other sports team, all these sorts of things.

The business model for the social media companies is to keep you interested, to keep you engaged, by showing you things that will outrage you. The algorithms behind them don’t know that this is what they’re doing. They’re just saying, “Well, I show this to people and they spend more time on the website.” What they don’t realize is that the time you’re spending on the website is the time where you’re saying, “What this person has said, what this person has done, is ridiculous and stupid, and I’m going to write a tweet, I’m going to write a Facebook comment, I’m going to do whatever, telling everyone how this person is stupid and wrong.” And all of that, as I said, raises the social temperature.
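
What Arthur describes here is a feedback loop: a ranker that optimizes only for time-on-site ends up boosting outrage without ever measuring it. The toy simulation below is a minimal sketch of that dynamic, not any platform’s actual system; the dwell_seconds model, the numbers, and the epsilon-greedy update rule are all invented for illustration.

```python
import random

random.seed(1)

# Each post has a hidden "outrage" level that the ranker never sees.
posts = [{"id": i, "outrage": random.random(), "mean": 0.0, "shown": 0}
         for i in range(10)]

def dwell_seconds(post):
    # Sketch assumption: more outrage-provoking posts hold attention longer.
    return max(0.0, 10 + 60 * post["outrage"] + random.gauss(0, 5))

for _ in range(5000):
    # Epsilon-greedy: mostly show whichever post has the best observed
    # time-on-site so far, occasionally explore a random one.
    if random.random() < 0.1:
        post = random.choice(posts)
    else:
        post = max(posts, key=lambda p: p["mean"])
    # Update the running average of time-on-site for the post shown.
    post["shown"] += 1
    post["mean"] += (dwell_seconds(post) - post["mean"]) / post["shown"]

top = max(posts, key=lambda p: p["shown"])
print(f"most-shown post: outrage={top['outrage']:.2f} "
      f"(pool max outrage: {max(p['outrage'] for p in posts):.2f})")
```

Run it and the most-shown post is almost always the one with the highest hidden outrage value: the ranking rule mentions only engagement, yet it selects for outrage, which is exactly the accidental, unintentional character Arthur ascribes to these systems.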

You see it with politicians who take more and more extreme views in order to get a reaction, in order to strengthen the people who are their tribe. But at the same time, this means that your politics becomes more extreme. Your politicians become more extreme. They start to move to the edges, to the fringes of the positions they might have held. If you look, for example, at how many Twitter followers or Facebook followers particular politicians have, you start to notice that the ones who hold the most extreme views are the ones with the largest numbers.

The ones who are the centrists, the ones who are the compromisers, have fewer followers because they don’t say things that outrage people. They’re trying to mollify people and that doesn’t actually fit into the schema for the social media companies. All these things lead to us behaving in ways that are more extreme, I think, than would happen if we didn’t have those sorts of effects.

It’s a bit difficult to separate out and ask the question, “Well, what if we’d never had social media? What if Facebook hadn’t existed? What if Twitter hadn’t existed? What would our social experience of the internet look like?” There are actually examples in the book. I look at the example of Myanmar, also known as Burma, a country that didn’t really have any internet at all until 2010, when suddenly you got smartphones. What happened there was you had latent antagonism between ethnic groups, but once you had smartphones, once you had Facebook, that really spiralled, it really took off, and very quickly turned into a situation where you had what the United Nations classed as genocidal intent towards the Rohingya Muslim population.

SEAN SPEER: You anticipated my next question, Charles. The book attributes to social media what you describe as a “vicious cycle of anger and outrage that is now spilling from the online world to the offline world.” I was hoping that you could unpack that idea. In particular, is social media causing anger and outrage, is it amplifying preexisting anger and outrage, or is it bringing together angry and outraged people? Or is it all of the above, and if so, how does it do it?

CHARLES ARTHUR: Anger and outrage are very much part of the human condition. There was a time when they were actually survival mechanisms. There was a time when the human population was really very small and tribes were very important to survival. If you weren’t a member of a tribe, then it was likely you wouldn’t be able to gather enough food to keep yourself going for that long. You’re talking around the time of the most recent ice age and so on. At that time, being tribal was important, being a member of the tribe, behaving as a member of the tribe, was important.

Outrage is a very important mechanism for pointing out when someone is not behaving as part of the tribe. If someone is doing that, then you point to them and say, “Look, this person is doing wrong. We’re going to kick them out of the tribe.” To be the object of the outrage was quite threatening, quite a dangerous situation, back in those prehistoric times. That period has passed, but we still have that imprint of tribalism, and the necessity for outrage is burnt into our circuits.

What you then get with social media is that this longstanding instinct that we have gets amplified because it’s something that attracts our attention and so social media finds it convenient. Again, the algorithms that do this don’t know that they’re picking outrage, that they’re picking for these things. They just see it as something which functions to keep people on the website. There’s no intentionality in that sense, it’s an accident.

If websites were to be designed around something different, they’d work in a different way. You could look at something like, for example, the question website, Quora, which is not about keeping you on the website by showing you outrageous things, it’s about answering questions that people have. You don’t see any of these sorts of behaviours there. The way that social media works where it’s showing people opinions, where it’s showing people what’s going on, has this effect of selecting for the outrage and so it brings together people, yes, who are feeling outraged about something.

There’s this classic saying about Twitter, which is that the one thing you don’t want to be on Twitter is the main character. You don’t want to be the person who everyone’s hating on today, and that’s very much a function of the way that outrage works: you get to be the person who is put in the stocks and has fruit thrown at them all day. It’s a pretty unpleasant experience, I think, for anyone who goes through it. And yes, social media is amplifying that, it’s selecting for that, but it’s very much something that’s part of us as humans.

SEAN SPEER: That leads me to my next question. What is a scissor statement, who uses them, and why is social media particularly prone to its use?

CHARLES ARTHUR: Yes, a scissor statement is a fantastic phrase. It’s not my own, I’m afraid, which I’m sad about; it comes from a writer who uses the pseudonym Scott Alexander and runs a blog called Slate Star Codex. He’s a psychiatrist, and he wrote a short story in which he imagined a computer system that came up with what he called scissor statements.

A scissor statement is one where any group of people who read it is immediately split into two camps. You can either agree with it or disagree with it, but you can’t be sort of neutral about it. The effect of that is, of course, that it means that the two camps are completely at odds. They can’t agree on it. Scissor statements work on social media to split the groups who read them and immediately create this tribalism, create this outrage, create the clash and raise the social temperature.

A really classic scissor statement, I think, is one such as “trans women are women,” which you can either agree with or disagree with. There’s no midpoint. You can’t prevaricate around it. Another one that I was interested to see grow up just over the past year or so is the phrase, and it’s not even a sentence, “critical race theory.” People hear that phrase and they immediately feel they’re against it or for it. They don’t even know what it is, but they have these strong feelings about it. That’s pretty much restricted to the U.S., although it has come to the U.K. a little bit, though most people here don’t know what it is at all.

Scissor statements are this fascinating evolution of the way that language works, especially on social media, where you see them propagate around the network, and people find that they are driven to a position where they have to agree with them or disagree with them. For social networks, for social systems like this, they become meat and drink, because they become a focus around which people will argue endlessly, because there’s no resolution. You can’t say, “Well, are they a bit women, or are they a bit not?” There’s absolutely no compromise. Scissor statements, in that sense, are absolutely the rocket fuel for social networks.

SEAN SPEER: Let me ask one final question concerning your diagnosis of the problem before we get into your thoughtful analysis about what to do about it. You talked earlier about the comparison to climate change or global warming and the idea of so-called neighbourhood effects whereby even if people themselves aren’t individually involved in a particular activity, the consequences manifest themselves collectively. It’s actually a relatively small share of the population that’s really active on social media.

There is some analysis of the percentage of the American population on Twitter, for instance, and, even of those, the minuscule share that is actively engaged in political and cultural debates, and yet it seems to have had this multiplier effect beyond that small share of the population. Maybe I’ll just have you elaborate a bit on the causes of this neighbourhood effect of social media manifesting itself in the broader culture, even amongst those who may not themselves be active on social media.

CHARLES ARTHUR: Yes. A pretty good example of an effect of social media that reaches out beyond the people who use it would be, in the U.K., the Brexit vote, where a lot of people were reached through Facebook to vote for it. It had the biggest vote, I think, ever, and yet only something like 72 percent of the potential electorate actually voted. You have a large number of people who didn’t vote, and yet, because of the mobilization on social media, they are affected by the outcome. As your politicians become more extremist, again, your political future is determined by that.

It’s because these media are seen by, to use a common phrase, the elites, especially the people who have power, as a way, A, to reach people directly without the intervention of the media, and, B, to influence what is going on. For example, in the book I look at Ethiopia, which is a country, again, with incredibly low internet connectivity. Really, really low; you’re talking about only just above North Korean levels. But the people in charge were very focused on whether people had access to the internet, as in, they would cut it off if they thought there was political mobilization going on.

People from the diaspora abroad would use Facebook to promote themselves as potential political rivals to the existing rulers, and it became very important to try to influence events through social media. Because there is, as you say, this multiplier effect: if you can get the small number of people in these countries who are using social media to follow along, then you start to have this cascade down, where all the people they’re in touch with directly in the real world will listen to what they say.

It becomes a lever that you can use. You’re standing in just the right place to move the world if you’ve got control of what’s going on through social media. Yes, they’re now seen as the right way to influence what happens, and controlling the message on social media, even just, literally cutting off the internet if you don’t like that message, has become a really essential move for a lot of countries.

I mean, sometimes that cutting off is actually done for good reasons. Say, for example, in Kenya very recently, Facebook, I believe, was actually cut off, or certainly prevented from running political adverts, because it was shown that it simply wasn’t taking enough care to vet those adverts. Kenya has had a lot of political violence relating to presidential elections, and essentially the Electoral Commission there decided that it had had enough of the way that election decision-making, the way that people were making their decisions, was being influenced by unaccountable and yet very influential social media outlets.

SEAN SPEER: This conversation ultimately leads to the question, what should we do about these negative externalities? I want to spend a bit of time talking about the different options. Let’s start with content moderation. The book raises serious concerns about the self-regulation model which relies on social media companies to regulate their own content because it’s in conflict with their profit motives. What’s the alternative, Charles? What do you think policymakers should be doing when it comes to policing false or harmful content, and are there any models that Canada, the United States, the United Kingdom, and others should be pursuing?

CHARLES ARTHUR: Yes, it’s difficult, isn’t it? There’s an assumption that social networks and the internet are sui generis, that they’re a thing unto themselves, that they shouldn’t be compared in any way with things that have gone before. And yet, I’m not sure why that assumption is made. The fact that you can scale these things up really fast, that all you have to do is add another computer and you can serve a million more users, is fine, that’s good, but actually you do have to start thinking about the external effects that you have, the externalities.

When you had newspapers and radio stations and TV stations as your primary media outlets, they had responsibility for what they carried as adverts and as political adverts. It seems strange that because you’re a social network on the internet, you can simply ignore that. Even though you’re running something where the content is chosen and added by other people, adverts especially are something from which you’re making money. Why is it that you don’t have a responsibility, a very close responsibility, for what is published there? More so for political advertising, because that has an effect on democracy.

I find the whole hands-off approach that Facebook has taken over political advertising, and especially over fact-checking it, to be really strange. It’s interesting that Twitter has refused to take political advertising for some time now. I think there was an awareness there of the potential downsides, especially after seeing how it played out for Facebook. I think that’s a good decision.

The trouble is, as you say, it’s expensive to do moderation, but that’s not a reason not to do it. It is important to do, and the problem is that as these networks get bigger, as they get arithmetically bigger, as they go from 100 to 200 users, say, the number of potential interactions goes from roughly 10,000 to roughly 40,000. It goes up geometrically. The difficulty increases much more quickly than the user base does. Your content moderation problem becomes much, much bigger, much faster. To that extent, it seems to me that there are almost limits to how big these networks can or should get, which is the point at which they can’t moderate themselves at all.
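
To put rough numbers on that geometric growth, here is a back-of-envelope sketch; it assumes we count a potential interaction as any ordered pair of users, n × (n − 1), which matches the roughly 10,000-to-40,000 jump Arthur describes.

```python
# Potential interactions among n users, counted as ordered pairs n*(n-1).
# Doubling the user base roughly quadruples the moderation surface.
for n in [100, 200, 1_000, 1_000_000]:
    print(f"{n:>9,} users -> {n * (n - 1):>15,} potential interactions")
```

At a million users you are already near a trillion possible pairings, so a moderation team hired in proportion to users falls further and further behind the interactions it has to police.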

SEAN SPEER: I’ll come to the question of antitrust and whether these companies have become too big, but if we can just stay on the subject of content moderation, what would you say to those, Charles, who’d argue that efforts to target misinformation risk capturing arguments or ideas that may challenge mainstream thinking, but that that doesn’t necessarily make them false or harmful?

I think, for instance, of changing views on the lab leak theory with respect to the origins of the coronavirus, or of perspectives on issues of race, gender, or sexuality which in the past would have been seen as transgressive or harmful. Is there a risk that content moderation overreaches and tilts in favour of greater intellectual conformity, and even undermines progress? How should we think about that trade-off?

CHARLES ARTHUR: There’s always that sort of risk of overreach, and the way you deal with that I think is to consider what it is that you’re trying to prevent. What you’re really trying to prevent is hate speech. You’re trying to prevent people from denigrating others in a way that essentially dehumanizes them. That’s the line that you don’t want to cross. Another line you don’t want to cross is one where you find people encouraging harassment of other people.

As for the marketplace of ideas, the whole lab leak versus natural origin argument over the coronavirus is one, I think, where once the debate is running along those lines, it’s fine. But when you get to the fringes of that argument, where people are essentially dehumanizing the Chinese scientists who worked in the Wuhan lab, then you can see where the limits of the conversation have to lie.

It’s about careful content moderation. It’s about knowing what your goals are. You don’t want to stop the discussion; what you want to stop is raising the social temperature by needlessly dehumanizing people, denigrating people, needlessly leading to people being harassed. I think those are fairly easy lines to draw. When it comes to things like misinformation, that’s certainly a difficult space in terms of whether people are doing it intentionally or not. I think in that situation, it’s one where the social networks have the better view.

They can see the helicopter view: how wide-ranging is this attempt to push this line? Is it something that is being pushed by a lot of bots, by a lot of automated accounts, or is it something that is actually just coming from people who naturally share this view? In the U.K., for example, there were a lot of people who were against the quite strict lockdowns that we had during the coronavirus pandemic. You could argue that it was a bad thing for them to oppose the lockdowns. At the same time, those were honestly held views, and it was fine just to have the discussion about it.

The problem comes when you get people saying that one side or the other is fascistic or evil or whatever. It’s when you’re trying, as I say, to dehumanize, because that’s the step that you don’t want to be taking.

SEAN SPEER: We’ve talked a bit in this conversation about the role of algorithms. Elon Musk, as you know, has made the case for greater transparency when it comes to algorithms. Do you agree? If so, do you think there’s a role for government policy here?

CHARLES ARTHUR: Algorithms are the really difficult one, because so much of what is going on now within social networks is driven by machine learning systems. With those, you can’t really open the box, look at it, and watch how the mouse gets from one end of the maze to the other. It’s almost as unknowable as asking someone how they reach a decision. It’s rather like the way that people think. They don’t quite know how they do it; they just know that that’s how they’ve always thought and this is the way that they’ve reached this decision.

I think that it’s a slightly over-ambitious idea to think that people could look at the algorithms and understand them. To take what’s going on right at the moment, there are lots of illustration systems, AI illustration systems. There’s one called Stable Diffusion from a company called Stability AI, which is a downloadable package of about 4 gigabytes. It’s been trained on millions and millions of pictures from the web.

You can just about reverse engineer it, you can just about see how it is that it learns to draw a picture when you ask it to draw one. But you can’t quite see it; you can’t quite understand why it is that one prompt will make one picture and another prompt will make a slightly different one, but not that different. When you’re talking about the algorithms that run social networks, you’re talking about something far more complicated, so I just think that’s an impossibility.

You can tune these things. You can look for the outcomes. I think it makes more sense, if you’re really going to do this, to try to legislate around the outcomes, but even that is difficult. How do you measure engagement? You have to ask the companies: what is it that you’re actually trying to achieve here? Really, that’s what you need to focus on, rather than thinking you’re going to have an army of geeks who will puzzle over the algorithm once the government has had it open-sourced. I just think that’s totally unreal. It’s just not going to happen, for one thing, and it wouldn’t produce an output that you could use anyway.

SEAN SPEER: Let’s come to the option of breaking these large social media companies up. How might that help to improve the situation? Would a more fragmented market reduce or minimize some of the negative externalities that we’ve been talking about?

CHARLES ARTHUR: As I said earlier, it’s just natural that as these companies grow, as their user base grows arithmetically, the potential for trouble grows geometrically, because you’re now bringing together people from all over the world, people who have distasteful views, people who are looking to get together and harass others. If you’re getting people from Tonga together with people from Canada, together with people from Estonia, who all want to hassle someone who lives in Brazil, then they can. The bigger your network is, the more easily it can do that.

It seems to me that what you need to think about is actually limits on the size of these companies. Literally, limits on the number of users that they can have because when a network like that is small, it makes it easier to moderate, much easier to moderate, and the effect will tend to be less. You can usually say that if a network is well moderated and is not too large, then its effects will be minimal. You can have small networks that are not moderated whose effects will be very harmful as was seen earlier in August with the website Kiwi Farms, which has been hassling people for years but now seems to have been shut down.

There is an argument, I think, in favour of what I was just saying: that there are sizes beyond which you don’t allow the social networks to go. Arbitrarily, I think it’s about the size, perhaps, of Twitter, which is around 250 million users. For Facebook, that would mean it could cover the U.S. and Canada, but it would have to set up shop with a whole new network in Europe and a whole new network in the Pacific or something. It seems to me that the benefit of that is you don’t get the harmful externalities of people gathering together and being unmoderatable because moderating them is just impossible to do.

SEAN SPEER: You anticipated my final question, but I’ll put it to you still, if for no other reason than it’ll give you an opportunity to provide any closing thoughts or ideas that you want to impart to our listeners. Do you think social media has been a net negative for our societies? Would we, in your view, Charles, have been better off if they had never been created?

CHARLES ARTHUR: Well, speaking as a journalist, I’ve got to say that Twitter has been absolutely fantastic because it’s allowed me to find sources who I might not otherwise have been able to find. I probably speak for all the journalists in the world in saying that. In that sense, it’s been a net positive for me, but you can point to lots of things where it’s had really bad effects. I mean, not necessarily just Twitter, but Facebook in Myanmar, as I said, Facebook in Ethiopia, Facebook in Kenya even. These are countries where things have not been great.

I mean, even Facebook in the United States where groups got together with the Stop the Steal idea after the November 2020 election and that led to the January 6th insurrection. Even if parts of that were organized off Facebook, in other groups, the fact that you had so many people who were able to become, in effect, radicalized by this idea, is not a great thing for democracy, I don’t think. I feel that legislators are just about getting hold of this, that they’re starting to really realize they need to get some grip on what is going on, and that the networks themselves are doing the same. That they’re starting to realize that they’ve got a fire that they need to put out.

It’s really hard to say. We’re still in quite early days for these social networks, and it’s possible we’ve had all the good bits, we’re now in the so-so bits, and we need to take some action before we get into the bad bits. Or it might be the other way around. It might be that what we’re seeing now is all the bad things before we realize how we should use them, that we’re now in the learning phase and we’re going to move into the good phase where everyone only uses Facebook in a good way.

It’s always better to be safe than sorry. I think it’s better to take action and to make sure, rather than just to trust that everything will be okay, because it doesn’t always work out right if you do that. Has it been a net positive, has it been a net negative? It’s one that I juggle between, and I think that broadly, for society, it’s sort of been okay. It’s sort of helped people stay in touch with their families, but there have been these little pockets, like I say: you can point to Kenya, you can point to Myanmar, you can point to the insurrection and say, “Actually, that’s really bad, that’s really set us back as a society, as a civilization almost.” You’d say mostly good, but not entirely good.

SEAN SPEER: Well, for our listeners, who are similarly juggling that question, I recommend they read Social Warming: How Social Media Polarises Us All. Charles Arthur, thank you for joining us today at Hub Dialogues.

CHARLES ARTHUR: Thanks so much, Sean.
