Feb. 25, 2021

Paul Barrett on the False Claim that Social Media Censors the Right

Paul Barrett is Deputy Director of the Center for Business and Human Rights at the NYU Stern School of Business. He joined the center in September of 2017 after working for more than three decades as a journalist and author, focusing on the intersection of business, law, and society. Most recently, he worked for 12 years for Bloomberg Businessweek, and prior to that, from 1986 to 2005, he wrote for the Wall Street Journal. He is the co-author of a recent publication from the Center for Business and Human Rights titled “False Accusation: The Unfounded Claim That Social Media Companies Censor Conservatives,” obviously a very timely subject.

Liked the episode? Well, there's plenty more where that came from! Visit techpolicyinstitute.org to explore all of our latest research! 

Transcript

Tom Lenard:

Hello, and welcome back to TPI’s podcast Two Think Minimum. It's Monday, February 8th, and I'm Tom Lenard, President Emeritus and Senior Fellow at the Technology Policy Institute, and I'm joined by Scott Wallsten, TPI’s President and Senior Fellow. Today, we're delighted to have as our guest Paul Barrett. Paul is Deputy Director of the Center for Business and Human Rights at the NYU Stern School of Business. He joined the center in September of 2017 after working for more than three decades as a journalist and author, focusing on the intersection of business, law, and society. Most recently, he worked for 12 years for Bloomberg Businessweek, and prior to that, from 1986 to 2005, he wrote for the Wall Street Journal. He is the co-author of a recent publication from the Center for Business and Human Rights titled “False Accusation: The Unfounded Claim That Social Media Companies Censor Conservatives,” obviously a very timely subject. Welcome to Two Think Minimum, Paul.

Paul Barrett:

Thanks. Appreciate you inviting me to participate.

Tom Lenard:

I guess given the title, one doesn't have to read too far to find out what the bottom line of the study is, but there's obviously a lot more to it than that. So, if you would, can you tell us what you attempted to do in the report and what evidence you gathered?

Paul Barrett:

Sure. On the right, it is now conventional wisdom, an article of faith, that the major social media platforms censor and suppress conservatives. I set out to examine that claim, which had become increasingly prevalent and part of a variety of debates, ranging from, at the technical, wonky end of the spectrum, whether we should amend or revoke Section 230 of the Communications Decency Act, to, at the other end of the spectrum, whether conservatives are being marginalized or canceled or censored in all aspects of American society. So, this claim that the right, the political right, is censored on social media has become one of the central examples and central threads of that very broad claim made by everyone from former President Trump on down. The way we went about examining it was first to search for any definitive study that addressed the question, and we found none. Given the paucity of data that we have from the social media companies about individual decisions to take material down from their sites, I imagine it would be very difficult for a social scientist to study that question.

And, thinking about it another way, it's also difficult to prove that something is not going on. So, having determined that, we moved on and looked at what data we could find that bears on the question, even if only in a fragmentary way. We looked at, for example, information that's available from data analytics outfits like CrowdTangle, which is part of Facebook, and NewsWhip, which is an independent outfit. Those companies can provide you with useful data that allows you to compare the degree of engagement with posts from various pages. So, you can, in a sense, get a rough idea of how much people on the Internet are interacting with any given post, and when you look at that from a variety of perspectives, you can find, for example, that on any given day, a top-10 leaderboard of engagement with posts will generally show you that conservatives are very well represented, often predominant, in that list.

You can also look at lists of US elected officials, publications of various sorts, and so forth, and consistently what you find is that conservatives are prominent and sometimes predominant. One example: if you look at the height of the campaign last year, the couple of months leading up to November 3rd, and you compare the amount of engagement with posts from President Trump's page versus posts from now-President Joe Biden's page, of those hundreds of millions of interactions, Trump's page was getting 87% of the volume to Biden's 13%.

Taking those as suggestive data, what they tell me is that if censorship were going on, A, the platforms are not doing a very good job of censoring, and B, more seriously, it's highly unlikely, I think, that you would see that kind of conservative presence if there were any kind of systematic effort to keep conservatives from expressing themselves.

Tom Lenard:

So, is that big imbalance primarily because Trump was posting a lot more than Biden? I mean, does that account for the big imbalance in the numbers that you just mentioned?

Paul Barrett:

Trump absolutely was more prolific. 

Tom Lenard:

Right. 

Paul Barrett:

And that goes as well for other pundits and commentators and others on the right. Fox News is much more prolific in most cases than many other cable television outfits, but again, I offer that only as something that's suggestive that Donald Trump was not being censored on Facebook. 

Tom Lenard:

No, I understand. 

Paul Barrett:

That’s a very limited claim. We looked at interesting analyses that had been done by organizations ranging from the German Marshall Fund to Politico Magazine to The Economist, each of which had taken various time slices of online activity to analyze, and repeatedly, for the purposes of our report, what you saw was a heavy representation of conservative voices. None of that is definitive proof that there isn't some type of a bias, but it's suggestive. 

Finally, we turned to what most people actually allude to in talking about this issue, which is to say the vast array of anecdotal examples of people being kicked off Twitter or Facebook or YouTube, et cetera, and what I found when we took the prominent examples and lined them up over a period of years is that in the vast majority of cases, certainly with the most notorious, most celebrated instances, there was a reason apart from sheer ideology that people had been sanctioned, whether that's Alex Jones or, more recently, Donald Trump. A big part of the problem with the way the platforms handle these situations is that, unlike in celebrated cases like Alex Jones or Donald Trump, they often don't explain themselves, and so people who have experienced the sanction, or seen their ally or friend be sanctioned, aren't told exactly what they did wrong and exactly what rule was violated, and that leaves a lot of mystery surrounding the situation. Taking this whole body of information together... Oh, and the other thing that the anecdotal inquiry yielded was multiple examples of the platforms, chiefly Facebook, where the most journalistic reporting has been done, basically bending over backwards to accommodate participants on the right, to not kick them off, to erase black marks on their record, so to speak, for fear of the potential conservative backlash.

Sort of another component of the claim of bias is that every time someone who happens to be conservative gets taken down, there tends to be a significant backlash against the platform. So, there are examples of algorithm changes being made where someone points out that if we make this change, it's going to disproportionately affect conservatives because they're disproportionately violating the rule in question, and so Facebook makes the decision to back away from the change or to moderate or modify it so that it won't have that effect. Again, not a stance that one would expect from an organization that was systematically trying to get rid of conservative ideas.

Tom Lenard:

So, I mean, that gets to one... Obviously you have a number of recommendations, and time permitting, I'd certainly like to discuss several of them. What you were just talking about, one of them is greater disclosure of content moderation practices, an explanation every time a sanction... every time a platform sanctions a post or an account. I mean, how often does that happen? What type of a burden does that put on the platform? And this happens a lot, I assume.

Paul Barrett:

Yes. It happens a great deal, and it would be a burden on a platform, a kind of burden that they, I'm sure, wouldn't have been able to even conceive of before 2016. But since 2016 and sort of the great awakening as to the extent of the problem that the platforms experience, or are responsible for, I guess, depending on your point of view, they've been asked to, and to some degree they have agreed to, take on all kinds of burdens in order to clean up the sites, and I think this is another one that they need to embrace. They, you know, they are very bright people, and I can't tell you myself exactly how you would calibrate the algorithms so that each time an account was taken down or a piece of content was taken down, it would get stamped somehow: This is the rule you broke. This is how we found out about it, automated system or human intervention, and, you know, whatever else might be relevant. I think they’ve got to figure out a way to do that. I think that's something that users can reasonably demand, and I think it would go a long way toward preempting, you know, claims that “I've been personally victimized,” as opposed to, “Well, I stepped over the line.”

Tom Lenard:

Yeah, well, I think that's right. I mean, on Facebook, Facebook can probably bear that burden, but not every platform or potential platform is a Facebook, and so we also don't want to discourage entry into that space.

Paul Barrett:

I agree. And you know, if you were regulating this area of activity, either from within the industry or from the point of view of government, I imagine you could, as we sometimes do, set some type of threshold, so that an obligation like that comes into play when you reach a certain number of users or maybe a certain level of company revenue, and exceptions can be made for the smaller competitors.

Scott Wallsten:

So, this is more of a normative question, but, you know, whatever algorithms they use, some people will end up upset. You know, they'll leave up some things that some people think should be taken down and they'll take down some things other people think should be left up. And, you know, we see what happens when everything gets left up, you end up with a cesspool like Parler and, you know, sort of which way should the platforms err, you know, recognizing that they have the right to do whatever they want? If you were making that decision, would you err on taking too much down or taking too little down?

Paul Barrett:

I guess my answer to your question is they almost certainly need to take more down, from the starting point of today. I would say that their goal and their guiding principle should be to, you know, do it fairly and vigorously, leaving room for people to have political debates and disagreements and so forth, but trying to enforce rules that they themselves have set and are enforcing to some degree, for sure, but need to enforce more vigorously. And I just wouldn't take sides on, you know, which direction you would err in. There are going to be errors. I completely agree with your premise.

Tom Lenard:

But I mean, there also are probably, even when you get close to that line, differences in what people think is quote, “fake news” and what other people think is “fake news.”

Paul Barrett:

Right, and of course you don't have to take things down. If a candidate, three days after the election, says, “The election was rigged. I'm the real President today, and I continue to be the President, and someone was trying to steal the election from me,” you don't have to take that down. You can label it. Just say this is false, or this is false according to these sources, and leave it there, which is what they did with most of President Trump's false statements in the wake of the election. They didn't take many of them down.

Tom Lenard:

Right. Right. Well, so, you have a number of recommendations for the platforms, four or five, and then a number of recommendations in terms of public policy for the government, but this next one I found interesting, and I wanted you to talk a little bit about how it would actually work for the platforms: offering users a choice among content moderation algorithms.

Paul Barrett:

Right. Well, you know, I'm always operating on thin ice when I'm making these kinds of suggestions, as we are collectively at the center, because we don't have the benefit of being technologists ourselves.

All that said, as sophisticated a social media figure as Jack Dorsey has said that algorithmic choice is one thing he thinks should be explored in the future, that he thinks users of Twitter would embrace it, and that it would solve some of these problems. I too think it would be better if there were a Twitter Prime or Facebook Prime, which you'd get to choose, and, to go back to Scott's earlier question, going in, more is being taken down. This is a cleaner or fussier site, which doesn't brook the same type of insult culture that we've gotten used to, and if you want that, welcome. If you want the current site, you're welcome to use that.

Maybe you could even set up a system so that in order to get the site that's more vigorously policed, you'd have to pay a small subscription fee. I think all of this would communicate to the world that there are gradations, that the platforms, the companies that own the platforms, are making choices about what you're seeing at the algorithmic level, and giving more of that choice to the individual, it seems to me, would provide a venue that would potentially be freer of disinformation, hateful speech, and so forth, without forcing people to go in that direction. One danger, though, and I think Jack Dorsey has this in mind, is moving in the other direction. I think one of his ideas is that maybe it would be better to have very little content moderation, sort of a “Wild West” version of Twitter, and I'm not sure of the social utility of that.

Scott Wallsten:

It sounds like what you're describing is kind of a version of Reddit, where, I mean, there are, I don't know how many, thousands of subreddits, and each one has its own rules, and I don't think there's a lot of algorithmic decision-making, but each one has moderators and its own sort of rules and codes of conduct.

Paul Barrett:

Well, Reddit is another model. The volunteer moderators, the self-moderation, basically. I guess if you want that, you go to Reddit.

Tom Lenard:

So, I mean, I don't wanna spend too long on this, but with this, I mean, obviously, there's this concern among some people that people, you know, people just stay in their own bubbles and they only see stuff that is consistent with what they already believe. Would this reinforce that? You know, would somebody choose, “Well, I'll just choose the algorithm that filters out everything that Sean Hannity says, or similar things,” and the other person says, “I'll filter out everything that Rachel Maddow says,” and we'll never hear the contrary view.

Paul Barrett:

Yeah. You know, I suppose it has that potential. Anything that potentially provides an incentive for people who are at an extreme to move away from the general platform and go to a more extreme outlet, whether that's, “I don't like Facebook anymore because they're doing too much content moderation, so I go to Parler or to Gab, or I leave social media altogether, and I move to the world of end-to-end encryption and only talk to my buddies via Signal or Telegram. That's where we'll talk about how we're going to invade the Capitol next time.” Those are risks. That's sort of the risk of policing the main platform, and I acknowledge the risk.

And at the same time, I still think it's worth having reasonable rules and enforcing them and trying to address the problem of people collecting in bubbles elsewhere. That said, I don't think having a couple or three variations of the Facebook platform would terribly exacerbate this already existing problem you have in social media, where people believe things based on their prior views and more or less interact only with their ideological friends.

Tom Lenard:

What do you think of the Facebook Oversight Board up to this point?

Paul Barrett:

Yeah, I think it's a very interesting experiment. That's the first point. Second point is it deserves to have some time to operate. The people they've chosen have impressive resumes and seem like very worthy individuals to engage in this kind of task. 

You know, I think if you look at their first five decisions collectively, it's very interesting. These were not momentous individual cases, but in four out of five of the cases, they actually contradicted Facebook's findings and said, “Facebook got it wrong the first time around, and they need to flip and let the material stay up rather than be taken down.” There was an emphasis, kind of a theme that ran through those decisions, that the members of the Oversight Board were dissatisfied with the clarity of Facebook's own rules. You know, it'd be quite interesting if that inspired Facebook to seriously look at their very convoluted community standards, which are very hard for a lay person to wade through. At least that's been my experience.

And, you know, to get back to Scott's “which side do you want to err on” question, I sense, maybe, I don't know if I'm right, that the spirit of the First Amendment, the spirit of supporting free speech, runs strongly through this body, and that's... we'll see how that bears out over time, and of course they have a momentous decision to make in the near future on the fate of Donald Trump on Facebook.

Tom Lenard:

Do you have a prediction?

Paul Barrett:

I would say, if you were limited to the universe of their decided cases and that's all you had to look at, I would think that Donald Trump has more than a fighting chance of getting reinstated.

Because I would say that it would be quite possible to see the Oversight Board say, “You know what? Your rules surrounding incitement are kind of vague in this way and that way, and it's the President, and people... There's a huge First Amendment issue with his being able to speak, but also, in the broadest possible sense, just the exchange of views and people being able to hear what he has to say, even as ex-President, and therefore we reverse,” which I personally think would be a shame, but I'm not on the Oversight Board. So, I don't think they're going to ask me.

Scott Wallsten:

Since we're in the role now of inferring and predicting, and of course can't know whether we're right, what do you think is the reason that so many conservatives have adopted the view that they're being discriminated against when the evidence shows that they're not, and these are platforms that have been good for them? What do they get out of these claims?

Paul Barrett:

Well, the first and most important answer is that these claims rev up the base. That leaves a big question as to why exactly they wrote [inaudible], but, you know, I think President Trump, when he was President, tested this and related themes at his rallies and he got a big reaction, and so he did it again. That's one thing.

Second, looking back through history, attacking the media has been a go-to Republican strategy, at least since the Nixon administration. Of course, it was media of a different sort at that time; mostly the broadcast television media was Nixon's big obsession. But as you roll forward through time, you find the sense that, “Oh, those liberals in New York and Hollywood are after us again.”

Third, I think as Republican politics took its turn toward pseudo-populism, during the Obama administration particularly, with the coalescing of the Tea Party movement, the idea took hold of pointing, in ever more explicit terms, at forces in Washington and New York and elsewhere, the Ivy League, that are out to get you, that don't have your interests at heart. “You are losing your country. Make America great again. Take your country back.” Those are the common slogans, and I think as social media became prominent, exactly coincident with the Tea Party movement, the right began both to skillfully exploit social media and to use it as a target to rev up people who have a variety of grievances, some real, some imagined.

And, you know, Trump, I think, successfully stepped into that situation and melded his attacks on the legacy media, the “fake news,” the enemy of the people, with his very ironic attacks on social media, which itself had been a crucial part of his becoming a successful national politician.

Scott Wallsten:

So then, if you were to take that and think about what it means for their policy objectives, in reality, they shouldn't want much to change, right? Because the social media platforms have actually been pretty good for them. So, it makes their attacks on Section 230 seem a little hollow because they would be among those most likely to be hurt if there were changes, right?

Paul Barrett:

I think that's exactly right, and I think their attacks have been hollow. We saw that all of last year: Trump's Executive Order instructing the Commerce Department to go before the FCC and do something to screw up 230, it was never exactly clear what, combined with his Twitter calls to just repeal 230, and then culminating, during his time in office anyway, with the absurdity of his demanding that he wouldn't sign the Defense Authorization Bill unless Section 230 were revoked as part of it. You know, that's just another bloody shirt now. I don't think the content of it means anything when you're talking about criticisms from the right, and I think Ted Cruz and Josh Hawley will continue to bang on that drum, and it'll be interesting to see how that interacts with, I think, somewhat more serious-minded calls from some Democrats to amend Section 230. Very different motivation, and I don't know what will come out of that sort of slow-motion car crash.

Tom Lenard:

Let me ask you, we're getting close to being out of time, but there is a view out there, expressed in connection with the events of the last couple of weeks and what happened at the Capitol, that the culprit is the paid advertising model, and therefore the recommendation is to kind of get rid of the paid advertising model. What is your view of that argument?

Paul Barrett:

There is a sense in which that's true. The paid advertising model produces the intense focus by the platforms on user engagement, because they get paid for advertising according to degrees of engagement, and in order to maximize engagement, they, years ago, crafted their algorithms to accomplish that goal of maximizing engagement.

And it turned out that playing on emotions, particularly negative emotions, anger, fear, and so forth, maximized engagement, and so the algorithms were allowed to sort, rank, and recommend content with those goals in mind. The final element of all this, of course, is personal data, which is part of the advertising relationship, because the reason advertisers value being on Facebook so much is that you can target your advertising according to the data that Facebook is scraping off every minute. I think if you reset your algorithms with different priorities, including how authoritative information is and so forth, you can have very different outcomes and you could have a Facebook that looks very different.

The problem is that, as you say, the advertising model is their core business model. That is why Mark Zuckerberg is the billionaire that he is, and I suspect he's not going to change his basic business model. There is an alternative, of course. You know, there's the subscription model, but that would yield a far smaller product, because far fewer people, I think, would be willing to engage in the transaction of, I give you $50 a year to use Facebook, as opposed to now, when it's experienced as being a free service. Of course, it's not really free, because you give them your data that they use to help draw in advertising, but I think there's a lot to that. And yet the question is whether or how to accomplish that end, and, you know, I don't really see the means of doing that apart from some type of quite drastic government regulation that I think would be problematic, but it is kind of built in, this issue we're talking about.

Scott Wallsten:

That's interesting. You say that the platforms are designed to maximize user engagement. It seems like a reasonable thing to do, but then you're saying, you know, maybe they could be retooled so that it's weighted on how authoritative a given post is or something like that. What do you think? I mean, what would a social media platform like that look like? What would Twitter look like? How would Twitter be different if it did something like that? Would it end up looking a little more like a newspaper? So, it would be sort of curated content?

Paul Barrett:

I wasn't imagining a human curation site, but the answer to your first question, “Would it be a little more like a newspaper?” is yeah. I think if you set up a spectrum of sources or means of communication, at one end would be, you know, the telephone company: pure utility, don't hold them responsible for anything anybody says when they're on the telephone. The other end is, you know, the New York Times: completely curated, nothing flows into those pages or onto that website without New York Times people essentially approving of it, and it's held legally liable for misstatements and defamation.

Facebook falls now somewhere in between, but if you adjusted its business model so that it did try to steer more responsible, authoritative, less sensationalistic information toward its users, I suppose it would move a notch or two on that spectrum in the direction of the New York Times. I don’t think you’ve got to go to that extreme. It couldn't go to that extreme because billions of pieces of content go up on it every day. What system, you know, could police all that? No system could in the first instance.

Scott Wallsten:

So, what do you think should happen with Section 230, if anything? We started in this really weird spot where both then-candidate Biden and President Trump wanted to get rid of Section 230, and then, like you said, there are Democrats who have ideas for reforms of it. What do you think should happen with Section 230? Where should we go with it?

Paul Barrett:

Right. Well, I would start with the premise that, I'm guessing, President Biden and his team would be willing to rethink his relatively casual statements during the campaign. In other words, I doubt he's a die-hard revoke-Section-230 person. If you read the interview in the New York Times that's the main source for that, he's actually fulminating about Mark Zuckerberg, and it's almost as if Section 230 works its way into the comment obliquely. Anyway, so that's one thing. I think we have a possibility that something reasonable and moderate could be done, and my humble suggestion would be two potential methods for retaining the core liability protection that Section 230 provides but using an amended law as a source of leverage to get companies to be more vigorous, basically, in their content moderation and fact-checking.

Method number one would be using the law as a quid pro quo. You get to continue enjoying this protection, but you have to demonstrate certain things. Such as, for example, that your content moderation system, that the algorithms you use, are not set up to skew toward sensationalistic and false and inciting material, but instead are set up to push people toward authentic material and the like. You'd need to be much more of a computer scientist than I am to explain how that would work in practice, but I think you could impose on the companies a number of obligations for how they moderate content and how they discuss how they moderate content in public, as we were talking about before, to explain it better and so forth. Method number two...

Tom Lenard:

Who would you have to demonstrate that to?

Paul Barrett:

Yeah, well, that's a good question, and there would be two options. You could demonstrate it to the newly invigorated Commercial Internet Bureau of the Federal Trade Commission. So, that's another piece of legislation. Or you could go even further and say, “You know what? The commercial internet is not really regulated in any meaningful way at all within the Federal Government. We need a digital platform agency or digital regulatory agency.” I'm not imagining that's going to happen in the first hundred days of Joe Biden's presidency, but I think it's actually worth discussing, because out in the future, I think there's a very serious discussion to be had about why we sort of anomalously don't regulate this influential industry.

And then, just real quickly, the other approach would be to put in place additional carve-outs in Section 230, by which I mean areas where there is not immunity. Right now, the law came with carve-outs for violations of criminal law and violations of copyright law, and in 2018, violations of certain sex trafficking laws were added.

You could add to that list of areas the immunity doesn't apply to: civil rights violations, harassment, cyberbullying, whatever categories you think are the most important. You could limit the reach of the law for claims related to those areas, and the platforms would thereby be exposed to legal hazard and presumably would have a strong incentive to step up content moderation in those areas.

Scott Wallsten:

So, the carve-outs, I mean, those seem to have worked, I guess, to a lesser or greater extent for different things, but there's precedent for that. But having the government approve companies' algorithms, that seems to start infringing on First Amendment issues, doesn't it?

Paul Barrett:

It is a big, big concern and a very valid one. You know, which is why you might want to frame the obligation more in terms of transparency: show us more, not all. You have trade secrets. We understand. Show us, somehow, let's come to… let's find a way for you to show us what you're doing when you're doing this content moderation in connection with, say, hate speech. How does it work? We actually, on the outside, don't really know in any detail how it works, and obliging the companies even just to show us might create an incentive to do it better.

Tom Lenard:

Okay, this really is the last question. Could you talk a little bit, for those who may not be familiar with the center, about what the Center for Business and Human Rights at the Stern School does aside from publishing your paper?

Paul Barrett:

Sure thing. The center was founded in 2013 by Mike Posner, who, before coming to the Stern School of Business to become a professor and to start this center, had for many years, like 30 years, run a human rights group, which started out as the Lawyers Committee for Human Rights and morphed over time into Human Rights First. 

Then during the first Obama administration, he served as Assistant Secretary of State for Democracy, Human Rights, and Labor. When he got done with that tour of duty, he went to NYU, took his professor's job, and simultaneously started this center as a tiny operation. It has grown into a slightly bigger organization, and it looks at selected industries, asking the question, “How could these industries improve their conduct so that they improve human rights overall and don't create harmful side effects for their employees, the communities, or even the societies around them?” For example, we have kind of a specialty in looking at the clothing industry in places like Bangladesh and immigrant labor in the Middle East. We have an ongoing research project on the effectiveness of ESG criteria in all industries.

And starting with my arrival, or actually slightly predating my arrival, Mike began to have the center look into the social media industry with an eye toward problems like extremist recruitment by ISIS and Russian interference, and that focus has sort of broadened since I arrived.

Tom Lenard:

Well, great. I want to thank you, Paul, for taking the time to spend with us and we really appreciate it. It was a very interesting discussion. Thanks.

Paul Barrett:

My pleasure. Thanks for having me. 

Scott Wallsten:

Thanks so much.