July 16, 2020

Section 230 Series: Center for Democracy & Technology's Alexandra Givens

Alexandra Givens is President and CEO of the Center for Democracy & Technology. Prior to CDT, Alexandra taught at Georgetown Law School where she founded the Institute for Technology Law and Policy and led Georgetown's Tech Scholars Program. She was also a founding leader of Georgetown's Initiative on Tech and Society. She previously served as Chief Counsel for IP and Antitrust on the Senate Judiciary Committee working for its then Chairman and Ranking Member, Senator Patrick Leahy. She developed legislative and oversight strategy on matters including patent reform, federal trade secrets legislation, net neutrality, First Amendment issues surrounding online speech, access to medicines, and oversight of mergers and antitrust policy. She began her career as a litigator at Cravath, Swaine & Moore in New York City and taught for five years as an adjunct professor at Columbia University School of Law. She holds a B.A. from Yale and a J.D. from Columbia. She serves on the board of the Christopher and Dana Reeve Foundation and is a mayoral appointee on DC's Innovation and Technology Inclusion Council.

Liked the episode? Well, there's plenty more where that came from! Visit techpolicyinstitute.org to explore all of our latest research! 

Transcript

Scott Wallsten:

Today, we’re delighted to speak with Alexandra Givens, President and CEO of the Center for Democracy and Technology. Prior to CDT, Alexandra taught at Georgetown Law School where she founded the Institute for Technology Law and Policy and led Georgetown’s Tech Scholars Program. She was also a founding leader of Georgetown’s Initiative on Tech and Society. She previously served as Chief Counsel for IP and Antitrust on the Senate Judiciary Committee working for its then Chairman and Ranking Member, Senator Patrick Leahy. She developed legislative and oversight strategy on matters including patent reform, federal trade secrets legislation, net neutrality, First Amendment issues surrounding online speech, access to medicines, and oversight of mergers and antitrust policy. She began her career as a litigator at Cravath, Swaine & Moore in New York City and taught for five years as an adjunct professor at Columbia University School of Law. She holds a B.A. from Yale and a J.D. from Columbia. She serves on the board of the Christopher and Dana Reeve Foundation and is a mayoral appointee on DC’s Innovation and Technology Inclusion Council.

Sarah Oh:

CDT filed one of the first, if not the first, complaints in D.C. federal district court seeking a declaratory judgment and injunction against the President’s May 28th Section 230 Executive Order. The basis of CDT’s complaint is that the EO violates the First Amendment and has a chilling effect on free speech. Alexandra, could you tell us a bit more about CDT’s legal action here?

Alexandra Givens:

For us, when we saw this Executive Order come out, really everything about it, from the context in which it was issued, to the language it uses, to the scattershot approach that it takes, shows that it’s a clear effort to deter social media companies from fighting misinformation and voter suppression on their services. To us, it really is a vital question whether social media platforms feel empowered to moderate misinformation and voter suppression. And the reason why that matters so much is in particular when we look at the factual context in which this arose and the fact that it’s an election year. In the case, our central claim is that the EO itself violates the First Amendment. First, it was issued in clear retaliation against Twitter for the steps that it took to fact-check misleading tweets that the president issued about mail-in voting, right?

But second, when you look at its framework going forward, what it’s doing is threatening platforms that if the government disagrees with their content moderation practices, they could face penalties and retribution, including stripping Section 230 protections or withdrawing government advertising funds. That’s really problematic. The way we see it, no matter how you feel about whether social media platforms are doing enough to moderate content on their platforms, and that is an active debate that is happening, we should all be deeply worried by the prospect of government agencies or state attorneys general having a cause of action to punish services for their editorial choices, especially when those editorial choices are things like fact-checking speech by the president during an election year.

Sarah:

What’s fascinating to me about the EO, and just this First Amendment angle to Section 230, is that there’s kind of a misunderstanding between big tech and government about First Amendment jurisprudence. Could you give us a 101 on how we should think about the First Amendment, government speakers, and big tech or Twitter?

Alexandra: 

There’s a lot of confusion. As a constitutional matter, the First Amendment prohibits government actions abridging the freedom of speech, right? With an important focus on who the actor is there. It’s about government actions. So when we see the Executive Order, it’s really quite perverse to see the EO wrapping itself in this mantle of free speech and the First Amendment. What you see is the president retaliating against a platform that sought to provide more factual context to misinformation that he was spreading about the election. And you see an effort to have government actors begin evaluating whether or not platforms are being politically neutral, and withholding taxpayer funds if they don’t like a platform’s moderation policies. That’s a recipe for the government to strong-arm platforms whose moderation policies it doesn’t agree with.

So that’s what the First Amendment is focused on trying to stop, right? What they’re talking about there is deeply troubling for the First Amendment. I think it’s really important to try and draw more nuanced distinctions between government speech on the one hand and things like the editorial choices that platforms are making on the other. One of the reasons why that distinction is so important is that a lot of the moderation that happens on social media services and online services happens because platforms have the right to moderate; they don’t have the same restrictions that government has when it comes to curating content on their platforms. For example, most disinformation that we worry about as a society really is protected speech that the government can’t lawfully restrict, but platforms are able to take action against it. And I would say that we want them to do so, particularly in this type of context. Again, that’s why that distinction really matters.

Scott:

Do you think – I’m not a lawyer, I’m an economist, so I’ll ask the questions that maybe are not coherent legally – but we used to think of the First Amendment as being more or less sacrosanct. There’s been a lot of opposition to the EO, but there seems to be more agreement across the left and the right that they want to see pressure on big tech to censor content, maybe not the same content. Republicans are worried that there’s a bias against their posts, even though the data seems to show that’s not the case, and the left is concerned that misinformation posted by people like the president stays up. Do you worry about this?

Alexandra:

It’s true that platforms play just an enormous role in our society today, right? When we think about the ways in which we connect with one another, the ways in which we find community, the mediating function of the platforms is just enormously important. I think when we look at that, there are real concerns about the type of speech that can be amplified on platforms and what they’re doing to respond to it. Those concerns are real. And really importantly, the harms that people are worried about aren’t felt equally by everybody. So when we look at the lessons from the 2016 election, for example, you look at what the Russian Internet Research Agency was doing: Black audiences accounted for over 38% of the U.S.-focused ads and a far higher percentage in terms of the click rates for what they were doing. You can see kind of conscious manipulation.

All of that really does create very significant concerns for us to worry about. That said, we still need to look at the protections of the First Amendment and why they matter and why that framework is in place. Which is to say, when we think about the responsibilities that we put on platforms to deal with these issues, we still need to think about what happens when you have over-enforcement and the negative externalities that flow from that. So the way I typically think about things is that yes, we want the platforms to do more. We want them to be thoughtful. We want them to have clearly articulated policies, and that’s really, really important. But, very critically, we need to think about where the impetus for that comes from. When you start moving into some of the types of tools that people are talking about, from legislation or from this Executive Order, they’re so ham-fisted that they end up having really negative consequences that we have to worry about.

Sarah:

After the EO, focus now turns to the regulatory agencies and also Congress. What do you think about Section 230? How, if at all, should it be reformed? Both presidential campaigns have expressed desires to change it, like Scott said. Does CDT have priorities in this area of Section 230 reform?

Alexandra:

I start from the first principle that Section 230 plays an essential role in the ecosystem, and I think one that’s not always that well understood because it’s become such a tagline, a kind of headline-level debate as opposed to a really nuanced conversation. And to me, that importance isn’t just about the past, as some people have tried to frame it, you know, protecting an emerging internet; it actually has a continuing role today creating the incentives for platforms to act. For me, the reality is we want to encourage services to host a wide range of content generated by users, right? When you think about the ways in which people communicate and connect with each other, that service really matters. And that is not because we care about what the tech companies think per se, but just because of the space it creates for people to be able to find each other and find community.

At the same time, of course we want platforms to be able to take proactive steps to address harmful speech and to create a more structured, hospitable environment on those services. And Section 230 helps them achieve that balance. So I think that often gets mis-portrayed, and I think that balance, and the space that 230 creates for platforms to do that, is really, really important. When people reach for the tagline of, “oh, let’s just repeal it,” they’re kind of forgetting, or aren’t really appreciating, the full benefit of that balance. And again, the really critical piece is that it’s the 230 immunity coupled with First Amendment protections that lets platforms curate what is happening in their space in a way that can be very useful for people who are being targeted by hate speech or voter suppression efforts, for example. So that’s where I start from. Now at the same time, do I think that platforms can and should do more? Yes, for sure. And CDT has been really active in that space for a long time, and I’m happy to talk about some of the things we’ve pushed for. But, to me, I just get very cautious when we start to think about what the legislative solutions look like, because legislation is such a blunt tool for what is a really delicate balance, with equities on each side of that debate.

Scott:

How is this debate playing out in other countries, where they may not have something like the First Amendment or something like Section 230, but the underlying debates about big tech at least are the same? When you talk with your colleagues in other countries, how do the things that they do differ from the things that you do, when your objectives are maybe very similar?

Alexandra:

Yes, we spend a lot of time looking at what’s happening in Europe; CDT has an office in Europe and actively participates in those conversations. And what you see is an increasing movement to try and address harmful content through regulatory measures, often with pretty significant consequences for free speech. One famous example that’s been covered in a number of different reports is all of the activity that stemmed from European actions around online terrorist content, which put very strong incentives on platforms to be aggressive in their takedowns. Again, a very important goal, right? There’s no debating the goal of that at all, it’s a really deeply important objective. But as a consequence, because of the fear of government oversight and the level of penalties that were being discussed, et cetera, what we saw was platforms gravitating to the easiest solution, which is overbroad takedown policies.

In that instance, you had multiple NGOs who had been posting images of war crimes for recording and authentication purposes having their content taken down because it violated the policies. When you think about, again, the broader balance of equities and the type of environment that we want to achieve online, that does not seem like the desired outcome. So again, to me, that’s an important lesson about why we need nuance here. And we have to acknowledge that when you add that layer of legislation, or kind of government action as the hook, very frequently the tendency is for companies to just take the safe road, and the safe road comes at the expense, in particular, of marginalized communities that are trying to have their speech heard online.

Sarah:

With American First Amendment jurisprudence, what’s so ironic is that government speakers are using Twitter so freely, and they might be the first to be taken down if it’s regulated, which is part of the irony, or the political theater, in this current debate.

Alexandra:

For example, there are increasing conversations about reviving aspects of the Fairness Doctrine, right? So should social media platforms or services have this obligation to give equal airtime? To me, that strikes me as an idea that’s just particularly ironic in a way, because the Fairness Doctrine was hardly uncontroversial for the many years that it was in place. But it also just helps illustrate, I think, a couple of different things. One, when you think about translating from the broadcasters to where we are now, in terms of internet outlets, it is such a different universe to even think about what must-carry rules or what equal airtime would look like. But you also just run straight into these really tough questions, like who would be the one to police that, right? How on earth would you enforce that standard?

What type of chilling effect does it have when people are trying to air the voices of activists on one side of the debate, for example? So there are just, again, really serious consequences that flow from that. It’s a simplistic headline that can look appealing, but like so many things in this space, when you start to unpack it and think about the true impact, about who the arbiter would be and what the effects are in terms of the speech that is carried, again you would move toward the mean, and the mean excludes voices like Black Lives Matter protesters, right, as much as it does people on the far right. So again, I think it’s just another area where you have to be really careful.

Sarah:

Do you think the techlash gets mixed up with this debate? I mean, obviously it does, tech is getting big and powerful, but not quite as powerful as the government. How do you see other arguments bleeding into Section 230 or First Amendment concerns?

Alexandra:

I think you’re exactly right. Part of what people are responding to is, again, the really significant role that platforms play in our society as mediators of information and of people’s ability to come together. And those are very real concerns when we look at the fracturing of public discourse, of kind of a coherent public conversation, into many different filter bubbles and channels. So I think people are rightly worried about that and look at the agents of change, and the agents of change have in many ways been the arrival of these online services that allow us to communicate in this way. So I think that’s right, but I think also what happens here is that conversations that really might make more sense as conversations about consumer protection frameworks, privacy frameworks, and competition concerns are all getting wrapped up in this conversation around what speech should and shouldn’t be allowed on these services.

That proves really complicated because it pulls those factors together in a very muddying way. The separate point that I’ll make here, and this is what I was alluding to earlier, is that to me there is a distinction between what we want to see the platforms do and what the right way is to achieve those goals. And that’s where I think conversations about legislative reform miss the mark, in part because they butt up so very quickly against the First Amendment. The point I was making earlier is that platforms actually can do a lot more to moderate discourse than Congress could ever force them to do, because of the First Amendment and the inability of the government to regulate speech in that way. And so to me, I actually think a healthier place, where the real focus of dialogue needs to be, is what does the social pressure look like to really force change by these platforms themselves as to how they’re approaching these questions?

I should say, I don’t come to this as a deregulatory person. So this isn’t just kind of a standard song and dance of, oh, you know, let’s escape legislation and do voluntary initiatives. This actually is a genuine concern about how we get to the best outcomes for society. And on this particular question, I think there is a lot to be said for ongoing public dialogue around the types of behavior we want the platforms to engage in, and frankly for allowing some room for experimentation and differentiation between them, as opposed to the one-size-fits-all approach that would come through a legislative framework.

Scott:

Do you think that there is an answer that the tech companies can have that would be satisfactory? They either have to err on the side of taking down things that should stay up or leaving up things that should come down, and whatever they do, somebody is going to be unhappy because our preferences are so diverse. I mean, is there even a resolution possible?

Alexandra:

People are always going to be upset with the results of individual decisions, right? But one thing that we focus on a lot at CDT, and that I’ve spent a lot of time thinking about, is at a minimum having better clarity around the processes and trying to get more buy-in for what those processes look like. That isn’t a foreign idea, right? That’s how our justice system works. Every single time a case is decided, somebody leaves the courtroom upset with how that case was decided, but the hope is that at least we have faith in the institution. We know the institution is applying a uniform set of rules, and we have some trust that the system has worked in its way. So, you know, some key things that I think we really need to see happen, and that are beginning to happen in some spaces but certainly need more work: first is just for the platforms to acknowledge the vital role that they play in the ecosystem and that they do have an active responsibility in mitigating online harm.

And then starting to think more about what these processes are, right? So having clear and consistent rules and policies about what moderation practices should look like, and applying them fairly across the board. I think having processes in place to help people appeal removals and decisions to keep up content is important, so that people do feel like they’re being heard, and that it’s not just an automated content moderation system that is impacting their speech. And the other piece of this that I do think is really important is for platforms to consider the range of different harms, the types of things that we are worried about happening on their platforms, and calibrate responses that are suitable to each particular issue, right? So acknowledging that your response to misinformation may be different than it is to hate speech, which may be different than it is to nonconsensual pornography, and thinking through in each instance what the right response is there, and making sure that they have affected stakeholders at the table as they build out what those processes look like. I think that’s a critical component just to get more buy-in into the system.

Scott:

This may be pushing this in a direction that you don’t want to go, or that doesn’t actually call for it, but do you think that our existing institutions are capable of, are adequate for, dealing with an environment like that? So where a firm has a very clear set of rules for how it will moderate and make its decisions, everything that you described, are we set up to somehow deal with that? Is the FTC the place, does it become a consumer harm because a firm may or may not have violated its own terms of service, or do you lean more toward needing some kind of agency focused specifically on digital firms?

Alexandra:

That’s an interesting question. To the extent that the enforcement mechanism here is that XYZ platform lays out its terms of service and then faces consequences when it deviates from them, the FTC does traditionally move in that framework. I’m a strong advocate for strengthening the FTC. I think they need far more resources than they get. They’re like this tiny guy in the corner that just fights way above their weight class, right? And particularly as so much of consumer harm now does relate to the online environment, they just need far more resources to deal with that. For me, the jury is still out on whether there’s an advantage to creating a new agency in that space or just empowering the FTC. I’m a little bit of an institutionalist, so my natural tendency is to say, let’s strengthen what we have and make it better, as opposed to just creating something new from whole cloth, which can be slightly confusing in terms of where the responsibilities lie.

But either way, I do think it’s an important conversation to have, because what it is doing is really forcing lawmakers, people in civil society, and others to sit down and think about what goals we’re trying to achieve here. So, for example, in the conversations around a digital services agency, sure, perhaps content moderation comes up in that conversation a little bit, but to me it’s actually much more about privacy issues and other types of consumer harm that result from online services. And I do think the time is really ripe for a conversation around what our enforcement and oversight look like in those spaces and how we best approach those questions.

Sarah:

Part of the solution is algorithmic tools. There’s so much data flowing through these platforms; shouldn’t there be some algorithms that can filter through the user-generated content? What do you think about algorithms and how, you know, our current institutions handle them?

Alexandra:

Sure. On the one hand, it’s very appealing, right? Particularly when you read the horrific stories of the emotional toll that content moderation takes on the people charged with doing it. There’s a new group supporting them that was founded a couple of weeks ago, which is excellent; I’m looking forward to hearing what recommendations they come out with on how to improve the situation for human content moderators, or trust and safety experts. So in that regard, AI, I think, is in some ways appealing. On the other hand, I do think it raises real questions, and we need to proceed with caution as we think about its use. CDT, for example, just a couple of months ago sent a letter to a lot of the online services, as many of them moved into increasing use of AI as a result of the pandemic and having to send people home, saying this is a real opportunity to actually study what’s happening and what the consequences are.

When you look at how algorithms learn, they learn from seeing existing patterns and then reinforcing the mean, and there’s documented evidence that that ends up taking a toll, for example, on voices that, you know, sound different, right? So the speech of African Americans is read differently by the AI and may be more likely to be flagged. That’s a real concern when we think about, again, the importance of voices being able to express themselves on social media sites. So from a free expression perspective, I think it needs really careful thought to, again, look at that flagging system. And I don’t see a world where the AI takes over, right? Because part of what we need here is a stronger belief in the due process that’s associated with takedown and keep-up decisions, and humans don’t feel that way if it’s the AI making the decision, right? You need nuanced value judgments, and we need to have faith in whoever is making those nuanced value judgments.

The other part where I do think new technologies may be more useful is the use of hashing technology once content has been deemed to be truly violative of terms of service and you need quick takedown. Obviously hashing technology is deeply important there, and I actually think there could be even more creative and thoughtful use of it in those instances. We see it now, for example, with terrorism-related content, where that technology is used frequently. I think people have concerns sometimes about how it’s used and think it could be used better. But nonconsensual pornography is another instance where one of the obligations that I think is important for the platforms and for advocates to be thinking about is just how to create a more streamlined experience and a faster response for people who have been victimized and are trying to have content taken down quickly, while still allowing for due process protections. How do we get that content down as fast and as consistently as possible? I think we need an all-hands-on-deck moment to be even more responsive than the platforms are being now.
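
To make the hash-matching idea concrete, here is a minimal Python sketch. It assumes a hypothetical blocklist of digests for files already judged violative; real deployments typically rely on perceptual hashes (for example, PhotoDNA-style fingerprints) that can match re-encoded copies, whereas this simplified version only checks exact SHA-256 digests.

import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of an uploaded file's bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical blocklist of digests for content already judged violative.
# Built from placeholder bytes here purely so the example runs end to end.
KNOWN_VIOLATIVE_HASHES = {sha256_digest(b"placeholder: previously removed file")}

def should_block(upload: bytes) -> bool:
    """Flag an upload for fast takedown if it matches known violative content."""
    return sha256_digest(upload) in KNOWN_VIOLATIVE_HASHES

print(should_block(b"placeholder: previously removed file"))  # True: queue for takedown
print(should_block(b"an ordinary, unrelated upload"))         # False: normal review path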

Scott:

I assume that something like that has to use algorithms and AI, just because the scale is so enormous. Do you think we yet have the ability to monitor the types of biases that algorithms and AI can lead to? You had a piece in Slate, I think, about bias against people with disabilities in AI. There’s this great book by Janelle Shane who talks about all of the really bizarre outcomes AI can have. It could end up biased against attributes we don’t even really know exist until the AI decides it’s biased against them. How do we police that?

Alexandra:

I agree it’s a real problem. One of the things that I’ve written about, including in that Slate piece you mentioned, is that when you go beyond the traditional racial categories that the census uses, for example, auditing for bias is actually really hard. Right now there’s a big movement, there’s been a big movement in recent years, to raise awareness about the potential for bias in AI. There are conferences, and more professorships than ever being created, to study this important issue. We’ve done a really good job raising awareness. Now, I think it raises really significant questions around how you actually detect these problems and how you respond to them. On the detection piece, one of the areas that I’ve studied is the use of algorithmically informed decision making. So in the employment context, for example, with hiring tools that screen applicants, one of the ways that data scientists will tell you they can audit is by, you know, putting in a certain number of people and seeing whether African American candidates make it through at the same rate as the Caucasian candidates, you know, within rounds at the same rate.

So you look at that as an evaluative measure. Okay, sure. The same thing could be done for speech in theory, but then you start to think about far more nuanced categories, right? If you’re trying to run that A/B test to see how someone with a disability is faring, what does your sample set look like? Which disability are you focusing on? It’s an infinite spectrum; that’s literally how we think about the disability community. And so it’s far more complicated. The same thing happens with gender: you see all of these articles about gender bias in AI against women, and you think, well, sure, you can maybe measure this with traditional categories of gender, but what about nonconforming or nontraditional gender categories? I think it’s a really important problem. And so then, pulling it back to tie up the loop and bring it back to content moderation practices: I think there’s a lot that we just don’t know about how automated tools that are flagging content may end up marginalizing voices. So again, that’s why having avenues for people to flag concerns, and having confidence that those concerns actually will be heard and reviewed, even if that requires, you know, human review at a massive scale, is deeply important. Again, just because of the central role that these platforms play in our lives.
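
To make the kind of pass-rate audit described above concrete, here is a minimal Python sketch using made-up screening outcomes. The group labels, numbers, and helper names are hypothetical; a real audit is far more involved and, as Givens notes, breaks down for categories that lack clean sample sets.

from collections import defaultdict

# Hypothetical screening outcomes: (group label, passed the automated screen?).
# The data and labels are made up purely to illustrate the comparison.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

def pass_rates(records):
    """Return the fraction of candidates in each group who passed the screen."""
    passed, total = defaultdict(int), defaultdict(int)
    for group, ok in records:
        total[group] += 1
        passed[group] += int(ok)
    return {group: passed[group] / total[group] for group in total}

rates = pass_rates(outcomes)
print(rates)  # {'group_a': 0.75, 'group_b': 0.5}

# Comparing each group's rate to the highest rate is one common screening check;
# a large gap is a signal to investigate the tool, not proof of bias by itself.
best = max(rates.values())
print({group: rate / best for group, rate in rates.items()})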

Sarah:

That’s a fascinating concept. We still need human review, but we need computers to handle the volume. There’s no way a human can check every social media post; you do need computational power to filter content. Do you think there is a baseline, or are we always talking about the Nirvana fallacy? The perfect solution, I mean, there is none, because we’ve never seen this much data before. How would you describe a reasonable expectation for a baseline?

Alexandra:

That’s a great question. This is going to be a case of constant striving, in part because the world outside is a horrible place, right? I mean, it’s a biased place. People spew hate speech all the time. We aren’t civil to each other. So it is not a surprise that when you move to the online environment, those same issues are not only replicated but amplified. So yes, I think it is always going to be a problem. That doesn’t mean that we shouldn’t try and shouldn’t keep pushing forward in really aggressive ways to seek what the right balance is going to be. Again, I come down to an increased focus on the processes these platforms are using, in an effort of kind of constant refinement, to see what they can be doing to move faster, to move better, to respond to really egregious speech.

At the same time, to set up those review mechanisms for people who feel like they’ve been wrongly taken down, so that we have those kinds of checks and balances in the system to make people feel that they’ve been heard. I think also we are going to have ongoing conversations around what the governance mechanisms for those look like, right? When you look at platforms that really do center the speech of billions of people at a time, thinking through who the right arbiter of decision making is within that ecosystem is a really troubling and hard challenge, one I think the companies themselves are obviously grappling with as well. I think in many ways they’d like to not be making these decisions with all the weight resting on just their shoulders. So I think there’s a really important ongoing conversation to be had there as well.

Obviously, Facebook is doing this with the Oversight Board, and civil society is weighing in on the pluses and minuses of that approach. But to me, that’s a deeply important public conversation. In particular, seeing what Facebook is doing versus what Twitter is doing versus what others are doing, there’s an important experimentation happening here in terms of approaches, which I think matters as we keep trying to push forward to figure out the right balance to strike. In particular, acknowledging that not all platforms are exactly the same, right? Sometimes they are catering to different audiences, and they could perhaps have mechanisms that reflect that reality.

Sarah:

Great. Well, thank you so much, Alexandra. It’s been an enlightening conversation, and definitely good topics from somebody who’s been in this field for a long time. Thanks for joining us today.

Alexandra:

Thank you for having me. I really appreciate it.