June 24, 2022

Joel Waldfogel on Privacy and Innovation

Joel Waldfogel is Associate Dean of MBA programs at the University of Minnesota's Carlson School of Management. He was previously the Ehrenkranz Family Professor of Business and Public Policy at the University of Pennsylvania's Wharton School, where he served as department chair and associate vice dean. Prior to Wharton, he was an associate professor of economics at Yale University.

Liked the episode? Well, there's plenty more where that came from! Visit techpolicyinstitute.org to explore all of our latest research! 

Transcript

Scott:

Hello. And welcome back to Two Think Minimum, the podcast of the Technology Policy Institute. It’s Thursday, June 9th, 2022, and I’m Scott Wallsten, president and senior fellow at TPI. I’m here with TPI president emeritus and senior fellow Tom Lenard and senior fellow Sarah Oh Lam. We’re delighted to have as our guest today, Professor Joel Waldfogel, to talk about a fascinating new paper he has on privacy and innovation. But before we get to that, let me properly introduce Joel. Joel Waldfogel is associate dean of MBA programs at the University of Minnesota’s Carlson School of Management. He was previously the Ehrenkranz Family Professor of Business and Public Policy at the University of Pennsylvania’s Wharton School, where he served as department chair and associate vice dean. Prior to Wharton, he was an associate professor of economics at Yale University, and all of that led him to being here today with us. Joel, thanks for joining.

Joel:

Hey, my pleasure.

Scott:

So let’s just start off pretty open ended. You’ve got this paper titled, “GDPR and the Lost Generation of Innovative Apps.” There’s a whole lot in that, just in that title. Tell us about the paper.

Joel:

Sure, happy to. GDPR is this well-known regulation that went into effect a little while ago in Europe, and actually not just in Europe, because it binds on operations in Europe or operations that have customers in Europe. So it essentially binds on the whole planet. What it does is make it more costly for firms, in our case the firms that make apps, to participate: they need various kinds of compliance, and that by itself is not necessarily a bad thing, but meeting the terms of the privacy regulation imposes some additional costs on firms and products. Now this has two kinds of potential consequences, one of which seems like a big deal but really isn't, and one of which is hard to see but may be a huge deal.

The part that is sort of obvious is that there turns out to have been an enormous exit of apps from the Android app market, and it coincides pretty closely with the imposition of GDPR. People look at this and say, oh, that can't possibly be, or whatever. But the thing about it is that it's frankly not all that important. What's going on is that apps that turn out to be unsuccessful exit because their developers don't want to face the costs of coming into compliance. Although it's an enormous number of apps, something like a million, I don't recall exactly, it's not a big deal because they don't have many users; that's why it's not worth bringing them into compliance. What turns out to be a big deal, from the perspective of this paper, which by the way is joint with three co-authors, Rebecca Janßen at ZEW, Reinhold Kesler at Zurich, and Michael E. Kummer at East Anglia, is something else.

Anyways, the big deal here is the potential impact on further entry. That is, do developers continue to bring new apps to market? What we do in the paper is, I think, a lot of careful data work to look at the impact of the regulation on the tendency for developers to introduce new apps. And app entry falls by something like 50% in the couple of years after the imposition of the GDPR. So I'll breathe for a second and then I'll explain why that's a big deal. <Laugh> If there are questions, I'm happy to stop.

Scott:

I mean, it certainly sounds like a big deal even without the explanation.

Joel:

Well, it could be or could not. One of the perspectives I have in this paper with my co-authors, but also in some other work I've done with other folks, is to think about what happens when something causes us to have a lot more products, for example when something reduces the cost of entry, as with digitization. I've done a lot of work on digitization and its impact on cultural industries. In that context, digitization caused piracy on the one hand, which was bad news, but it also caused reductions in costs, which was good news in the sense that it was much cheaper to introduce new products. And so we had lots of new entry in music, in movies, in television shows, in books. Now, to stick with the point, what's important is this: suppose you're in a context where it's hard to predict which new product will turn out to be valuable or successful.

And by the way, that is the context we live in, for sure in cultural goods and also, by the way, for apps. Well, in a context like that, something that causes an increase in the amount of entry is going to give us a whole bunch of products, many of which turn out to be not very valuable, not a lot of revenue, not a lot of consumer surplus, but a few that turn out to be really valuable and important. In the work with Luis Aguiar on music, we show that we get this kind of random long tail from additional entry. Now, if you think the long tail is a big deal, and we agree it is, this is an even bigger deal. Because increasing the number of products, in a context where we have no idea which products will succeed, gives us about as many good products per entrant as the existing products.

So it turns out to be a huge deal. Now let's get back, finally, to GDPR. What GDPR is doing is reducing the number of new products coming to market. Now suppose, against reality, that we already knew exactly which products were going to be valuable, and the only products that regulation-increased costs eliminated were the products with low expected revenue and low realized revenue, because in this funny counterfactual world we could perfectly predict which ones were going to turn out to be valuable. Well, then GDPR would just kill off, or prevent the birth of, a bunch of products, but it wouldn't be a big deal, because they'd all be the low-value products. But in reality-land, where it's very unpredictable which products turn out to be useful, and by useful I mean generating revenue and usefulness for consumers, consumer surplus, reducing entry by something like 50% means reducing the number of products that turn out to be useful by something like 50%. I say something like, because it's not a totally unpredictable world. It's just a largely unpredictable world.

Scott:

In the paper, you have a section addressing this specifically: does GDPR curb inefficiently excessive entry? That's sort of along this line, right? How do you, in this case, know the counterfactual? You say the answer is basically no, it's harmful, but this is about predicting something that didn't happen, and you've got a whole model behind it.

Joel:

Let me talk a little bit about how we come to this conclusion, because you're right, it's a bit subtle. At least I think it is; maybe it's obvious to others. What we do is ask the question: when fewer apps are entering, do we get fewer apps that turn out to reach certain thresholds of ex-post success? And it turns out that when the number of apps entering falls by some X percent, the amount of usage accounted for by these new birth cohorts also falls by roughly X percent, and even more sharply, the number of apps attaining particularly high levels of eventual usage also falls by a similar percentage. So it really looks like the reduction in entry is a reduction across the board in terms of how good the apps will turn out to be, again in the economist's sense of usefulness.
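
As a minimal sketch of that logic, suppose success is an unpredictable draw at the moment of entry; then cutting entry by half cuts the expected number of eventual winners by about half. The entry counts and the 1% hit probability below are illustrative assumptions, not numbers from the paper.

```python
import random

def expected_hits(n_entrants, p_hit, n_sims=2_000, seed=0):
    """Average number of 'hit' apps when each entrant succeeds
    independently with probability p_hit, i.e. success is
    unpredictable at the moment of entry."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_sims):
        total += sum(rng.random() < p_hit for _ in range(n_entrants))
    return total / n_sims

# Illustrative numbers only: 1,000 entrants per period before GDPR,
# a 50% drop afterward, and a 1% chance any entrant becomes a hit.
pre = expected_hits(1_000, 0.01)
post = expected_hits(500, 0.01)
print(f"hits pre: {pre:.1f}, post: {post:.1f}, ratio: {post / pre:.2f}")
# The ratio comes out near 0.5: halving entry roughly halves the eventual winners.
```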

Tom:

So presumably the reason that the exit of the existing apps is not such a big problem is that, with the existing apps, we know which ones are successful and which aren't. And what you're saying is that the ones that exit are the low-value ones, so it's not such a big problem. It's the entry problem that's the big problem.

Joel:

Exactly. In our view, that's exactly it; I think the facts are consistent with that, and we entirely agree. So there's this dramatic-looking spike in exit that occurs prior to, or with, the onset of GDPR. And there is, I think, some earlier exit that occurs as Google did some of its own cleanup, but that tends to drive out low-value apps, so it's not a big deal. It looks dramatic because there's a graph with a big spike in it, but it's not very dramatic from the standpoint of killing off apps that account for much usage. The bigger deal is the dog that's harder to see bark, if I may mix some metaphors. As entry falls, we get these startling-sounding statistics, that consumer surplus falls by roughly a third, but it's not immediate. That's the long-run implication of the perpetually lower entry that we see post-GDPR.

Tom:

Is that an annual figure?

Joel:

Yes, but it’s an annual, kind of, long run figure long run app since, since app last varying amounts of time. In some sense, this is after all the existing apps die and we’re just replaced by newer, smaller birth cohorts.

Scott:

Are you able to see anything about the composition of apps that left the market? We know that they're, on average, low value, but do they focus on specific things?

Joel:

We don't see much in the sense of app category. There is some evidence that it's GDPR-related, in the sense that apps that are more privacy-intrusive seem somewhat more likely to exit. That's, again, one of the senses in which the big extinction event among the low-value apps is probably just good news: these are apps that aren't very useful, and they're also non-compliant. So if one thinks that privacy is a worthy goal, and we have no objection to that, then there's not much other value being lost, and the intrusiveness is going away. We don't see shifts in the categories of apps that operate.

Sarah:

Do you have a sense of where compliance with GDPR is? Are all these apps a hundred percent compliant, or is there an enforcement mechanism by the app stores or the government?

Joel:

There's the threat of fines, and they're pretty substantial, and they have been imposed. I think there's probably debate about what the probability-weighted, expected fine is, but certainly we were attentive to discussions among the developer community in the run-up to GDPR, and there was quite a lot of discussion of it. Actually, my co-authors conducted a survey of German app developers, and they heard a lot of expressed views along the lines of, "oh, this is going to make it more costly for us to do stuff." None of which is shocking, but it's nice to hear that practitioners are actually perceiving the things that we imagine from theory would be important.

Tom:

And your numbers are from the Google store. So presumably if you accounted for apps from the Apple store, that would probably make it proportionately higher.

Joel:

Yeah, and in fact we have some fragmentary data, and in the current version of the paper, the one that's circulated, we do look at what we can about Apple. It seems as though the phenomenon is occurring there as well; there's both exit and a reduction in entry on that platform. So it's not just whatever Google did. It seems more like GDPR, because it's happening also in non-Google land.

Scott:

So people worry, or hope, I guess, depending on which side you're on, that rules in one area will spill over into another. Do you see the Google Play Store and the App Store following GDPR rules even here? Are some of those apps still available here?

Joel:

One of the challenges with writing this paper, and this will get to an answer to your question in a second, I hope, is that ideally, if you have some policy that goes into effect in some part of the world, call it Europe, and there's some other part of the world, call it the U.S., that's untreated, you can just compare what happens in treated Europe and untreated U.S. As it turns out, though, GDPR binds on any app that either operates in Europe or has customers who are in Europe. That's why you see those annoying permission clickers on essentially every app and every website and so forth. We could not find a part of the world that was untreated. We couldn't find a set of apps from somewhere that seemed not to have been affected by this.

Now, that's a challenge from a research perspective, because usually you like to have a control and an experiment. But I think a lot of the folks studying GDPR have found that GDPR is sort of extraterritorial; it is by construction. And so it's hard to find parts of the world that constitute a control. In principle, there might be apps in parts of Asia that are in languages not read or understood by Europeans, but even looking in those kinds of areas, we couldn't really find anything compelling.

Scott:

You said that you’ve gotten, you know, some heat for this paper. So tell us about that and your response to those criticisms.

Joel:

Privacy is an issue. It's a very serious issue. It's an issue about which people, I think, have strong views, strong feelings. And to be fair to everyone in this debate, a lot of what people are worried about is either hard to measure or hard to compare with the usual kinds of things that economists do measure. Enjoyable apps versus some loss of my soul: how do you balance those? Facebook's so much fun, but sometimes it gives you Cambridge Analytica, and the last five years have been great except for Brexit and fill-in-the-blank. So I think the people who are worried about this are worried about real things. And one misinterpretation that we lay ourselves open to by pointing out a cost of this regulation is that one can then jump to the conclusion that we're saying, therefore, this regulation is bad.

And I think what we're saying is: here's a cost of this regulation. So let's at least try to be careful about saying we care enough about the possible benefit to bear this cost. We don't have the answer to that question, but we do point out a cost. And I think this cost follows very sensibly, naturally, and sort of predictably from the very nature of the product: something reduces entry, but it's unpredictable which things are going to succeed, and therefore you're going to lose a bunch of things that would have succeeded. That's a loss. But again, whether it's a big enough loss to offset the benefit of increased privacy is a different question. I will say, and maybe I shouldn't say this, but I will: I think folks who study privacy have a hard time finding consumer-based evidence of a concern for this. You know, free Wi-Fi? Oh, you can have my soul for that. Maybe that's not our better selves doing that, and maybe it's nevertheless sensible for regulators to regulate. But again, this is what it costs.

Tom:

Maybe I misread the paper, or read it too quickly, but you do suggest in one section that the benefits are small and potentially even zero.

Joel:

Well, if we look at consumer behavior in and of itself: it could have been the case that consumers flocked to the new privacy-sensitive apps, which would have shown, via consumers' own choices, that there's an enormous valuation of privacy, and it all wins there. We don't see that. But again, I would acknowledge, and I think my coauthors and I have tried to be careful about acknowledging, that there may nevertheless be benefits we can't quantify but that are still important. So I think the contribution we'd like to make to the debate is: here's what it costs. You can decide what the benefits are. They don't seem to be demonstrated by consumers' own behavior, but there might nevertheless be benefits.

Tom:

What type of benefits would there be that are not demonstrated by consumers' behavior?

Joel:

I'm trying to be open-minded here. People don't always know what they're buying. I'm not a hardcore behavioralist, but I would certainly admit the possibility. I'm not an expert on this, but I don't take away from this paper that GDPR is necessarily bad. I just take away from this paper that GDPR has a cost. And so its proponents then ought to probably say, "Hey, I think the benefits are enormous." If there are proponents, you know.

Scott:

So in the paper, you're careful not to draw a conclusion about GDPR, just like you were saying here. But policymakers need information now, and there's even a privacy act being discussed in Congress; I think there may be a hearing next week. We're not going to talk about the details of that act, for one thing because I haven't read it, but what do you say to them when they ask, "should we do this? What should we do?"

Joel:

Yeah, that's a very good question, and frankly it's the important one. I guess I would want the proponents who see benefits from privacy to be as evidence-based as they can about it, and where they maybe don't have evidence based on consumer behavior, to make at least some assertions. Okay, so I'm being paternalistic here. They might say, "I think it's worth it to reduce the probability of some horrible thing happening." It's sort of like, I remember being an undergrad and learning about cost-benefit analysis and thinking, we need a name for that? Just tallying up the benefits and costs? But sometimes we do, right? If you buy our paper, then you buy that there's a cost. Now I want to put it on you to tally up the benefits, whether you can tally them based on consumer behavior or something else. I guess I would say, maybe this puts a little more burden on proponents to say how valuable they think the benefits they're achieving are, and to be honest about which benefits are faith-based and which are empirical.

Scott:

That’s a tall order to ask people to be honest.

Joel:

Oh, sorry, <Laugh> I’m an academic.

Scott:

Yeah, it does seem like the debates here are often couched as benefit-benefit. It's, you know, let's do this, and you hear people say win-win, which should always set off alarm bells, I think. So many things are tradeoffs. A lot of people who listen aren't researchers themselves; what they care about is the results, what helps them. But tell us a little bit about how you went about gathering the data and the things you had to think about. I'd like people to hear this because I feel misunderstood <laugh> as an empiricist. People don't know what's involved in this kind of work.

Joel:

I definitely want to use the "we" here, meaning the co-author team. There's a lot of work in collecting these data, because there isn't a list of apps at the Play Store. There was an incomplete list, but basically, if you go searching for an app, it gives you a list of recommended apps. So the process involved searching for all the apps and then adding all the recommended apps to the list until the whole process converged and we had all the apps, in the sense that nothing additional was being added. Then once a quarter, for whatever it was, three years, we, mostly my co-authors, went back to the data collection process, got all the apps, and saw which apps were new and which apps were no longer there.

Now, even that's a bit complicated because, as I mentioned, you don't really see all the apps initially. So we have this kind of delayed-observation problem: usually we see an app in the first quarter after its birth, or the second quarter, but sometimes it takes longer. And that by itself creates a huge problem for us, at least in my mind, because it means that at the end of the sample period it looks like entry has declined simply because we haven't given new apps enough time to show up in our data. This is sort of geeky, inside-baseball stuff, but I just want to make the point that we worried a lot about this, because we didn't want to attribute to a lack of elapsed time what we report as an actual reduction in entry. So we wanted to compare equal amounts of time after entry: how many things have we seen by, let's say, six months after entry, so we could be apples-to-apples in comparing before and after GDPR. I bet that didn't make tons of sense, but I would like to make the point that we tried to be quite careful, and we tried not to fall into the trap of letting a data problem manufacture a phenomenon that we do think actually happened but that takes careful work to demonstrate.
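
To make those two steps concrete, here is a minimal sketch of the snowball-until-convergence crawl and the fixed-horizon cohort comparison Joel describes. It is an illustration under assumptions, not the co-authors' actual pipeline: get_recommended() and the first_seen data layout are hypothetical stand-ins.

```python
from collections import Counter

def crawl_app_catalog(seed_apps, get_recommended):
    """Snowball out from seed apps, adding each app's recommended apps,
    until a full pass adds nothing new (the 'converged' catalog snapshot).
    get_recommended(app_id) is a hypothetical stand-in for the store
    queries; it returns the list of app IDs shown alongside app_id."""
    seen = set(seed_apps)
    frontier = list(seed_apps)
    while frontier:
        next_frontier = []
        for app_id in frontier:
            for rec in get_recommended(app_id):
                if rec not in seen:
                    seen.add(rec)
                    next_frontier.append(rec)
        frontier = next_frontier  # an empty frontier means convergence
    return seen

def entrants_within_horizon(first_seen, horizon_quarters=2):
    """Count entrants per birth cohort using only apps observed within
    `horizon_quarters` of their birth quarter, so cohorts near the end of
    the sample aren't undercounted by the delayed-observation problem.
    first_seen maps app_id -> (birth_quarter, observed_quarter), a made-up
    layout for illustration."""
    counts = Counter()
    for birth_q, observed_q in first_seen.values():
        if observed_q - birth_q <= horizon_quarters:
            counts[birth_q] += 1
    return counts
```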

Tom:

Did doing the research for this paper give you any insights? Let's say we don't really know exactly what the benefits are, so we don't really know if it passes a cost-benefit test, but at a minimum we would like it to pass some sort of cost-effectiveness test, to at least get whatever benefits there are at the minimum cost. Did you gain any insights into how a regulation like this could be crafted to do that?

Joel:

The first thing that comes to mind here is just going to be the answer: it's a tough question, in the sense that this seems to be very much a context in which it's hard to predict which products will turn out to be useful. In contexts like that, if we knew ex ante which products were going to turn out to be either not valuable or pernicious, we could just craft a regulation that somehow prevents them. But when it's hard to predict what's going to turn out to be useful, it's hard to know how to prevent entry without also preventing the entry of things that are ultimately useful. So I feel as though there's an inherent challenge that comes from this unpredictability feature. Again, it seems to be true of cultural products, it seems to be true of apps, and frankly it seems to be true of most new products; most or many new products fail. This unpredictability is tough. And I think there's a deeper issue, which is, how does an economy grow and why is innovation useful? Particularly when we don't know what's going to turn out to be useful, taking draws is how we discover, invent, or create useful things. So I'm really not answering your question except to say that when you put sand in the gears and reduce the number of draws that potential innovators take, well, in an unpredictable world that means you're going to reduce the number of winners that we find.

Tom:

So then even for people like yourself, economists who measure this, those costs are relatively invisible.

Joel:

They are. It's that dog that doesn't seem to bark, right? I mean, we can see a reduction in the absolute number of products coming in, but it's true: the real cost here is the absence of something that might otherwise have been. That is a toughie to measure.

Scott:

The GDPR is obviously a public policy, and the bill before Congress and the state laws are public policies, but firms themselves make decisions about this. Apple has done a lot with its own privacy rules, the newest being that you can ask an app not to track you, so that it doesn't have data on you. But your paper would imply that this, too, could reduce the supply of high-value apps in the App Store. Why would Apple want to do that?

Joel:

So that's an interesting question. I do think they like a curated environment. Traditionally they've been more of the walled garden, where you don't encounter flashlight apps that steal your soul and stuff like that. So I can see a value to that, because it could very well be that if I say, "Hey, I'm Apple and this is a safe environment," people like the environment. There might be fewer high-value apps down the road than there might have been. But at the same time, and I'm not accusing Apple of doing this, Apple could wait and see what turns out to be useful in the wild west and then encourage the developer to create it for the Apple environment as well. I see the benefit of providing a kind of safe environment, and I think Apple has traditionally done that; at least, that's been my perception.

Scott:

What’s the next step in this research?

Joel:

These may not seem connected, but I think this perspective about unpredictability is helpful for understanding the welfare effects of things that cause increases in the number of new products and things that cause decreases in the number of new products. So for my own part, I'm going to keep bringing that perspective to bear on some different questions. If you ask me in six months, I'll have a good answer, one that I know <laugh>, or one that I trust. I don't currently have additional GDPR-related things in mind, but finding at least fragmentary data about what was happening on other platforms was, I think, helpful. Everyone could imagine the same paper about Apple, if one had the data. But I think the broad strokes of the phenomenon are happening there: the reduction in entry and so forth.

Scott:

The point about unpredictability applies to so many parts of this, the data usage laws, for example. Innovations come from combining different types of data in new ways. Is that debate something that you follow?

Joel:

No, I haven't followed that. Even from your question, though, I can see that unpredictability would matter there; that would be a context where reducing experimentation seems potentially costly.

Sarah:

Have there been other studies similar to yours, or looking at the question from a different angle? I know there have been some papers on GDPR, but they take a little different tack on the question.

Joel:

There have been quite a few. I had the pleasure of discussing some of them at a recent conference, and there have been a lot of papers looking at a couple of kinds of questions; I won't do full justice to this. One kind of question is about an unintended consequence, maybe another unintended consequence, of GDPR. One of the things it does is give users control of their data, so they get to decide: should I trust you with my data or not? And one thing that could do, and I think some research has found, is that it tends to empower already-large firms, because they're more trusted by consumers. So it could have the unintended consequence of making markets more concentrated. Now, that may or may not be a problem. Maybe good providers get more business, and that's fine.

But I think it does run against some of the things some folks would like to accomplish right now. So one kind of study is about promoting usage of products from already famous, successful, high-market-share firms. Another kind of study looks at things like whether GDPR reduces the intrusiveness of apps, the extent to which they engage in tracking. I think there's a little bit of evidence of that; I think it does. My recollection of the studies is that the effects seem somewhat fleeting and not enormous, but again, they could still be very valuable. I don't know of anything that's about the welfare benefit of the products to consumers, which is what we're about. But there are a bunch of studies. It's an active research literature, just trying to see what impacts GDPR has on a variety of outcomes, like which providers win and whether there's a reduction in tracking and things like that.

Scott:

Great. Well, that takes us about to the end of our time. So thank you very much for joining us. It's a great paper; we will put a link to it in the description, and I definitely encourage people to read it.

Joel:

Oh, thank you very much. It's been a lot of fun talking about it. It's interesting that people react in various ways, but that's fine. I guess that's because it's, maybe, an important question, and maybe this answer has some value for some part of the question, I hope.

Scott:

Yeah. I mean, if you don't upset people with your research, then really, what are you doing? <Laughs>

Joel:

<Laughs> Right. But there’s an optimal amount, and no more.