Chris: Hello, and welcome to another episode of TPI’s podcast Two Think Minimum. I’m Chris McGurn, TPI’s Director of Communications. This week, we’re fortunate to be talking with Maja Brkan, who is an Assistant Professor in European Law at the Faculty of Law at Maastricht University in the Netherlands. Maja also has the distinction of being part of our AI conference, which was held earlier this week. We will be discussing with her some of the issues that she covered and presented in her paper, entitled “Do Algorithms Rule the World? Algorithmic Decision Making and Data Protection in the Framework of the GDPR and Beyond.” For those of you who are not academics, we will start by finding out what the GDPR is. She will go into what she presented at the conference yesterday, as well as give some of her opinions on how the conference went overall. We will also be joined by Scott Wallsten, TPI’s President and Senior Fellow, as well as Sarah Oh, TPI’s Research Fellow. Without further ado, I hand it over to Sarah and Maja for a conversation that we hope you all enjoy.
Sarah: Thanks for coming to Washington, D.C., Maja. I enjoyed your presentation yesterday. Would you provide a top-level summary of the research you presented?
Maja: First of all, thank you very much for inviting me, and you did a great job in pronouncing my name and Maastricht University in the Netherlands. What I presented yesterday is a question related to automated decision making, or better to say, algorithmic decision making. Algorithmic decision making is decision making based on algorithms. An algorithm, very simply speaking, is a set of steps to accomplish a certain task, but algorithms can also make complex decisions. For example, when you go to a bank, they can estimate your credit score. When you go to insurance companies, they can estimate your risk. They can also provide targeted advertisements: you surf the web, you book a room in New York, and suddenly you get a lot of advertisements for other rooms in New York. My research focuses on the legal issues of this algorithmic decision making and also on the question of citizens’ right to explanation.
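To make that “set of steps” idea concrete, here is a minimal, purely illustrative Python sketch of a rule-based credit decision. The thresholds, inputs, and rules are invented for illustration; they are not from the paper or any real lender’s scoring model:

```python
# Toy rule-based credit decision: a fixed set of steps applied to
# three (hypothetical) inputs. Thresholds are made up for illustration.

def credit_decision(income: float, existing_debt: float, missed_payments: int) -> str:
    """Return 'approve' or 'deny' by walking through fixed rules."""
    debt_ratio = existing_debt / income if income > 0 else float("inf")
    if missed_payments > 2:   # step 1: payment history
        return "deny"
    if debt_ratio > 0.4:      # step 2: debt burden
        return "deny"
    return "approve"          # step 3: default outcome

print(credit_decision(income=50_000, existing_debt=10_000, missed_payments=0))  # approve
```

Real-world credit scoring replaces these hand-written rules with statistical or machine-learned models, which is where the explanation problems discussed below begin.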
Sarah: Great. It’s a good time to ask more about the right to explanation and some definitions from the GDPR.[1]
Maja: First we should clarify what the GDPR means – it’s the General Data Protection Regulation, which is the new data protection package in Europe and is going to apply as of the 25th of May. In Europe, there are currently quite a lot of initiatives for companies to become compliant with the GDPR. With regard to the right to explanation more specifically, you should know that this field is quite a battlefield for different academics.
Academics disagree very much as to whether the GDPR introduces this right to explanation or not. I’m more on the side of the camp which says there should be a right to explanation, even though it is not explicitly written down in the article. The importance of this right to explanation lies in the fact that people should understand why a certain decision was taken and should also be able to exercise their right to contest the decision. They do have a right to contest the decision, but this right can only be effective if they know exactly why the decision was taken. Of course, there are a lot of obstacles to such a right to explanation. One of the main obstacles is technological: the fact that you cannot really explain a complex algorithm. Further research is needed in this regard.
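As a hedged illustration of what an “explanation” could look like in the very simplest case, the sketch below breaks a toy linear score into per-feature contributions. The weights, feature names, and threshold are all invented; for complex models such as deep networks or large ensembles, no such direct decomposition exists, which is exactly the technological obstacle just mentioned:

```python
# Hypothetical linear scoring model: the "explanation" is simply each
# feature's weighted contribution to the final score. All values invented.

WEIGHTS = {"income": 0.5, "debt_ratio": -2.0, "missed_payments": -1.5}
THRESHOLD = 0.0

def explain_decision(features: dict) -> None:
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    decision = "approve" if score >= THRESHOLD else "deny"
    print(f"decision: {decision} (score={score:.2f})")
    # List contributions, most negative (most decision-driving) first.
    for name, c in sorted(contributions.items(), key=lambda kv: kv[1]):
        print(f"  {name}: {c:+.2f}")

explain_decision({"income": 1.2, "debt_ratio": 0.5, "missed_payments": 1})
# decision: deny (score=-1.90), driven mainly by missed_payments and debt_ratio
```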
Sarah: Great. I do know that the European privacy framework is more conservative, or more regulatory, than the U.S. framework. What kinds of burdens does that regulation place on businesses and new firms? And what are the opposing views to the right to explanation? That it’s too costly? Or not well-defined?
Maja: Yes, that’s a very good question, actually. The privacy package, as you said correctly, can be seen as more conservative than in the U.S., because it does institute a very high level of protection for European citizens. Now, the burden that it places on companies is that it introduces a new concept, and that’s the concept of risk assessment, as opposed to the rights-based assessment applied in previous legislation. That means that all companies, when they process data – especially when they process sensitive data, such as data about race, religious beliefs, or sexual orientation – will have to make a risk assessment before they process that data, and this risk assessment will be necessary especially whenever they use algorithmic or automated decision making.
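A minimal sketch, assuming a deliberately simplified compliance workflow, of what such a pre-processing gate might look like. The category list, field names, and boolean check are hypothetical; a real GDPR risk assessment (a data protection impact assessment) is of course far more involved:

```python
# Sketch: flag records that touch GDPR "special category" data so that
# a risk assessment can be required before any processing happens.
# Categories and field names are illustrative, not exhaustive.

SPECIAL_CATEGORIES = {"race", "religion", "sexual_orientation", "health"}

def requires_risk_assessment(record: dict, automated_decision: bool) -> bool:
    """True if processing should be preceded by a risk assessment."""
    touches_sensitive = bool(SPECIAL_CATEGORIES & record.keys())
    return touches_sensitive or automated_decision

record = {"name": "A. Person", "religion": "(omitted)"}
print(requires_risk_assessment(record, automated_decision=False))  # True
```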
The problem with the GDPR, partially, is that it only applies to individual citizens. Many academics have been wondering what happens if you have a group of people – for example, if you are trying to profile a group of people living in a certain geographical area which is more prone to criminal acts, and things like that. That is something where the GDPR will still need to be complemented in the long run.
Chris: One question I have about the GDPR is that there was a two-year period during which it was supposed to be implemented. Have there been any hiccups, or has your research been modified in any way by how the process of implementing the GDPR has actually gone?
Maja: Practically speaking, there have been a lot of hiccups, because we are aware that many companies – especially small and medium enterprises – in Europe are not yet prepared for the GDPR, and that deadline of the 25th of May is fast approaching. There have been trainings offered at our university. Recently, a European Centre on Privacy and Cybersecurity was established; its director is my colleague, Cosimo Monda. They are offering a lot of trainings for data protection officers, who will ensure compliance with the GDPR within a company. Big companies, in contrast to SMEs (small and medium-sized enterprises),[2] have already introduced compliance plans. They are really trying to figure out which provisions they have to comply with, and I know that files have been prepared for that at a high management level. But I would say that for smaller enterprises there are still a lot of hiccups and a lot of problems in this regard.
Scott: Have you heard people worry about unintended consequences? For example, I was just on the phone with someone who is worried about the GDPR’s effect on piracy, in the sense that they’re concerned they will no longer be able to query ICANN’s WHOIS database to help identify music or video or other kinds of pirates. It seems like this has lots of potential implications for many industries, maybe some good, some bad. Have you heard people worry about problems in specific industries?
Maja: Yes, there has been a lot of opposition from certain voices to the GDPR, because the GDPR, as I mentioned already, offers quite a high level of protection. That, of course, has the downside you mentioned: if you protect everybody, even those who are illegitimately downloading online content, then from the data protection perspective you are less able to identify and pursue them.
From that perspective, I would mention that there is another instrument that is quite relevant in Europe, and that is the Network and Information Security (NIS) Directive. That is a directive which is quite important from the cybersecurity perspective.[3] I very much believe that data protection measures should go hand in hand with cybersecurity, which is also very much a rising trend in this field. So yes, there have been a lot of criticisms of the GDPR, especially from that perspective, but also from other viewpoints.
Scott: One of the things about the GDPR, it seems to me, is that it really puts a stark light on trying to balance people’s preferences against just straight-out economic growth.[4] We know data is a tool for innovation. Companies want more and more data; the more data they have, the more things they can do. But the more data they have, the less privacy you have. And we know that Europeans tend to have a stronger preference for privacy than do Americans. Do people in Europe – I know it’s wrong to treat all Europeans the same – feel like that’s a fair trade-off? That is, do they know that they’re giving up something, because they feel their privacy is more important, something they want to protect, even though it comes with certain costs?
Maja: Yes, that’s definitely a prevalent opinion in Europe. The protection of privacy and data protection, which are both fundamental rights in Europe, does outweigh the potential economic cost; the fact that economic growth may be a little bit lower because of that high protection is not considered an issue for European citizens. However, I should mention that the current economy doesn’t run only on personal data, but also on a lot of non-personal data. In this regard, the European Union has recently issued a lot of documents in the framework of its Digital Single Market strategy and the mid-term review of that strategy in 2017.[5] There has been a proposal for a legal act which would enable the free flow of data in order to boost the economy. So the EU and its institutions are aware of the economic opportunities that data offers, but not necessarily to the detriment of European citizens.
Scott: On that side of it, it’s trying to apply the same notion of free trade of goods and services to intangibles, like data.
Maja: Yes, free trade and even free flow of data.
Chris: We’re talking a lot about the GDPR and what it means. But to get back to your paper, and the research on algorithms that you presented at the AI conference yesterday, I was wondering how your research shows that algorithms are dictating the collection and protection of this data. Where do you see the evolution of machine learning to protect this sort of data, from a European context?
Maja: The issue with algorithms with regard to data protection is that it is quite difficult to build data protection into the design of the algorithm. We know that Ann Cavoukian coined the term “privacy by design,” and with regard to complex algorithms it’s quite difficult to implement that privacy by design within the algorithm.[6] I would say that algorithms can have very important social impacts and social implications. For example, I know that Facebook has scraped data from Facebook accounts and tried to profile individuals who have certain medical issues. They have even predicted which people might be of a particular sexual orientation, or might have psychological problems, and so on and so forth. Algorithms can even have an impact on democracy.
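As one concrete, hedged example of a privacy-by-design measure – pseudonymization, which the GDPR itself names as a design-stage safeguard – here is a minimal Python sketch in which identifiers are replaced with keyed hashes before data reaches any analytics or algorithmic layer. The key, field names, and pipeline are all invented for illustration:

```python
# Sketch: pseudonymize identifiers at ingestion so downstream algorithms
# never see raw identities. The secret key here is a placeholder; in
# practice it would live in a managed secret store.

import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(user_id: str) -> str:
    """Replace an identifier with a keyed (HMAC-SHA256) hash."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

event = {"user_id": pseudonymize("alice@example.com"), "page": "/checkout"}
print(event)  # analytics sees a stable pseudonym, not the email address
```

This works for simple pipelines; the harder problem raised above is that for complex, learned models there is no comparably clean place to “insert” privacy into the algorithm itself.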
We have seen issues with regard to recent elections in the U.S., where algorithms played quite an important role in predicting the results and in – manipulation is maybe too strong a word – influencing voters as to how to vote. The thing with algorithms is that you can really profile a person, and you can really target the person according to his or her personal needs. You can, in a way, impact his freedom of choice and his freedom of decision. Sometimes I even wonder whether all the products that you buy on the Internet are going to be customized or not.
Scott: But you’re saying that as if it’s a bad thing. I mean, isn’t it better for people to have products customized for them?
Maja: There are good sides and bad sides. Customization, I believe, is a good thing, but influencing personal choices, such as election choices, is what I find problematic, because I believe that advertising in this regard should be open to everyone, and everybody should be able to make his or her own choice. And sometimes, when you’re targeted for a specific product that’s not necessarily linked to you, these algorithms can also make wrong decisions. That’s what I’m worried about.
Scott: Elections seem to be different – it’s different than selling you soap. Usually the way we deal with that here is that you have to disclose that something is a political advertisement in support of so-and-so, and that has not been applied to online platforms yet. That’s part of the debate here: how to deal with that, because we have First Amendment issues about how much you can control what people, or advertisers, are saying online. But we want people to know what an ad is truly about, for fair elections. How are European countries dealing with that, or are they?
Maja: That’s something we will have to see with regard to the GDPR – how it is going to be applied in that respect. The GDPR does contain a provision which regulates algorithmic decision making, which I spoke about yesterday: the famous Article 22, which, if you read it, is in a way quite restrictive, because in principle it prohibits automated decision making. However, when you look a little bit closer, you see that there are so many exceptions that it looks like Swiss cheese.
So in the end, this Article only looks very restrictive. The problem with it is that I don’t think it reflects reality, because every time an automated decision is taken, the person who is the target of the decision has the right to object and always has the right to human intervention. I’m always wondering: if I’m buying a flight ticket, which is priced on the basis of dynamic pricing, and I want human intervention in that process, I just don’t see how that can happen.
Scott: It also seems like it would be incredibly inefficient. Ticket prices are different for all kinds of reasons.
Sarah: To go to Article 22 – from my notes yesterday, I’ll just read it. Article 22 governs automated decision making, giving individuals the right not to be subject to a decision “based solely on automated processing.”[7] So, is that what you’re saying, that there is an appeal process?
Maja: Yes, those are two different things. One is that whenever a decision is taken on the basis of personal data by automated processing, citizens have the right to human intervention – what we discussed just now. Human intervention should be provided in the most critical cases, and we should ensure that a human has the final say on the automated decision.
However, that is not always possible. For something less significant, like dynamic pricing, human intervention is obviously going to be less important. And then there is this other right, the right to object – which is not really a right to appeal – a right to object to the automated decision, which also empowers European citizens by giving them the possibility of not agreeing with automated decisions.
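A minimal sketch, assuming a deliberately simplified pipeline, of how the distinction just described might be operationalized: decisions of kinds deemed to significantly affect a person are queued for human review, while low-stakes ones go through automatically. The decision kinds, significance test, and review queue are hypothetical, not anything taken from the GDPR’s text:

```python
# Sketch: route "significant" automated decisions to human review,
# apply low-stakes ones (e.g. dynamic pricing) automatically.

from dataclasses import dataclass

@dataclass
class Decision:
    subject: str
    kind: str      # e.g. "ticket_price", "loan_denial", "treatment"
    outcome: str

SIGNIFICANT_KINDS = {"loan_denial", "treatment"}  # hypothetical policy
human_review_queue: list[Decision] = []

def apply_decision(d: Decision) -> str:
    if d.kind in SIGNIFICANT_KINDS:
        human_review_queue.append(d)   # right to human intervention
        return "pending human review"
    return d.outcome                   # applied automatically

print(apply_decision(Decision("p1", "ticket_price", "price=120 EUR")))  # automatic
print(apply_decision(Decision("p2", "loan_denial", "deny")))            # queued
```

Even in this toy form, the hard part is visible: everything hinges on how SIGNIFICANT_KINDS is defined, which is exactly the line-drawing question discussed below.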
Scott: But will this hold up in practice? I’m wondering how that’s going to happen.
Maja: Right, I am wondering how this is going to happen. Again, we have to distinguish between different kinds of automated decisions. If you use automated decisions for medical diagnosis, and imagine that a machine learning algorithm suggests a particular treatment which would also have very strong side effects on the patient, then probably the patient should have the right to say, “Well, I don’t want that decision. I object to that decision. Can you, doctor, revise it? Can you check how this decision was made?” But for everyday small profiling, that’s not going to be possible. That’s what I’m saying: this Article is not really realistic.
Scott: It does also seem to me that this fits with other things people were talking about yesterday. With AI, you get prediction but, at some stage, you still need judgment to know what to do with it.
Maja: Yes, exactly. I think that’s a very important element. It would be a bad thing if humans always followed these decisions blindly and did not revise them. I can understand that we might trust machines more than we trust ourselves, but there should be a mechanism of collaboration between machine learning decisions and human decisions, because, as somebody mentioned at the conference, intuition is sometimes quite an important element as well. I agree with that.
Scott: This is a really unfair question, but how are we going to decide where to draw that line? We already let machines make a lot of decisions for us. In certain elevators at large buildings, you select the floor you want to go to and the system decides which elevator you take – you let it decide the fastest way to get you to your floor. Or you type your destination into Waze and it tells you how to get there.[8]
Maja: Whenever a decision legally or significantly affects an individual, that’s when we have to have this right. But how do you draw the line between a decision that significantly affects somebody and one that doesn’t? I gave an example yesterday: if I get an online targeted advertisement to buy a car, and I follow that advertisement and actually buy the car, does that significantly affect me? That is a very debatable question. I wouldn’t say so. But then there are more important decisions, about medical treatments or other issues. When a woman is pregnant and suddenly has to take certain decisions, and she’s influenced by algorithmic decision making, that’s where you are more significantly affected.
Sarah: Related to privacy, the FTC has been holding workshops and supporting research to measure injury from privacy harms.[9] It’s an active area of research because it’s hard to put a number on those kinds of injuries – very small injuries, like the small ad that you see, compared with very large injuries. Do you know this literature? How do you measure “injury,” or, as you call it, “impact”?
Maja: The measurement of this “impact” should not be restricted to privacy alone. There is an impact on other fundamental rights as well. For example, there is the right to dignity: if a person with dementia works together with a robot in order to improve his or her state, that can have a considerable impact on human dignity. Other rights can also be affected, such as freedom of expression and other relevant fundamental rights. We shouldn’t limit ourselves just to privacy. I believe it’s a broader societal issue, where on the one hand we have to ensure that the fundamental rights of citizens are respected, and on the other hand we have to ensure the possibility of technological development, because technological development is necessary, is positive in general, and will bring about a lot of innovation and economic growth. There’s always this balance that needs to be struck between one and the other.
Scott: We’re entering a new era of trying to measure non-market activities. We started doing that with the environment 30 or more years ago; now we have this whole other set of non-market activities that have value, but we don’t know how to quantify them yet.
Chris: Do algorithms rule the world? Bottom. Line.
Maja: That’s the question. It’s difficult to answer with a yes or no. But in a few years, maybe a decade, that could be the case. That’s why I’m always emphasizing how important it is that humans remain in control in the end. I’m not as pessimistic as Elon Musk about how the world will look in a few years.
Artificial intelligence is very important and is actually the core development in technology right now. But I think that mechanisms should be in place so that the wealth generated through it is distributed in a more or less equal way among people. That might sound quite socialist, but I believe we should not allow the wealth generated through this analysis of data and algorithms to be kept only in the hands of elites. That’s very important.
Chris: I was shocked yesterday that, for a room full of economists, there was a lot of optimism too about what AI holds for people. Any final thoughts you wanted to give on your paper, the conference yesterday, or topics in general, AI or otherwise?
Maja: I’d just like to say that we are in exciting times, where we are really on the verge of the fourth industrial revolution. I believe it’s very, very important what you’re doing right now here at the Technology Policy Institute – informing ordinary citizens, and everybody who is interested in podcasts, about the implications of artificial intelligence – because I believe people are not yet aware enough of those implications. I’d just like to thank you for your good work in this regard as well.
Scott: Thank you for coming all the way over here, and presenting your paper, and talking with us today.
Chris: Thank you, everybody. Now we’re going to have cupcakes, because it’s Sarah’s birthday – we should have mentioned that at the top of the podcast. That’s why we have cupcakes today. Thank you all again for being here. We hope you tune in to this and all future Two Think Minimum podcasts.