June 5, 2024

The Economics of AI: Prediction Machines and Their Impact with Ajay Agrawal

The latest episode of Two Think Minimum features Ajay Agrawal, professor at the University of Toronto's Rotman School of Management and a leading expert on the economics of AI. Agrawal discussed his research on how AI is drastically reducing the cost of prediction, enabling it to be applied to domains like language, driving, and scientific discovery that were previously viewed as non-predictive tasks. He explained the broader economic impacts this will have by complementing some jobs while substituting for others, and advocated for an FDA-style regulatory approval process to govern AI applications across regulated industries, arguing it would provide clarity and accelerate AI investment.

Liked the episode? Well, there's plenty more where that came from! Visit techpolicyinstitute.org to explore all of our latest research! 

Transcript

Scott Wallsten: Hi, welcome to Two Think Minimum, the podcast of the Technology Policy Institute. Today is Friday, May 24th, 2024, and I’m your host, TPI President Scott Wallsten. Today we have Ajay Agrawal as part of our continuing series on the economics of AI.

Ajay is the Geoffrey Taber Chair in Entrepreneurship and Innovation and Professor of Strategic Management at the University of Toronto’s Rotman School of Management. He’s also the founder of the Creative Destruction Lab, a program for scaling science-based companies. Ajay is a world-leading expert on the economics of artificial intelligence and the author of the acclaimed books Prediction Machines: The Simple Economics of Artificial Intelligence and Power and Prediction: The Disruptive Economics of Artificial Intelligence, both with Joshua Gans and Avi Goldfarb. He also has an ever-growing number of research papers on the economics of AI in peer-reviewed journals, as well as pieces in publications that non-economists may be familiar with, like the Harvard Business Review, The Economist, the Wall Street Journal, and Science. I think he writes papers faster than I read them. Today we’ll be talking with Ajay about his work, how he sees AI affecting the economy, how policy should respond to it, and more. Ajay, welcome to the show.

Ajay Agrawal: Thanks very much, Scott. Nice to be here.

Scott Wallsten: A key insight from your work is that AI represents a drop in the cost of prediction. Tell us about that, and why that is so important.

Ajay Agrawal: Sure. First off, I think it’s useful for people who are less familiar with AI to recast its rise, AI getting better and better, as a drop in the cost of prediction, meaning the cost of quality-adjusted prediction is falling as AI advances. That reframing is important because, first of all, it reminds people that there’s no ghost in the machine; this is all just computational statistics that does prediction. And one of the key innovations, in addition to the ones people commonly talk about, like innovations in algorithms and innovations that lead to cheaper computation, is reframing problems that we didn’t used to think of as prediction problems as prediction.

Ajay Agrawal: So people weren’t that surprised when we used this computational statistics approach, machine learning, for things they were already used to thinking of as prediction problems, like banks using AIs for fraud detection. They were already using predictive analytics for fraud detection, and along came a new technology that did it better. So that was a reasonably natural application.

Ajay Agrawal: But then we started turning problems that most people never thought of as prediction into prediction problems, like driving. Most people 5 years ago would not have characterized driving as a prediction problem. But that’s how we’re solving it today.

Ajay Agrawal: Translation is another. When people were at school learning how to translate between English and Spanish, they learned all the rules to go between one language and the other, and then all the exceptions to the rules. But that’s not how AIs do it. We feed AIs thousands and thousands of pairs of documents professionally translated by multinational corporations, or the United Nations, or organizations like that.

Ajay Agrawal: And then, when we give the AI a sentence in English that it’s never seen before, it predicts what that sentence will look like in Spanish. We’ve turned it into prediction. And so many other things, too, like vision. Most people never thought of vision as a prediction problem.

Ajay Agrawal: Probably one of the first surprising applications of this modern revolution in machine learning, deep learning in particular, was in the ImageNet competition out at Stanford: turning vision into a prediction problem. So, identification of items in a picture: what’s a lamp? What’s a door? What’s a horse? What’s a dog? And once we realized that vision could, in fact, be solved with prediction, that meant we could use prediction to allow machines to see. And if machines could see, then we could have things like cars that drive themselves, or machines that can do much of what a radiologist would do when reading medical images.

Ajay Agrawal: The idea is that when prediction gets cheap, two things happen. One, everywhere that we used to do prediction, we can just do more of it, better, faster, cheaper. So that would be things like fraud detection at banks.

Ajay Agrawal: Second, we can turn problems that we didn’t used to think of as prediction problems into prediction, because now it’s so much cheaper and better than it was before. And that’s things like driving and translation. And most recently, the production of language. Probably most people know, when they use something like ChatGPT or one of its competitors, that that thing is a next-best-word predictor. You type in your prompt, and what the generative AI does is generate words. It draws them from a probability distribution, a model of the world that it has built from reading the Internet, predicting the next best word in a sequence, and in many cases the output has become indistinguishable from a human’s. So all of these things result from prediction getting cheaper.
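
[Editor's note: a minimal sketch of the "next-best-word predictor" Agrawal describes, in Python. The context, vocabulary, and probabilities are invented for illustration; a real model learns a distribution over tens of thousands of tokens from its training data.]

```python
import random

# Toy next-best-word predictor: given the words so far, look up a
# probability distribution over candidate next words and sample from it.
# The table below is invented for illustration.
next_word_probs = {
    ("the", "cost", "of"): {"prediction": 0.6, "compute": 0.25, "golf": 0.15},
}

def generate_next(context):
    """Sample the next word from the distribution for this context."""
    dist = next_word_probs[tuple(context)]
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights, k=1)[0]

print(generate_next(["the", "cost", "of"]))  # usually "prediction"
```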

Ajay Agrawal: I’ll just make one other point, which is that in economics, one of the first things students learn is downward-sloping demand curves. When you have a downward-sloping demand curve, when something gets cheaper, we use more of that thing. In this case, as prediction gets cheaper, we use more prediction in the ways I just described. But something else happens, which has to do with what economists call the cross-price elasticity between complements and substitutes.

Ajay Agrawal: When things are substitutes, meaning we can use one thing or the other, then when one of them gets cheaper, in this case machine prediction, the value of the substitutes goes down. So, in other words, the value of human prediction goes down. Human prediction is a substitute for machine prediction.

Ajay Agrawal: But the value of complements goes up, and so complements are things that we use together. So, for example, golf clubs and golf balls are complements, and so when the cost of golf clubs goes down, then we buy more golf clubs, and when we buy more golf clubs, then we want more golf balls, and so the value of golf balls goes up when the price of golf clubs goes down.

Ajay Agrawal: Here, when the cost of prediction falls, the value of the complements to prediction, the things that we use along with prediction, goes up. That includes data: because we need data for training AI models, data is a complement. Judgment: AIs do prediction, but we use prediction in combination with judgment to make decisions, so the value of human judgment goes up. And the things that take action as a result of the prediction are another complement whose value goes up.

Ajay Agrawal: The things that take actions are sometimes people and sometimes relationships, like commercial relationships where somebody’s making a prediction and that prediction affects an action in the relationship. The value of those actions goes up, and so on.

Ajay Agrawal: Two main things to have in your mind. One is when the cost of prediction goes down, we use more prediction, and two is that that also affects the value of other stuff. It lowers the value of substitutes to machine prediction, and it increases the value of complements.
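
[Editor's note: the cross-price elasticity Agrawal refers to has a standard textbook definition, reproduced here; the mapping to prediction, human labor, data, and judgment follows his examples.]

```latex
\[
  \varepsilon_{xy} \;=\; \frac{\%\,\Delta Q_x}{\%\,\Delta P_y},
  \qquad
  \varepsilon_{xy} > 0 \;\text{for substitutes},
  \qquad
  \varepsilon_{xy} < 0 \;\text{for complements.}
\]
% When the price of machine prediction (y) falls, demand for a substitute
% like human prediction falls, while demand for complements like data,
% judgment, and the things that take action rises.
```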

Scott Wallsten: I want to come back to the complements and substitutes again in a minute, because obviously it’s relevant to labor market questions. But going back to the idea that there are things we now think of as predictions that we hadn’t thought of that way before: was this a revelation even to people who had worked on AI previously and knew that the processing costs were too high to apply it to most things? Or was it also a surprise to them, realizing that suddenly much of the world could be seen as prediction problems?

Ajay Agrawal: I think it was a surprise to most people, including people who had spent their life’s work in this area on the computer science side, because many people were surprised by the range of things that could be recast as prediction problems.

Ajay Agrawal: They were also surprised by how well prediction could work on certain things. Even people who had spent their life’s work in a subfield of AI, the natural language processing (NLP) community, were mostly surprised on November 30th of 2022, when ChatGPT came out, at how good it was. In other words, if you had asked most of those people a month before, “How long will it take for us to pass the Turing test?”... and depending on who you talk to, we’ve now passed that. The basic mental model for listeners who aren’t familiar with the Turing test: you have a person sitting and typing messages, and in front of them is a curtain. Behind that curtain are a person and an AI. The message goes over the transom, and somebody replies, either the human or the AI, and they have a back and forth. When the person on one side of the curtain can’t tell whether the responses are coming from a human or an AI on the other side, you’ve passed the Turing test. The AI is indistinguishable from a human.

Ajay Agrawal: To the extent that ChatGPT, which came out as version 3.5, was in many cases indistinguishable from a human, it passed that test, and I think many people in the NLP community were surprised. What surprised them was how good the model became when they scaled it.

Ajay Agrawal: OpenAI had been working on this for quite some time. They had GPT-1 and GPT-2, which were fine, but they were not anywhere close to passing the Turing test. Then they scaled significantly, by a couple of orders of magnitude, and nobody knew how much the performance would improve just by making the model so much bigger. So to answer your question, that’s a very specific example of where people were surprised that you could effectively solve the production of language, in terms of sounding indistinguishable from a human, using prediction. And now we’ve seen it with pictures and with videos, and with such a wide range of things. I’m sure there will be some people who say, “Oh, no, I knew that all along,” but I think overall the technical community has been surprised.

Scott Wallsten: You’re no newcomer to studying AI; you were writing about its importance for prediction years before ChatGPT was a thing. Was it a surprise to you how well it performed? And was it closer to a generalized model than you expected we’d be at that point?

Ajay Agrawal: Yes, in terms of the generative AI capabilities in all of these domains, like writing and music and pictures, as well as things that I think may be more important but are much less discussed, like science. AIs that predict the structure of molecules for certain types of applications in material science and drug discovery are quite surprising. In other words, we had imagined these things in our book in 2018, but we certainly didn’t imagine that they would come as quickly as they have.

Scott Wallsten: Catherine Tucker has written that one of the things that makes AI unique, at least in the short run relative to other digital goods, is that the marginal cost isn’t zero, and that there is a real cost to these predictions, and the bigger the models are and the more complicated they are, the more costly it is. I guess, first, do you agree with that? And I hope I’m not misrepresenting what she said. But if so, do you think that there is a plateau that we’ll get to at some point where the cost of prediction won’t continue to fall?

Ajay Agrawal: Okay, so there are a few questions bundled in that excellent question. Let me try to decompose it. First of all, I agree with Catherine, at least in the short term. In other words, in the short term, everybody who’s now trying to build applications using these foundation models is incurring inference costs. Every time they run a prediction, it costs something. Sometimes the benefit is so high that the cost is fine, and other times they bump up against limitations due to the cost, depending on the application.

Ajay Agrawal: It really depends on the type of prediction. For example, I suspect many of your listeners regularly use navigational AIs that predict the best route between two places, on something like Google Maps or Waze. And we use those with abandon. We don’t worry about the cost of those predictions because they’re served up to us for free.

Ajay Agrawal: This marginal cost of prediction really applies to certain types of predictions, in particular the cost of generating inferences from generative models. As I mentioned, the community was shocked when OpenAI showed the world what you could do with a very large, powerful model in their public release of ChatGPT. With that shock, all of a sudden people were surprised by the power of what you could do with a very big model, and so they rushed out to build big models.

Ajay Agrawal: Which is why Jensen is so rich now. In other words, this shocked the world in terms of capability, and everyone ran out to get GPUs (graphics processing units) so they could build these big models. Demand shot up, but supply could not ramp up so fast, because you have to build fabs to make these things. That takes a lot of time.

Ajay Agrawal: Two things happened. One, Nvidia stock went through the roof, and two, we became much more sensitive to the geopolitics around Taiwan, because supply relative to demand went so far out of whack.

Ajay Agrawal: Now we have the CHIPS Act. We have a lot of investment in increasing supply, but that just takes time. As those new sources of supply come online and we start producing much more compute capability, the cost of these predictions will start coming down again, because supply will become much greater.

Ajay Agrawal: And ultimately you could imagine these things becoming commoditized, and in the end what we’re left with is cost driven by the cost of electricity.

Ajay Agrawal: The way I think about this: in the short term, there are these marginal costs, exactly as Catherine describes. In the long term, what I’m trying to imagine is what the economy will look like. In other words, if we want to start building infrastructure for the future economy, let’s imagine what that economy looks like when we have a much greater supply of compute.

Ajay Agrawal: At that point, compute is no longer the bottleneck. Now it’s electricity. So then we have to imagine how the electricity market evolves, because ultimately electricity equals intelligence. If we project forward, everybody who’s in that business has seen what’s happened with compute, and they realize that once the compute bottleneck goes away, all the demand rushes to electricity. Providers are already starting to react to that in terms of their investment plans.

Ajay Agrawal: I suspect that in the long run, or even the medium term, the marginal cost of these types of predictions comes down quite a bit. Right now we are experiencing the impact of the shock, where demand went up very fast, because everybody was surprised by how much power you got from scaling your model, and supply can’t react so quickly. So there’s an adjustment period.

Scott Wallsten: Right. I was trying to decide whether to segue into a discussion of how electricity markets develop, but I think that’s a whole other topic that I don’t want to go into. People are worried about job displacement, maybe economists less so, although I certainly found that ChatGPT could do a lot of my job. What’s the right way to think about how AI will change the composition of the labor market?

Ajay Agrawal: It’s a very good question. I’ve now seen what I would view as very smart people, and by smart people I mean people who have devoted many years of their careers to studying and trying to understand, at a very deep level, how technological change impacts labor markets. And even within that crowd there seems to be an increasing spread of opinions about what’s going to happen.

Ajay Agrawal: Let me preface everything I’m going to say by noting that I don’t see any consensus, even among the people who have spent their careers studying this, about where they think it will go. It ranges from the “this is no big deal” crowd, and that part of the community thinks about all this by saying, “Look, we’ve seen this all before,” and they start off by talking about agriculture. They remind us that over 50% of us used to work in agriculture, and that that sector got highly mechanized. Today, less than 2% of us work in agriculture. Yet that doesn’t mean 48% of us are unemployed: all these new jobs were created in new fields, it increased our standard of living, and the economy evolved and adapted.

Ajay Agrawal: That part of the community would say, “We’ve seen all this before, and don’t believe the chicken littles who say the sky is falling with regard to employment, because the economy adapts.”

Ajay Agrawal: Then there’s the crowd in the middle who say, “No, this is different, but it’s not necessarily bad.” I would say this is a reasonably new view that a number of people have, and it goes like this: AIs quite possibly will reduce income inequality. The reason is that over the last 50 years, as we’ve introduced various forms of automation and computerization, they’ve had a general effect on the economy that people call skill-biased technical change. The main idea is that computers benefited everybody. They made everybody more productive; it didn’t matter if you were a surgeon or a janitor. However, they disproportionately benefited people with more skills. Computers benefited everyone, but they benefited surgeons and lawyers more than they benefited janitors and truck drivers.

Ajay Agrawal: Skill-biased technical change meant the benefits were biased towards people in higher-skilled jobs. It created an increased spread in the income distribution. AIs are having the opposite effect: they benefit everyone, but they benefit people at the lower end of the skills distribution more than those at the upper end. It’s compressing the wage distribution.

Ajay Agrawal: One very compelling example your listeners can have in their minds is taxi drivers. For example, if you want to drive a taxi in the City of London, you have to go to school for 3 years to learn what’s called the Knowledge.

Ajay Agrawal: People spend a lot of time. They study maps of London, and they ride mopeds around London, learning all the one-ways and how the city is laid out. At the end of their 3-year program, they have to pass an exam where they get asked questions like: it’s 4 o’clock on a Tuesday afternoon in November; you pick up a passenger at the Churchill War Rooms, and they want to go to the Royal Botanical Gardens. What’s the fastest route? And they have to answer the question.

Ajay Agrawal: Today, you or I could fly into Heathrow, rent a car, and open up a navigational AI, and even though we’ve never set foot in London, we can navigate the city as well as a person who went to school for 3 years.

Ajay Agrawal: And look at what that’s enabled. Before Uber, there were about 200,000 professionals who drove taxis or limousines. Today there are between 3 and 4 million people who drive for Uber. All these people who had no professional training in knowing the city are able to pick up a passenger and efficiently transport them from A to B. It’s upskilled all these people, or, as some might say, it’s deskilled the job.

Ajay Agrawal: That compresses the wage distribution in that sector. And there have been studies in other areas where they introduce an AI into a job category and it helps everyone be more productive, but it helps the highest-skilled people a little bit and the lower-skilled people a lot more. It gets everyone up to a similar level, and so it has this compression effect on the wage distribution. So there’s the crowd who thinks this could be overall good: it increases productivity for everyone, and it reduces income inequality.

Scott Wallsten: Although even in that example, there are the 200,000 cab drivers who went to school for 3 years whose knowledge is now useless. With the millions who can now drive, the benefits clearly outweigh that in a societal sense. But it’s still a well-defined, smaller group of people who did lose their jobs, right?

Ajay Agrawal: Yes, and I think that’s very important, Scott. As a society, we are going to be faced with some version of that all over the economy, where we have to decide, policy-wise, whether we’re willing to make the trade where, exactly as you just said, overall society is probably better off but specific communities are harmed, in this case the taxi drivers.

Ajay Agrawal: I think most people would say society’s better off with these services that put many more cars on the road, with much more information about traffic patterns, and all the stuff we benefit from, like much more detailed receipts. But there was a community of people that paid a price for that.

Ajay Agrawal: We can imagine that right up and down the stack in terms of different profession types, whether it’s lawyers or doctors or nurses or professors or salespeople. And it’s very hard to anticipate. If you had asked 10 years ago which sector would be impacted first, nobody would have guessed taxi drivers.

Ajay Agrawal: It’s very hard to anticipate these things, and, like I say, the experts are at the moment all over the map on how they think this will unfold.

Scott Wallsten: I think I actually interrupted you as you were about to go to the third group, which I think were the chicken littles.

Ajay Agrawal: The third group is the chicken littles, and they’re the ones who are talking a lot about the feasibility of something like UBI, universal basic income, if machines start coming in and it’s just so much more efficient for society to operate with machines doing the vast majority of the work. How do we create that society? There are two issues when machines could do virtually all the work.

Ajay Agrawal: One issue is wealth distribution. In other words, the only reason we’d have machines doing work rather than people is because they’re more productive: they can produce more output per unit cost than people can. So if we were to have machines doing most of the work, we’d be doing that because it makes the pie bigger; overall, society gets more output than when humans do the work.

Ajay Agrawal: That leaves two questions. One: we’ve got a bigger pie, but how do we distribute it? One thing about people doing work is that people do work with their bodies, and the neat thing about that is, everyone’s got one. Everyone’s got a body. So it’s a crude way of distributing wealth. Of course we’ve got a lot of wealth inequality, but at least there’s some modicum of a distribution mechanism, because everyone’s got a body. If you don’t have to use your body for work, and the machines are doing it all, then we need a new way to distribute that bigger pie. That’s issue number one. So far there are various permutations of universal basic income, or some other way of distributing the wealth, and I think most people find that isn’t hugely satisfying. People are worried about what that would look like.

Ajay Agrawal: The second issue, separate from distribution, is purpose or meaning. Many people take the view that a lot of people derive their purpose or meaning from their work. So how would we function as a society if we didn’t have work? Now, on that second point, I’ve heard a lot of compelling arguments that there’s nothing innate about it. In other words, we’re not born with a gene that says we get our purpose from work. We start indoctrinating society with that right from kindergarten, and we design our school system and everything around preparing people for work.

Ajay Agrawal: But if you ask most children, they can find a lot of purpose and excitement without work. So it’s not obvious that our species requires work to have purpose. You could imagine a world where we adapt to that issue. The distribution one, I think, is still an open question.

Scott Wallsten: In some of your work, you’ve pointed out that big changes happen when we see systemic change, bringing AI into lots of aspects of a production process, whatever that may be, and that we haven’t quite seen that yet. How does that play into this discussion? Does it mean that we’re further away from having to actually deal with the labor market question? Or, as long as people are using AI to do specific jobs, is it more of an issue in the short run?

Ajay Agrawal: In the short run, I don’t think the labor issue is going to be that severe. I’m sure there will certainly be pain for narrow pockets of people, like the taxi drivers you described, but overall, in the short run, it’s very hard for companies to do what we in our book called system-level change, a complete redesign of the system. Uber is an example of a complete redesign of the taxi system.

Ajay Agrawal: Much, much more prevalent, as far as I can see, is that most industries are heading toward what we call point solutions rather than system-level solutions: keeping the way the industry operates more or less the same, but adding AIs as productivity-enhancing tools.

Ajay Agrawal: The analog back in the taxi case would be if we had a navigational AI that predicted the optimal route between two points, and we gave that to the taxi drivers. In that case, the taxi system stays identical to the way it’s always been; the taxi drivers just become a little more efficient at taxi driving. There are actually two predictions that make drivers more productive. One is predicting the best route between A and B, which everyone is familiar with from using something like Google Maps or Waze. The second is, when you drop off a passenger, where should you go next to minimize the time with an empty taxi? In other words, where should you go to maximize the likelihood of getting the next passenger as fast as possible? If you just had an AI that made those two predictions and gave them to taxi drivers, that would be a point solution: the system stays the same, and we put in a prediction tool to make it more efficient.
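
[Editor's note: a minimal sketch of the second prediction Agrawal describes, choosing where to wait after a drop-off. The zones, drive times, and predicted waits are invented for illustration.]

```python
# After dropping off a passenger, pick the waiting zone that minimizes
# expected empty time: minutes to drive there plus predicted minutes
# until the next pickup. All numbers are invented for illustration.
zones = {
    "airport":  (15, 5),   # (drive minutes, predicted wait minutes)
    "downtown": (5, 12),
    "stadium":  (10, 3),
}

def best_next_zone(zones):
    """Pick the zone with the lowest expected empty minutes."""
    return min(zones, key=lambda z: sum(zones[z]))

print(best_next_zone(zones))  # "stadium": 10 + 3 = 13 expected empty minutes
```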

Ajay Agrawal: The vast majority of the companies we see right now working to bring AIs into their operations are doing that type of point solution. They make things more efficient, and that will enhance some jobs and might eliminate some jobs, but it does not appear to be any kind of massive labor transformation. It can still have very meaningful productivity benefits for the overall economy, so it’s not to minimize that at all.

Ajay Agrawal: But that’s very different from a system-level change where we completely redesign, for example, how we deliver health care, how we deliver education, or how we do retail, the way the transportation industry was totally redesigned.

Scott Wallsten: There are a lot of interests that will want to keep that from happening, though. I mean, there were with taxis, too. But they didn’t quite know what hit them.

Ajay Agrawal: Yes.

Scott Wallsten: So you have a new paper on AI for generating hypotheses, and in the paper you lay out how this, too, is prediction. But as I was reading it, it struck me that this seems to be moving more into the judgment and creativity area. That’s kind of how I think of hypothesis testing: deciding what’s worth testing. Do you think even that can become a form of prediction?

Ajay Agrawal: Yes. Just the same way that we never thought of driving as prediction, and never thought of language production as prediction, we never thought of scientific research or hypothesis generation as prediction. People think of that as the creative part of science, scientists having this creative genius, but it can be recast as a prediction problem.

Ajay Agrawal: Think of hypothesis generation as combining ideas. Let’s say we know a thousand different facts, and we have a hypothesis that if we combine fact one, fact three, and fact 19, that’s a hypothesis for how you could design a drug to do X, or how we could design a material that has these properties, or how we could design a policy that will have this kind of effect on people’s behavior.

Ajay Agrawal: You could imagine combining many things. So you could have, in some sense, an infinite number of hypotheses, a vast number when you think of all the combinatorial opportunities there are from things that we know.

Ajay Agrawal: Where the prediction comes in is predicting the likelihood of success of any combination. We have normally thought of that as a creative genius process, where some people just combine things in a way that no one’s thought of and come up with a hypothesis that changes a scientific paradigm or leads to a discovery.

Ajay Agrawal: There are so many to choose from that when somebody comes up with something that turns out to be correct, it feels like scientific genius. What the AIs do is say, okay, there are all of these hundreds of thousands or millions of possibilities, and they predict the likelihood of each one being successful and then rank-order them.

Ajay Agrawal: It turns a process that we used to think of as creative spark or creative genius into simply predicting the likelihood that any combination will be correct. And we’re already starting to see evidence that this is a valuable approach to science. There are applications of this type of approach in a number of domains that are starting to yield results.
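
[Editor's note: a minimal sketch of hypothesis generation as prediction, following Agrawal's description. The facts are placeholders and the scorer is a stand-in; a real system would use a model trained on past experimental outcomes to predict each combination's likelihood of success.]

```python
from itertools import combinations

# Enumerate combinations of known facts, predict a likelihood of
# success for each candidate hypothesis, and rank-order them.
facts = ["fact_1", "fact_3", "fact_19", "fact_42", "fact_77"]

def predicted_success(combo):
    """Stand-in scorer returning a pseudo-likelihood in [0, 1)."""
    return (sum(hash(f) % 100 for f in combo) % 100) / 100

candidates = list(combinations(facts, 3))   # all 3-fact hypotheses
ranked = sorted(candidates, key=predicted_success, reverse=True)
for combo in ranked[:3]:                    # the top-ranked hypotheses
    print(combo, round(predicted_success(combo), 2))
```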

Scott Wallsten: We think of AIs as being for prediction, and judgment and creativity as being for humans. Is that not going to be the case? What comparative advantage do you think humans have inherently?

Ajay Agrawal: Yeah, that’s a great question. In other words, despite characterizing scientific discovery as prediction, it doesn’t in any way change our view that all the advances we’ve seen in AI are still advances in the capability to do prediction. There has been zero advance in machines having judgment. Only people have judgment.

Ajay Agrawal: The question is, which things are judgment and which things are prediction. For example, judgment in the domain of science is deciding what we should invent in the first place, and what the cost is. Another way to think of it is, what’s the cost of a mistake?

Ajay Agrawal: We generate these predictions, and we apply a prediction to make a decision, and that decision can lead to one outcome or another. Remember, these are predictions, which means they’re drawn from a probability distribution; we don’t know for sure if they’re right. So what’s the cost of a mistake? Putting weights on those costs of a mistake is judgment, and that is still totally in the domain of people.
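
[Editor's note: a minimal sketch of how a prediction (a probability) combines with judgment (the weights on each kind of mistake) to yield a decision, using the fraud-detection setting from earlier in the conversation. All numbers are invented for illustration.]

```python
# Prediction supplies the probability; judgment supplies the weights.
p_fraud = 0.04               # prediction: model's probability of fraud
cost_missed_fraud = 500.0    # judgment: cost of letting fraud through
cost_false_alarm = 20.0      # judgment: cost of blocking a good customer

expected_cost_if_allowed = p_fraud * cost_missed_fraud         # 20.0
expected_cost_if_blocked = (1 - p_fraud) * cost_false_alarm    # 19.2

# Same prediction with different judgment weights flips the decision.
action = "block" if expected_cost_if_blocked < expected_cost_if_allowed else "allow"
print(action)  # "block"
```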

Ajay Agrawal: Let me give you an analogy to help your listeners frame how we do this. I teach in a business school, and 30 years ago, if you came to the business school, one of the most popular areas was accounting, and many students would primarily focus on it. One of the key skills students developed in accounting classes was doing arithmetic in their heads. A homework assignment would be something like, “Go to page 37 of the phone book and add up all the phone numbers.” It was to practice adding up the numbers and carrying the one and all this stuff.

Ajay Agrawal: For your younger listeners, a phone book used to be a book with the phone number of everyone in a city. Anyway, then along came spreadsheets, and spreadsheets started doing all the arithmetic.

Ajay Agrawal: So for all the accountants who had invested in the skill, the muscle, of doing addition, subtraction, long division, and multiplication in their heads, the value of that skill diminished significantly. But we still have lots of accountants.

Ajay Agrawal: What do those accountants do now that machines can do all the addition and subtraction? They apply judgment. They decide: what should I ask the computer to calculate? When it calculates and spits out the answer, what should I do with it? How should I present it? What are the important questions to ask of my data? That’s what accountants do now. They do no addition or subtraction in their heads; they do all their work by supplying their judgment. That kind of transition will play out across professions with regard to prediction: machines will do the prediction, and our job will be to apply the judgment.

Ajay Agrawal: If you had told accountants 30 years ago, “No one’s going to be doing this,” they would have thought, “There aren’t going to be any accounting jobs left, because that’s all we do.” They didn’t realize that even though they spent most of their time doing calculations, they were doing another thing, too. It was just a much smaller part, because the additions took up so much of their time: they were also figuring out what to add up and subtract in the first place.

Scott Wallsten: We’re running out of time, but I want to ask you quickly about policy. Policymakers feel the need to do something, anything. What’s the right approach? Is there any legislation you’ve seen that seems particularly good or bad, like the AI Act in the EU? Chuck Schumer just announced support for 32 billion dollars for more AI research, though it’s hard to see that there’s a lack of money going to research. And people talk about guardrails, which really doesn’t mean anything in the abstract. What’s a productive path forward for policymakers?

Ajay Agrawal: Sure. If I got to work on one policy area for AI, it would be to design an equivalent of the FDA (Food and Drug Administration) for every regulated industry that is going to be adopting AI. Let me explain what I mean. The reason would be to create markets, to have more AI, not less. You and I have grown up in a world where almost every key product area we are familiar with is deterministic. Very simply, the mental model your listeners can have in mind is a light switch: when you go and flick that thing to turn on the light, we know exactly what’s going to happen.

Ajay Agrawal: The light doesn’t go on with 80% probability. You’re not drawing from a probability distribution when you flip the light switch; it just goes on. It’s deterministic. Almost everything we’re used to in the economy, all the products and things we’ve built, is deterministic.

Ajay Agrawal: Very few things are probabilistic. But one thing that is mission critical, life and death, and probabilistic is drugs. When you and I put a pill in our mouths and swallow it, nobody knows exactly how it works.

Ajay Agrawal: That’s why we have a process where we start with mice, then go up to large mammals, and then to people. Let’s say we give a drug to a thousand people in a randomized controlled trial, and of the treated people, 990 either have no effect or a positive effect, but 10 of them have a terribly bad effect. They get very, very sick.

Ajay Agrawal: Now the problem is, we don’t know why those 10 got sick. If we give the drug to another 1,000 people, we know that on average 10 of them are going to get very sick; we just don’t know which 10. And the reason we don’t know which 10 is that we don’t know exactly how the drug works.

Ajay Agrawal: What the FDA does is ask: does the benefit to society outweigh the costs? Is the benefit to the 990 great enough that we’re willing to accept the fact that 10 are going to be harmed, even though we don’t know which 10?

Ajay Agrawal: And that’s why, when you and I watch TV and a drug commercial comes on, they describe the drug, and then at the very end somebody talks a mile a minute, telling you all the possible side effects. They go through this because they have to tell you: this is a probabilistic product. Here are the possible side effects. We don’t know which of you will have them, but some of you will. So we’ve designed a method for dealing with probabilistic products.

Ajay Agrawal: Now, before we had the FDA, we had a very anemic pharmaceutical industry. Nobody would invest a billion dollars into developing a drug because customers couldn’t tell the difference between a real therapeutic and snake oil.

Ajay Agrawal: But once we created a process where we had a trusted party, the government, through the FDA, testing drugs and determining which ones were overall safe and which ones weren’t, the drug market took off. Companies started investing way more in the development of drugs because we had a third-party verifier.

Ajay Agrawal: The same will be true for AI. Imagine we created the equivalent of an FDA that took AIs, whether in financial services, healthcare, transportation, or any of the regulated industries, and ran them through a wide battery of tests to see how they perform case by case. The key is designing the method, just as we’ve designed the methods for testing drugs, to cover as many of the edge cases as possible, and then determining, okay, across all these settings it has a great outcome 98% of the time and a bad outcome 2% of the time. We don’t know exactly why, but we try to get it to work in as many of the cases as possible, and when the benefits far outweigh the costs, we approve it.
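
[Editor's note: a minimal sketch of the FDA-style gate Agrawal proposes: run the AI through a battery of simulated cases and approve only if its measured success rate clears a preset bar. The test cases, stand-in model, and 98% threshold are placeholders, not a real regulatory standard.]

```python
from dataclasses import dataclass

@dataclass
class Case:
    prompt: str
    expected: str

def approve(model, battery, required_rate=0.98):
    """Approve only if the AI clears the success bar across the battery."""
    passed = sum(1 for c in battery if model(c.prompt) == c.expected)
    rate = passed / len(battery)
    return rate >= required_rate, rate

# Hypothetical usage with a trivial stand-in model and battery.
battery = [Case("ping", "pong")] * 99 + [Case("edge", "handled")]
verdict, rate = approve(lambda prompt: "pong" if prompt == "ping" else "?", battery)
print(verdict, rate)  # True 0.99: approved despite a known 1% failure mode
```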

Ajay Agrawal: If we were to develop that style of testing and verifying these probabilistic tools, I think we could accelerate bringing them into production, and it would benefit society. It would attract more private-sector investment into the development of these AIs, and it would move society forward much more quickly.

Scott Wallsten: When governments propose rules, they’re supposed to do cost-benefit analysis, which involves making probabilistic predictions about the future. We tend not to do them, or don’t do them very well, but it is part of policymaking. What you’re describing, though, sounds like an enormous bureaucracy. You’ve got your Creative Destruction Lab; would you want to tell all of your companies there that they can’t put out their product until some agency has run it through a sophisticated cost-benefit analysis?

Ajay Agrawal: Many of them would love it. It would help them. It would clarify the market. Private investors are hesitant to invest in anything where the rules are not clear.

Ajay Agrawal: Now, that doesn’t mean they won’t. We’ve all seen the capital piling into companies like OpenAI and Anthropic, where the rules are unclear: nobody really knows what the rules are regarding copyright, or when you can use AI-generated text for this or that. So that’s an area where people have plowed in capital despite unclear rules. But consider our startups. Creative Destruction Lab is one of the largest programs of its type in the world. We have thousands of applications every year and accept 700 startups a year into the program. They’re all science-based companies, and at least half of them are applying some form of machine intelligence to whatever problem they’re working on. It could be energy, healthcare, material science, lots of different areas.

Ajay Agrawal: For many of them, one of the key barriers is a lack of clarity around the regulatory environment. To the contrary, if there were clarity, “here’s how your AI is going to have to pass these regulatory hurdles,” that would clear up a lot of the uncertainty. Everyone would have a very clear target, just like they do in biotech, and it would accelerate investment in these areas, not decelerate it. The entrepreneurs would love it.

Scott Wallsten: So let’s play that out a little bit. What would that mean for a chatbot? Who would test it? What are the criteria for having it approved?

Ajay Agrawal: First of all, some sectors are regulated and some are not. Take a chatbot doing customer service: customer service right now is, for the most part, not a regulated industry. You have humans right now who make mistakes, who make false claims, who do a variety of things, and when those humans are replaced by AIs that make mistakes, the AIs are in the same unregulated territory the humans were.

Ajay Agrawal: If you were to ask me, should we regulate that industry? I would say it doesn’t seem like it. Society chose not to regulate that activity when it was done by humans, so it’s not obvious why we’d want to regulate it now. There are market forces that do their thing. For example, there was recently a very high-profile case in Canada, where an AI chatbot doing customer service for the major airline gave a customer some wrong advice.

Scott Wallsten: I remember that.

Ajay Agrawal: Yeah. And now that case has gone to court, and it’s having an effect on the market for customer-service chatbots, because companies are asking, what’s my liability for using this AI to do my customer service work? So I would say the market’s doing its thing, and I’m not sure we need government intervention there.

Ajay Agrawal: But in areas where we already have government oversight, for example in insurance, using AIs to predict the risk of different people for life insurance, home insurance, and car insurance, that’s already regulated to prevent, for example, bias against protected classes. We should apply the same rules to AIs making underwriting decisions that we currently apply to people making underwriting decisions, while recognizing the probabilistic nature of these AIs.

Ajay Agrawal: And because I’ve had a close-up look at the insurance market, I believe the insurance industry would significantly accelerate its use of AIs if there were more regulatory clarity there.

Scott Wallsten: So it’s not really so much the government approving AIs per se; it’s deciding that we have these rules and they’re still going to apply. Design your AI so that it complies, or else you’ll be prosecuted the same way you would be if people were doing the work.

Ajay Agrawal: I think it’s one more step beyond that. It’s: design your AI, then send it to us and let us run it through our series of tests. It’s like the FDA, but it could be much more efficient, because you don’t need human bodies and you don’t have to wait so long to run clinical trials. Everything’s done in simulation. You send in your AI, it’s run through a whole bunch of tests, it’s all software, so it’s much faster and cheaper to administer than FDA approval for drugs. But it’s still a process and still a test. And every time you update or change your AI, you send it in to go through its approval process again.

Scott Wallsten: Since you mentioned Canada, just one quick thing at the end. You, Josh, and Avi are all in Toronto. What is it with Toronto? Were you all separately interested in this, and it was coincidence that you were all there? Or was it from your interactions together that you built up this interest? Is it an example of local spillovers, or just coincidence? What’s going on there?

Ajay Agrawal: Yeah, great question. And it’s all of those things in the sense that Avi, Joshua, and I created this thing, the Creative Destruction Lab. We just serendipitously launched this in 2012, and the mission of Creative Destruction Lab was to enhance the commercialization of science for the betterment of humankind.

Ajay Agrawal: We had these science founders who came into the program. Our very first year of operating was 2012, and that was also the year a magical thing happened involving a professor at the University of Toronto named Geoff Hinton, one of the pioneers of artificial intelligence and machine learning.

Ajay Agrawal: He sent a couple of his students to compete in a contest called ImageNet, run by Fei-Fei Li, a professor at Stanford. Fei-Fei had spent a significant fraction of her career betting on the primacy of data, of labeled data, for generating intelligence. She had amassed this very large dataset but had not figured out how to really leverage it. She had made a big career bet on the importance of data but didn’t have the algorithm. And someone had given her the idea, “Why don’t you outsource it and see if someone else’s algorithm can demonstrate the power of your labeled data?”

Ajay Agrawal: She ran it for one year and got nothing earth-shattering. The second year she got better results, but still nothing earth-shattering. Then in the third year, 2012, these grad students from Toronto came and brought their new technique called deep learning. Deep learning had been promising, but it had never been that compelling, because they’d never been able to use it on something with that much labeled data. So they took this novel algorithm from Toronto and this novel dataset from Stanford, put them together, and for the first time showed the real power of that data combined with this type of algorithm.

Ajay Agrawal: That same year, 2012, another student from Geoff Hinton’s lab in Toronto came into our very first cohort at the Creative Destruction Lab, where Avi and Joshua and I were working. His name was Abe Heifets, and he said, “Hey, we’ve got this new technique from our computer science lab called deep learning. I want to use it to predict which molecules will most effectively bind with which proteins, to change the way we do drug discovery.”

Ajay Agrawal: So he came into the lab. That was our first introduction to this new thing called deep learning. The second year, a bunch more grad students came. One was using the same technique to do early detection of brain degeneration, like Alzheimer’s, from 30-second snippets of voice recordings over the phone.

Ajay Agrawal: Someone else was using it for early detection of fraud in financial transactions, and someone else to detect criminal activity from security cameras. All of a sudden, we were like, “Whoa, this one technique can be used across all these different problem sets.”

Ajay Agrawal: We saw the first example in 2012 and a bunch more in 2013, and then Avi and I had sabbaticals the same year, 2015. We came to Stanford for a year and saw what was starting to happen in the Bay Area. We took the ideas back, met with Joshua, and that became the seeds for our book. So I think it was the serendipity of all of us being together, the application of the research we had been doing in the Creative Destruction Lab, and the proximity to the computer science department at Toronto, where some of the very pioneering work in AI was happening. All those things together were the kind of serendipity that led us to start working on this. I think we were among the first in economics to really lean into the possibility of what machine learning would bring to overall commercial activity.

Scott Wallsten: That’s fascinating. I mean, it’s funny that it was serendipity and coincidence that led to all this work on prediction.

Ajay Agrawal: Could not have…  

Scott Wallsten: Couldn’t predict it.

Ajay Agrawal: Yes, yes, exactly. That’s a great irony to wrap up our conversation.

Scott Wallsten: Yes, thank you so much for talking with me. That was really interesting. I really appreciate your time.

Ajay Agrawal: Great, Scott! Well, thanks for your interest in our work. I really appreciate that.