Visit techpolicyinstitute.org for more!
Sept. 27, 2022

Mike Rosenbaum on Using Data, AI, and Machine Learning to Avoid Biases.

Michael Rosenbaum is founder and executive chairman of Catalyte, a recognized leader in onshore agile application services working with clients onsite or from development centers in Baltimore, Md. and Portland, Ore., and CEO of Arena. Prior to starting Catalyte, Mike received an Irving R. Kaufman Fellowship to build the first version of what is now the company’s analytics engine for talent selection and team assembly. Prior to that, he was a John M. Olin Fellow at Harvard University where he researched, wrote and taught on economics and law. Michael is also the CEO of Pegged Software. Michael is a frequent national speaker and contributor, sharing insights and advice on IT strategies and practices as they relate to application development, innovation, bi-modal sourcing, Agile, big data, onshoring and domestic sourcing. He has a JD from Harvard Law School, an MSc in Economics from the London School of Economics and Political Science and a BA from Harvard College.

Liked the episode? Well, there's plenty more where that came from! Visit techpolicyinstitute.org to explore all of our latest research! 

Transcript

Scott (00:00):

Hi, and welcome back to Two Think Minimum, the podcast of the Technology Policy Institute. It's Thursday, September 1st, 2022, and I'm Scott Wallsten, president and senior fellow at TPI and your humble podcast host. Today, I'm delighted to have Mike Rosenbaum with us. Mike is an economist, lawyer, and entrepreneur. He worked at the State Department and the White House during President Clinton's term, which is where he and I met. And after clerking for the Honorable Diana Gribbon Motz of the U.S. Court of Appeals for the Fourth Circuit, he founded his first company, Catalyte, in 2000, focusing on finding, training, and advancing technology talent. Fourteen years later, he founded Arena Analytics, which aims to help organizations diversify, vitalize, and stabilize their workforce. And more recently, Mike was a candidate for Maryland governor. What's particularly interesting for our audience about these companies is that they use data, AI, and machine learning to avoid biases built into more traditional workforce development tools. Mike, thanks for joining us.

Mike (00:53):

Thanks so much for having me.

Scott (00:54):

Give us an overview of your companies and what they do.

Mike (00:57):

Scott, thanks again for having me. And I'm so excited to be on a podcast with you because some of the initial germs of my first company actually came from work that you and I did together at the Council of Economic Advisors in the nineties. But the background behind my first company Catalyte is -- and I'll give you a little bit of background on me and sort of personally how I ended up in this place, and then sort of talk to you a little bit about the companies themselves -- so I grew up in Bethesda, and my father is a lawyer and my mother was a teacher, and my grandfather had left Germany as Hitler was coming to power. And as a result, when I was growing up, the basic message in my house was, we're not safe, because this could happen here. And that influenced me significantly and also made me think that obviously part of my goal as an adult was going to be to make the world safe and to make myself safe, because I didn't feel safe as a kid.

Mike (01:48):

Earlier in my career, I did a little bit of work in Russia around issues of macroeconomic transition, and how you transition economies so that everyone has sort of a pathway to dignity. I then realized that some of the same issues that I was dealing with in Russia had threads in the U.S. And specifically in the nineties, the threads of those issues in the U.S. came from the fact that for a lot of people in the U.S. in the postwar era, there had been pathways to dignity, to some basic level of security, without a college degree, through manufacturing and industry. And in the nineties, those industries had declined massively. And so the question was, sort of, okay, for someone who might have been able to find a pathway to dignity in the nineties, or maybe more significantly in the seventies or the eighties, because there were all these manufacturing and industrial jobs, you know, what was the next thing? And the problem was, because the industries that were growing, like technology and healthcare, frequently created limitations for folks based on class and race and gender for moving into those industries at any significant level, there were structural problems that prevented that from happening.

Scott (02:51):

Let me interrupt you for a second, because the nineties were a pretty optimistic time. I mean, there was the dot-com bubble and we thought, you know, venture capitalists would pay for everything we did forever. And was this aspect that you're talking about really on people's radar, or did you see something that really wasn't top of mind?

Mike (03:09):

It was an optimistic time for classic societal elites.

 

Scott:

Uh huh. Right.

 

Mike:

So it was an optimistic time if you went to a fancy college. It was an optimistic time if you had the credentialing and the networks to be able to access, sort of, economic, political, and social elite ecosystems in the U.S. However, if your parents were working at GM or Bethlehem Steel and had pathways to decent paying jobs because of organized labor and organized labor’s relationship with manufacturing and industry, if that was the path that you had seen in your parents' generation, and that was the path you were going to take, it was not an optimistic time.

 

Scott:

Mhm. <affirmative>

 

Mike:

Because those jobs were going away. And so when you and I met, I was doing some work in and around the Clinton administration, originally around tech policy and then around labor issues. And after we worked together at the Council of Economic Advisors office, I did some work for the Vice President's office. And the Vice President had the portfolio that included the Empowerment Zone program and policies around it. And the Empowerment Zone program specifically was built around a set of ideas that underinvested communities were untapped retail and distribution markets. And they were untapped retail and distribution markets because they were located proximate to [?] districts. And so the empowerment zone and related policies were built to incentivize retail and distribution companies to expand in underinvested communities, and I disagreed with those policies.

 

Scott:

Mhm. <affirmative>

Mike (04:30):

It might be true that an underinvested community is an untapped retail market, but more significantly, it's an untapped talent market. It's an untapped talent market because the market for talent is based on resumes, and resumes correlate with race and gender and class, but they're not necessarily great predictors of success in a job. And my field, as you know -- one of our fields -- was empirical economics. That was sort of the academic field that I was on the path to becoming a professor in. And I said, you know, if we could apply data to this problem, such that we could not rely on a resume and find a path for someone who had all of the raw material to be an awesome X, but didn't have the opportunity to get that job because of structural limitations in the economy, and we could convince the employer of X to hire that person, then everyone would be better off. Folks from underinvested communities who were subject to this bias would be better off, and employers who need better talent would be better off.

Mike (05:27):

And so that was my pitch as an alternative to the Empowerment Zone program, and I lost that argument. And so I thought I was going to go back and become an academic, and my academic advisor -- I was a fellow, and she was going to help me get the assistant professor job -- said, "I don't think you want to be an academic. The way you think, you want to be an entrepreneur." And she was right. And so I moved to Baltimore, because GM and Bethlehem Steel had been major employers in Baltimore, and their employment was down by like 95%, and as a result, all the stuff that we all know happens was happening. And I started what was originally a nonprofit organization that used what you would describe today as very simple machine learning to identify the likelihood that someone would be in the top 2% of all software engineers after receiving training, based on a series of metrics of software engineering outcomes.

Scott (06:13):

What was the starting point, I guess? I mean, yeah. Well how did you know what data to use and where it would come from?

Mike (06:18):

I didn't. So in the early days -- and for what it's worth, I quickly shut down the non-profit and started a for-profit company doing the same thing, for reasons I'm happy to talk about. But that was sort of the core entity. Originally what we did was try to use academic research on learning styles and psychology and tie it to some of the early thinking about software engineering productivity. We quickly realized that some of those ideas were not mature enough yet to really do this. What we actually ended up doing was hiring folks who today we would describe as data scientists, but at the time you'd describe as applied math folks.

Scott (06:55):

I like that term better.

Mike (06:56):

Right? Applied math background folks, who may have been biologists or may have been sort of pure applied math folks. And we basically got them to experiment. We said, figure out if you can find data that seems to correlate with an outcome. And at the same time, let's look at, sort of, the other metrics that correlate with race, class, and gender. So race and gender obviously are one bucket. For class, we used college degrees as a correlate. And what we quickly found was that we could find no statistically significant correlation between a four year college degree and performance as a software engineer. And so we kept experimenting, and over several years we experimented in a whole bunch of ways. We had a theory that hand eye coordination would correlate with success as a software engineer. And we would have people assemble tinker toys and then collect data points related to their assembly of the tinker toys.

Mike (07:44):

Like the way they approached it, how much time they took, where they started, how long it took them to change something, things like that. To see if we could correlate that with success as a software engineer. There was a piece of research that had come out of Hopkins medical school that rapid eye movements correlated with the ability to defer gratification. So we tried to use cameras to measure rapid eye movements and see if we could correlate that. The problem was, this was a long time ago, and the cameras didn't have high enough resolution to pick up on the rapid eye movements. So we could never really get good data out of it. But we sort of kept iterating: finding a theory, applying it, finding another theory, applying it. Occasionally we would do unsupervised research -- see if we could just sort of find datasets where we could pull correlations, so we would try to reduce our own biases that were introduced when we used a hypothesis. And then we correlated it with outcome metrics. So with a company like Catalyte -- what Catalyte does is identify someone who will be in the top 2% of all software engineers, provide training, and move that person into a software engineering job. And over time that expanded beyond software engineering to other tech roles.
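The experiment loop Mike describes -- propose a candidate signal, check whether it predicts the job outcome, and separately check whether it merely proxies for race, class, or gender -- can be sketched roughly as follows. This is entirely synthetic data with arbitrary thresholds, not Catalyte's actual engine; the "tinker-toy" feature name is just an illustration.

```python
import numpy as np

def screen_feature(feature, outcome, demographic, n_perm=500, seed=0):
    """Keep a candidate signal only if it predicts the job outcome AND shows
    no meaningful correlation with a protected attribute (i.e. it is not
    just a demographic proxy). Thresholds here are arbitrary."""
    rng = np.random.default_rng(seed)
    r_outcome = np.corrcoef(feature, outcome)[0, 1]
    r_demo = np.corrcoef(feature, demographic)[0, 1]
    # permutation test: how often does a shuffled feature correlate this well?
    perms = np.array([
        np.corrcoef(rng.permutation(feature), outcome)[0, 1]
        for _ in range(n_perm)
    ])
    p_outcome = np.mean(np.abs(perms) >= abs(r_outcome))
    return p_outcome < 0.05 and abs(r_demo) < 0.1

# Entirely synthetic example: a "tinker-toy assembly" signal that tracks an
# unobserved skill, and a protected attribute independent of that skill.
rng = np.random.default_rng(1)
n = 2000
skill = rng.normal(size=n)
assembly_signal = skill + 0.5 * rng.normal(size=n)  # candidate feature
performance = skill + 0.5 * rng.normal(size=n)      # later job outcome
gender = rng.integers(0, 2, size=n).astype(float)   # unrelated to skill here
print(screen_feature(assembly_signal, performance, gender))
```

The same screen, run on a feature like "has a four-year degree" in data where the degree tracks class but not performance, would reject it -- which is the result Mike describes finding.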

Mike (08:55):

So things like, how quickly does someone get promoted in a job? What's the retention in that job? If they are charging an enterprise for someone's time, how long do they keep someone? Things like that were sort of our early metrics that we would correlate with. And over time we would collect more sophisticated metrics, like what were the error rates that folks would introduce? What was the rate of ramp when someone was faced with a new piece of work? So we were able to use more and more sophisticated metrics, and over time we got more and more sophisticated models, and obviously the technology improved as well, which allowed us to make predictions with increasing levels of accuracy.

Scott (09:21):

So were your models different from others on the input side or also on the output side? You looked at their actual performance in jobs, and often in government programs, we aren't clear about the outcome. Sometimes, in the worst case, people think spending the money is the outcome, when at best that's an input. But I would imagine that firms doing training would, you know, have an incentive to look for the right outcome measure. And I don't know -- although there are many labor training programs and some of them must have had good measures -- were you innovating on both sides of the equation there?

Mike (09:50):

What we found was that helping enterprises understand what outcomes they were looking to optimize for became part of the work, particularly in the earlier years. The way we relate to each other as human beings is rife with biases, and an enterprise is made up of people. And when leaders of enterprises think about who is in their workforce, we all get that there's a natural instinct for people to look for people like themselves. And when you say, what are you looking for in an employee? You end up introducing a bunch of bias if you aren't careful about what metrics you're optimizing for. So for software engineering, particularly for software engineering that relates to building a product. So when Catalyte first started, a lot of tech was about keeping the lights on or automating an internal business process. Increasingly, tech inside of large enterprises moved towards being core to the business: building a product, understanding the customer, embedding something in the product that would allow the product to work better. And that's a very different kind of technology development, but still, typically there's sort of a set of outcomes that someone wants.

Mike (11:00):

Someone wants a product that people will buy. Someone wants that product delivered without errors. Someone wants that product delivered quickly. And so you can tie metrics to those that allow you to kind of back into who you're looking for. So Catalyte grew in the early days primarily by helping venture-backed technology companies build out their workforces. That shifted in 2011, because Catalyte got a deal with Nike, and Nike was building one of the early wearable tech products, called the FuelBand. And it ended up moving most of the software work on that product to this platform. So Nike started working with Catalyte in 2011, and in 2013 released a large set of data comparing the Catalyte platform to hiring staff directly or to sending work overseas using sort of other models, and found that this model generated half the error rates and basically delivered products at three times the speed. And that was a really important set of metrics, because for them, those were important metrics to optimize for, and their ability to identify those metrics and be able to say in an objective way, "this model for building out our workforce is working" was really critically important for Catalyte's ability to talk about productivity and workforce in the world, but also hopefully for Nike.

Scott (12:25):

So you're basically talking about Moneyball for labor.

Mike (12:27):

That's exactly what made Catalyte take off. So in the early days, we would hand out the book Moneyball as part of the sales process to explain what we were doing. But the problem is, the only people who had read the book were people really into baseball and people in the finance industry. And then around 2011, a movie with Brad Pitt came out and everyone saw it, and it became shorthand for Catalyte in its sales process to explain why it is that what Catalyte was doing was so productive. And so attractive.

Scott (12:58):

So Brad Pitt has had some positive effects on the world.

Mike (13:01):

Brad Pitt has had some positive effects in the world. And in fact, the CIO of Sony Pictures many years ago, who was so excited about this, gave us a copy of the original Moneyball poster.

 

Scott:

Wow.

 

Mike:

To put up in the office. Because he was so thrilled by this. We were doing some work for them.

Scott (13:21):

That's nice. And so you started with computer science coders, I guess. How many occupations have you been able to apply this to?

Mike (13:32):

So Catalyte today operates in sort of all the related kind of tech functions. How many actual jobs that is, is probably a much more complicated and gray question, but things like folks who do digital ad work, which isn't exactly software engineering but is related to it, quality assurance folks, and so on. So Catalyte applies these ideas to all of those related roles. My second company Arena, which you mentioned at the beginning, operates in healthcare providers. And it does work across almost all of the jobs you could imagine in a hospital system or a skilled nursing operator or an assisted living operator or CCRC, including clinical, administrative, and managerial.

Scott (14:16):

Oh, that's interesting. I don't know whether we should go into it or not, but did you start that as a separate company because of specific rules related to healthcare, or is it just hard to be in the healthcare industry?

Mike (14:26):

No… So Catalyte was really set up as a vertically integrated model to identify, provide training, and move folks into a job. Arena was actually a project inside of Catalyte that I spun out as a separate company at the end of 2014, that was really designed explicitly to reduce bias based on race, class, and gender in healthcare provider workforces. And because it was doing a very specific thing across very large numbers of people, it was just a different model than Catalyte, which is vertically integrated and routes folks into a relatively specific set of jobs. And so Catalyte operates today on a thousands-of-people kind of level on an annual basis. Arena operates on the millions-of-people level. And so it's a different scale, but what Arena does is essentially change how healthcare providers make hiring and promotion decisions.

Scott (15:23):

And you've seen different outcomes as a result?

Mike (15:25):

Massively different. So Arena is deployed into healthcare providers, and it's also deployed into places like quick service restaurants. So someone might apply to a Taco Bell who would be a really great manager in the maternity ward at the hospital down the street, but it doesn't look like it on paper. And so that person self-censors and says, "Well, everyone I know works at a Taco Bell. I don't know anyone who's a manager at a maternity ward, so I would never apply for the job." And the executive responsible for hiring that manager at the maternity ward says, "I would never hire someone whose resume says Taco Bell." And so what Arena does is route the individual from one place to another place and then build trust in the hiring manager to make hiring decisions in a different way. And the result of that is much more diverse and inclusive workforces, which I'm almost reluctant to use those words because they're so overused, but I'm happy to quantify that if that's helpful, but--

Scott (16:18):

I love quantification, so… Apparently, I also like to turn verbs into nouns, but yes, if you quantify it, that would be great.

Mike (16:27):

So Arena effectively reduces bias in ex ante hiring and promotion decision making -- the decisions that are otherwise made as to who's going to get hired or who's going to get promoted into a particular role -- by between 91% and 99% for race and gender. Class is a little bit of a mushier one, because you're looking for correlators with that, something like a college degree. But for race and gender, you have better data, because large enterprises have to report EEOC or OFCCP data. So you have access to that data to be able to measure the impact of it. So it massively, massively reduces the bias in that decision making, and it increases-- when I say inclusion, what I'm really talking about is, who's getting promoted? Who's in higher level jobs inside of a healthcare provider?
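Mike mentions using EEOC- and OFCCP-reported data to measure impact. As a generic illustration of the kind of metric such measurement can rest on -- not Arena's actual methodology -- here is a selection-rate disparity calculation, the quantity behind the EEOC's informal "four-fifths rule." The applicant counts are made up.

```python
from collections import Counter

def selection_rates(applicants, hires):
    """Hires in each group divided by applicants in that group."""
    a, h = Counter(applicants), Counter(hires)
    return {g: h.get(g, 0) / a[g] for g in a}

def adverse_impact_ratio(applicants, hires, reference_group):
    """Each group's selection rate relative to a reference group's rate.
    The EEOC's informal 'four-fifths rule' flags ratios below 0.8."""
    rates = selection_rates(applicants, hires)
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

# hypothetical applicant pools: group A hired at 30%, group B at 18%
applicants = ["A"] * 100 + ["B"] * 100
hires = ["A"] * 30 + ["B"] * 18
print(adverse_impact_ratio(applicants, hires, "A"))  # B falls below 0.8
```

Comparing a ratio like this before and after a change in the hiring process is one straightforward way to quantify a claimed reduction in biased decision making.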

Scott (17:13):

And that's actually a good segue to some policy questions. I mean, you know, that there's a huge debate about algorithmic bias-- not always well defined, but basically the, the concern that different algorithms that platforms use end up reinforcing biases. And we know that that can be true, but you're using the same tools to opposite effect. Do you have thoughts on this, on the whole debate and what, if anything, policymakers should be doing? What's the right way to think about all this?

Mike (17:40):

Totally. I mean, I think obviously, you and your audience are ground zero for certainly national, federal thinking on this topic. And I think the natural instinct is to say AI has a substantial risk of institutionalizing biases that human beings have, and therefore we should shy away from it. I would suspect that your audience is sophisticated enough to realize that that may be an overly simplistic way of thinking about it. And so the question is, sort of, where does the risk come from and how do we regulate it? And there's obviously long-standing thinking about things like explainability in AI, which is sort of a tool that folks use, but frequently folks use process -- sort of regulating process as a way of dealing with the risk that we all recognize in AI done poorly, in a way that institutionalizes bias. My own point of view on this is the problem with process regulation is that you'll never get all of it.

Mike (18:36):

And you'll always be a few steps behind technological innovation. Like, regulators will never be able to stay in front of it. You know, we as a society, and the conversations we need to have as a country and society about how we want to think about issues of privacy, how we want to think about issues of the future of these technologies, just can't move fast enough to be able to use process purely as the tool for this. And so I personally am a fan of managing, sort of, what you're trying to manage; regulate what you're trying to regulate. So what we're actually afraid of is institutionalizing bias. In the world that I operate in, regulations coming out of the EEOC and OFCCP provide a regulatory framework for this topic, because it's bias as applied to hiring and personnel related decisions -- decisions related to someone's economic opportunity and job prospects. And there are ways of managing that. So you could say no technology is allowed that influences how a hiring decision gets made. That would be sort of one end of an extreme spectrum. The problem is that today, if you look at how hiring decisions get made without technology, we all get that they're massively rife with bias. Massively rife with bias. And so by saying none of it is allowed, you prevent the innovation that actually could deal with sort of the problem that we all recognize comes from sort of evolutionary tribalist instincts.

Scott (19:55):

Right. I mean, well, that seems to be kind of a big problem with the debate overall, that there's not the-- what is the comparison? The comparison is without algorithms. We know that people are just horribly biased.

Mike (20:05):

Horribly biased.

Scott (20:07):

We have all of human history to show that.

Mike (20:10):

Totally, totally, totally. So the question is, what do you do about it? And so my companies use a method that we actually essentially repurposed from the intelligence community. So we're headquartered in Baltimore, which gives us proximity to Fort Meade, which allows us to have a pool of talent that is extremely sophisticated on these questions. And there are methods that were developed by the intelligence community around deep fakes that are essentially predicting with a constraint. And so you essentially teach a group of algorithms -- the colloquial way to say it is you're applying AI to AI, but the more specific way of saying it is, "I want to build a neural network or a model that is going to be able to distinguish between things, subject to the constraint that once that distinguishing has happened, I cannot tell based on race, class, and gender which bucket someone fits in."

Mike (21:01):

So what that means is that if you use these methods, you should be iterating. I mean, you sort of, kind of, get to a Nash equilibrium, the way that everyone who saw A Beautiful Mind would think about it. You get to a Nash equilibrium that now says, "I now have a prediction, subject to the constraint that race, class, and gender are not distinguishable in these distinctions." That's really just a method that was being used by the intelligence community to detect deep fakes. And that's what allows you to reduce that bias. Now, there's enough data flowing through something like that that a human being couldn't do it, but a technology can. And we always say to the enterprise, hold Arena or Catalyte -- hold us -- accountable to those outcomes. We have the information. You know, we're saying we can do it, hold us accountable to those outcomes.
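The "predicting subject to a constraint" idea can be sketched in code. The toy below is not the intelligence-community or Arena method: it replaces the full adversarial (GAN-style) network with a simple covariance penalty on a linear model, which captures the same intuition -- fit the outcome while making predictions statistically indistinguishable by the protected attribute. All data and parameters are invented for illustration.

```python
import numpy as np

# Synthetic data where a proxy feature (x2) carries information about both
# aptitude and a protected attribute z. Nothing here is Arena's real model.
rng = np.random.default_rng(0)
n = 2000
skill = rng.normal(size=n)                   # unobserved aptitude
z = 0.8 * skill + 0.6 * rng.normal(size=n)   # protected attribute, historically
                                             # entangled with measured skill
x1 = skill + 0.5 * rng.normal(size=n)        # a direct skill signal
x2 = z + 0.3 * rng.normal(size=n)            # a proxy feature (e.g. pedigree)
y = skill + 0.5 * rng.normal(size=n)         # later job performance

X = np.column_stack([x1, x2])
X -= X.mean(axis=0)
y = y - y.mean()
zc = z - z.mean()

def fit(lam, lr=0.005, steps=5000):
    """Linear model trained by gradient descent on
    MSE(prediction, y) + lam * cov(prediction, z)^2."""
    w = np.zeros(2)
    for _ in range(steps):
        p = X @ w
        grad = (2 / n) * (X.T @ (p - y)) \
             + 2 * lam * (p @ zc / n) * (X.T @ zc / n)
        w -= lr * grad
    return X @ w

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

p_plain = fit(lam=0)    # unconstrained: happily exploits the proxy feature
p_fair = fit(lam=50)    # constrained: predictions decorrelated from z
# The constrained model keeps real predictive signal while z becomes
# nearly indistinguishable from its predictions.
print(corr(p_plain, z), corr(p_fair, z), corr(p_fair, y))
```

In a full adversarial setup, a second network tries to recover z from the predictions and the predictor is trained to defeat it, iterating toward the equilibrium Mike describes; the covariance penalty is the one-shot linear analogue of that game.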

Scott (21:48):

You mean showing a lack of bias based on some metrics.

Mike (22:00):

You're never going to eliminate bias, right? You're never going to eliminate it 100%. But you're enough better than the baseline, to your point. You have not institutionalized the bias that was already there. That's a way of regulating that doesn't make us vulnerable to all of the weaknesses of regulating process. Because the reality is that explainability -- we could talk about explainability in, like, four more podcasts -- but explainability as a regulatory tool has some limitations, in part because neural networks are hard to explain… There's a lot of stuff in that. If you regulate the outcome, then you're able to create those limitations a little bit more directly.

Scott (22:26):

This is not a fair question at all, but is it possible to apply that to debates about bias in what your Twitter or Facebook feed shows you?

Mike (22:37):

Totally, right. I mean, you think about what's possible from a technological perspective to manage content questions. There are a wide range of possibilities there. The question becomes, what as a society do we want? And obviously, your audience is much more sophisticated on this topic than I am. But -- and you're more sophisticated on this topic than I am, but -- this question of social media platforms, consent, and privacy is a core one. I mean, I remember, a thousand years ago, Catalyte was using -- and when I say a thousand years ago, I'm talking about, like, before 2012 -- before 2012, Catalyte had experimented with using social media data. So, have someone log in with a Facebook account. And we realized that with natural language processing on private messages folks were sending on Facebook -- that folks thought were private but weren't actually, because they had consented without understanding what the consent was -- you could tell all kinds of things about someone. And I won't go into it here, but you could imagine what you could tell if you could do natural language processing on lots and lots and lots of private messages on Facebook a dozen years ago. So we experimented with that, found we were able to find signal, but decided to stop in 2011. We stopped the experimentation because we thought that folks did not realize what they were consenting to. Folks did not realize that when they logged in with their Facebook account and said, "do you give access to your account to this website?" that they had basically generated this massive data dump and had given us all this access. And this question of consent, and whether folks realize what they're making public, is better understood in more sophisticated places in society today. But I'm not sure that we're having informed public discussions on this topic that allow us to make democratic decisions about how we want to regulate it.

Scott (24:18):

I mean that comment applies to lots of issues.

Mike (24:21):

Lots and lots.

Scott (24:21):

Unfortunately.

Mike (24:23):

Lots of issues. Right? Totally.

Scott (24:23):

We're running low on time, but I wanted to ask before we finished a little bit about your run for governor. So first of all, I want the full accounting of the hundred bucks I gave you.

Mike (24:46):

Thank you very much for the hundred dollars. <Laughs> I am extremely grateful for it. I got that little ad on that social media platform.

Scott (24:46):

That’s exactly right, it worked.

 

Mike:

<Laughs> Right. So thank you for it.

 

Scott:

What did you learn in that experience, both about your business and what you want to do in the future? The things that you think are important, and what direction you might want to go.

Mike (24:56):

So I believe, and I would suspect that some of your audience agrees with me, that there are moments in history where there are massive transformations going on. So in the 1930s, for example, the idea of the New Deal related to essentially how to react to the industrial revolution, and macroeconomic reactions to that. In the 1960s, we dealt with sort of the civil rights movement, and also the changes in the idea of what the role of the public sector should be, the creation of Medicare and Social Security. And today we're going through a similar transition. We're going through a transition where a lot of the ways we think about our public sector were built in the postwar era, when a lot of the country worked in manufacturing and industry, stayed at a single employer for a long time, had a very traditional career trajectory. And that isn't how the economy is put together anymore. Specifically for the Technology Policy Institute…

Mike (25:50):

I mean, the innovations that technology represents, and the opportunities of that, we haven't fully grappled with in terms of the social compact, in terms of the deal we all make to live with each other peacefully. And obviously those of us who are fortunate enough to be at the front of this wave have benefited a lot, and most people haven't. And so obviously that's sort of what Catalyte and Arena are about. I had come to the conclusion that the public sector hadn't grappled with this question, and that the state level actually might be the right place to really grapple with these questions and come up with solutions, particularly a place like Maryland, where you have Bethesda proximate to a place like Baltimore city, where I live. Bethesda's proximity to Baltimore city represents the challenge. And so I decided to run on the premise that I was going to advocate for a change in how we think about the role of the public sector, how we think about the role of state and local government. And I continue to think that. Even though I think a lot of people share my point of view on this, I realized in the campaign that it's a different enough point of view that it was going to take a longer period of time than a campaign would allow.

Scott (27:04):

Well, let's spend a little bit of time on this view of the state's role. Are you talking about it taking more of an important role in kind of labor-like issues, a more important role on the national stage, in all kinds of things?

 

Mike:

Yeah.

 

Scott:

I grew up in North Carolina, and when I hear this... I mean, you didn't use the phrase "states' rights," and it would be hard to imagine you saying that, but that kind of framing, you know, evokes the wrong things, bad things. Right?

 

Mike:

Yep.

 

Scott:

And so, what do you view as the new role of the state?

Mike (27:27):

So today... I'll use numbers. Late last year, Maryland had one of the lower job creation rates of any state in the country, and also in the region. At the same time, there were 100,000 open jobs in Maryland that were easily measurable, as in a hundred thousand job postings, which is how we measured it, in four industries, all of which provided a pathway to a job that paid at least $65,000 a year without a college degree: healthcare, tech, skilled trades, manufacturing. So over here, you had a relatively high unemployment rate and relatively low levels of job creation. And over here, you had a bunch of open jobs. The question is, what do you do about it, and why don't folks make that transition? The way we've historically thought about it has been the classically postwar way, which is: we're going to make it less expensive to go to school. We're going to encourage people to go to college.

Mike (28:15):

We're going to make it less expensive. As we slowly realize that maybe college isn't the right answer all the time, we're going to make community college less expensive. But what about my cost of living? The reality is that if I am working in a minimum wage job and I have a kid, it's almost impossible for me to go to school, even if it's free, because I need a way to eat and to take care of my kid. So when I ran for governor, I suggested that we pay folks $15 an hour to learn full time. We pay 100% of childcare. We pay 100% of transportation costs. We give folks a $2,000 forgivable loan. And we pay all educational costs. Now, that sounds like an extravagantly expensive thing to do. So what would it cost to do that in Maryland, a state of over 6 million people, for 150,000 people a year?
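The per-participant cost of the package Mike describes can be sketched roughly. The $15/hour wage, the $2,000 forgivable loan, and the full coverage of childcare, transportation, and education come from the transcript; the program length and the childcare, transportation, and tuition dollar figures are illustrative assumptions, not numbers from the episode.

```python
# Rough per-participant cost of the proposal described above.
# Wage rate, forgivable loan, and 100% coverage of childcare, transportation,
# and education come from the transcript; values marked "assumed" are
# illustrative placeholders, not figures from the episode.

HOURLY_WAGE = 15          # $/hour, from the transcript
HOURS_PER_WEEK = 40       # assumed: full-time learning
PROGRAM_WEEKS = 26        # assumed: a six-month program
CHILDCARE = 7_500         # assumed childcare cost, covered 100%
TRANSPORTATION = 1_200    # assumed transportation cost, covered 100%
EDUCATION = 5_000         # assumed tuition/training cost, covered 100%
FORGIVABLE_LOAN = 2_000   # from the transcript

def cost_per_participant() -> int:
    """Total support cost for one participant under the assumptions above."""
    wages = HOURLY_WAGE * HOURS_PER_WEEK * PROGRAM_WEEKS
    return wages + CHILDCARE + TRANSPORTATION + EDUCATION + FORGIVABLE_LOAN

print(f"${cost_per_participant():,} per participant under these assumptions")
```

Under these placeholder inputs the package runs to roughly $31,000 per person; the point of the sketch is only that wages, not tuition, dominate the cost.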

Mike (29:05):

Like, what would the math be? You could imagine that if you do that, then folks make progressively higher wages, and that generates tax revenue. But the bigger economic value actually is in the fact that a state like Maryland has a state budget of about $50 billion a year. Not quite, roughly, because of capital budgets, and we know it's complicated. But roughly $50 billion a year. Depending on how you define it, somewhere between $12 and $15 billion of that is healthcare, almost entirely for folks who are severely poor. And healthcare experts, and I'm sure there are some in your audience because there's overlap between technology and healthcare expertise, understand that social and economic context is a bigger determinant of healthiness than the provision of healthcare. So the question is, what happens if folks have economic support, air conditioning, stable housing, and a job? And the answer is that you save a lot of money in healthcare if each individual makes more money. And those numbers are so large that for a state like Maryland, the math works out that you could make a billion-dollar upfront investment in something like this, where you pay a net new hundred and fifty thousand people every year to learn, and within five or six years you would generate two to three billion dollars a year in net free cash flow at the state level, through a combination of increased state taxes, as folks make more money, and the healthcare implications of that.
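The fiscal argument here reduces to a simple payback calculation. The $1 billion upfront figure and the two to three billion dollars a year "within five or six years" are Mike's numbers from the transcript; the linear ramp-up to that steady state is an assumed shape, chosen only to make the sketch concrete.

```python
# Back-of-envelope payback on the state investment described above.
# Upfront cost and steady-state annual return are the transcript's figures;
# the linear ramp-up to the steady state is an assumption for illustration.

UPFRONT = 1.0        # $B upfront investment (transcript)
STEADY_STATE = 2.5   # $B/year, midpoint of the $2-3B/year claim (transcript)
RAMP_YEARS = 5       # "within five or six years" (transcript)

def cumulative_cash_flow(years: int) -> float:
    """Net cumulative cash flow in $B after `years`, assuming the annual
    return ramps linearly from zero to STEADY_STATE over RAMP_YEARS."""
    total = -UPFRONT
    for y in range(1, years + 1):
        total += STEADY_STATE * min(y / RAMP_YEARS, 1.0)
    return total

for y in (1, 3, 5, 8):
    print(f"year {y}: {cumulative_cash_flow(y):+.2f} $B cumulative")
```

Under any ramp of this general shape, the upfront billion is recovered well before the steady state is reached, which is the core of the claim in the passage.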

Scott (30:27):

So your point is that something like this is possible at the state level, but not necessarily at the federal level?

Mike (30:38):

So, I think it is possible at the federal level, but probably harder to accomplish there because of, sort of, the nature of congressional politics.

Scott (30:41):

So the states as incubators of innovation.

Mike (30:41):

Exactly. Now, there are some systemic reasons why I think those are challenging things to do. Like, why is it that we don't have more leaders at the local, state, or federal level who are willing to push the envelope in a transformative way to change how this works? And the answer is that the systems themselves create risk aversion. When your basic incentive is to get reelected or get the next job, and those timeframes are relatively short, then you essentially need someone who's willing to lose reelection to accomplish it. And when the nature of the political system attracts folks whose ambition for decades has been to be in elected office, or who are currently in elected office and want a long career of it, it creates a set of incentives that make it more difficult to do the heavy lifting, politically, to make this work.

Scott (31:28):

Well, that sounds a little bit like a prediction of what you might do in the future.

Mike (31:32):

<Laugh> Perhaps. I think there's an opportunity to adjust some of the underpinnings of that system. For example, my own point of view is that organized labor, which is important in all elected politics but particularly in democratic primaries, has an opportunity, particularly for certain pieces of organized labor, to transform itself into a role that allows it to grow rather than shrink. Specifically, when people stay in a job for only about two years, there needs to be a vehicle for lifelong learning and career trajectories, one that doesn't have a natural place elsewhere in the economy, and organized labor can play that role. And if organized labor starts playing that role, then there are ways of changing how elected officials make political calculations.

Scott (32:19):

I think we should probably wrap it up with that. But Mike, thank you so much for talking with us today. It's always great chatting. Always learn something new.

Mike (32:27):

Thank you so much, Scott. This was so fabulous, and we have known each other a very long time, through multiple generations of the world, so this was awesome. Thank you so much.