Pondering AI

Simon Johnson takes on techno-optimism, the link between technology and human well-being, the law of intended consequences, the modern union remit and political will.

In this sobering tour through time, Simon proves that widespread human flourishing is not intrinsic to tech innovation. He challenges the ‘productivity bandwagon’ (an economic maxim so pervasive it did not have a name) and shows that productivity and market polarization often go hand-in-hand. Simon also views big tech’s persuasive powers through the lens of OpenAI’s board debacle.

Kimberly and Simon discuss the heyday of shared worker value, the commercial logic of automation and augmenting human work with technology. Simon highlights shareholder capitalism’s current view of labor as a cost rather than people as a resource. He underscores the need for active attention to task creation, strong labor movements and participatory political action (shouting and all). Simon believes that shared prosperity is possible. Make no mistake, however, achieving it requires wisdom and hard work.

Simon Johnson is the Head of the Economics and Management group at MIT’s Sloan School of Management. Simon co-authored the stellar book “Power and Progress: Our 1,000-Year Struggle Over Technology and Prosperity” with Daron Acemoglu.

A transcript of this episode is here.

Creators & Guests

Kimberly Nevala
Strategic advisor at SAS
Simon Johnson

What is Pondering AI?

How is the use of artificial intelligence (AI) shaping our human experience?

Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.

All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.

KIMBERLY NEVALA: Welcome to Pondering AI. Thank you for joining us as we continue to ponder the realities of AI, for better and worse, with a diverse group of innovators, advocates, and data professionals.

In this episode, I'm pleased to bring you Simon Johnson. Simon is the head of the Economics and Management Group at MIT's Sloan School of Management. In addition to a studied academic career, he has worked in advisory and policy roles, including a stint as the Chief Economist and Director of Research at the IMF.

Simon is also the author of the recent book, Power and Progress-- Our Thousand-Year Struggle Over Technology and Prosperity. Today he joins us to discuss AI in the context of the future of work and the prospects of shared prosperity. Welcome to the show, Simon.

SIMON JOHNSON: Nice to be with you.

KIMBERLY NEVALA: For those who aren't familiar with you, can we start with a quick recap of your professional journey through to your current academic interests. And why was it important, or what was the spark, for writing Power and Progress now?

SIMON JOHNSON: I'm an economist, and I've worked on economic growth and economic crises and recoveries for more than 30 years. I've worked with my good friend, Daron Acemoglu, on why some countries become rich and stay rich and other countries really, really, really struggle. It has seemed to us over the past decade or so that some of those rich countries, like the United States - rich on average - have increasing problems of inequality. That some of the things that used to go really well in terms of how we developed and deployed technology and used that to help people, some of those things have fallen away. Not just recently, but over a period of decades. We wanted to engage with those topics at the same time as we realized - because we were talking to lots of people - that something around AI was coming, and something big.

So, if you like, ultimately the book is the thousand-year back story for artificial intelligence: what it is, where it came from, but most importantly, how it may affect us, how it could go well, but why it may also go really badly for many people.

KIMBERLY NEVALA: A fascinating read. One of the items that jumps out, which I started to reflect on after reading the book, was whether we're over-indexing on the impact of technology itself as the sole or primary determinant of progress and prosperity. As you think about the role of technology broadly, how do you contextualize where it sits and its position as a lever to progress and prosperity?

SIMON JOHNSON: Well, obviously, over the past 250 years or so, human societies have figured out things about technology that had eluded them for the previous 2,000 or 20,000 years.

So there has been a liberation of potential, which is very much about invention. Steam power. Oh, we can use coal in this way. Wow. Now we can run trains. Aviation. These are all really big ideas that humans had speculated about for a long, long time. And there had been little experiments in this foreshadowing. But really, we've had these massive, massive breakthroughs.

However, at the same time, we have discovered, rather brutally, that while these things we can invent are amazing and positive and can raise productivity, we can have higher incomes and better lives, we can also do incredibly evil things to ourselves, such as massive wars.

And I think World War I, Kimberly, these four years of destruction - machine guns, military aircraft - all of that came together. Afterwards, there was this collective ‘Oh, my God, what have we done’ feeling. So the 1920s and 1930s were incredibly unsettled. There was a lot of soul-searching. And then, of course, World War II comes.

So we had, at the beginning of the 20th century, this horrible awakening that technology has a super dark side but also some positives. So finding that way to those positives: we did get a lot of the positives in the '50s, '60s, '70s, maybe into the '80s. And somehow, they've drifted away from us again. Getting back onto a better track is what we're about and what our book is about.

KIMBERLY NEVALA: One of the things that's interesting in the current narrative is discussion of why AI is fundamentally different than pivotal tech that has come before. Much of that discussion tends to focus on the fact that the current instantiation of AI aims to address or automate cognitive versus physical work. That somehow this fundamentally then changes the game. Therefore, there's not necessarily a lot we can learn from history. I'm wondering, from your perspective and all of the research that you've done, do you see AI as fundamentally different than the tech that's come before? Why or why not?

SIMON JOHNSON: No, we see it as substantially similar, in the sense that, as you said, you're replacing people with machines or algorithms. Specifically, what you're doing is you're taking over some tasks that were previously done by people. And the tasks that are suitable have some routine element.

So obviously, earlier machines - if you're banging a piece of metal in the same way over and over again - that's a routine task. We can have a machine do that with more accuracy or have a human operator for the machine. So that combination could work out quite well.

Now, we are certainly talking about cognitive space. We're talking about some of the tasks that cognitive workers did. And, of course, the somewhat unpleasant realization for those of us who work in this space is that a lot of the things we do that we think are quite original are actually rather routine. I think I've had three original thoughts this morning. I probably haven't. I've probably just done routine things.
So to recognize that about ourselves is unsettling. Recognizing that there are routine elements in highly creative tasks like writing movie scripts. That's very creative, but there's also a lot of routine elements in there, like editing and fact-checking or whatever. I think that's unsettling.

But the basic issue is: when and how do we replace people with machines? Or the alternative path that we recommend, is to find ways to use machines to enhance what humans can do.

Then the interesting question is, is this just for people with a PhD or an advanced degree from a fancy university? Or are there ways you could use these new algorithms to enhance the capabilities of people who don't have a lot of formal education, but who have important things to do? Either for money or not for money. Is there a way to develop technology that better augments those people and their abilities?

KIMBERLY NEVALA: One thing that this brought up for me was the question of whether we're falling into a trap of selectively heralding the beneficial outcomes of technology - technology is always good - without acknowledging the work that's required to achieve it and perhaps, as you just referred to, some of the very brutal ramifications that have been inflicted in the past on the path to get there. Is that a fair question or observation, concern?

SIMON JOHNSON: Yes. I think you put that very well. I think that's exactly what we do. And, of course, this is not the first society, it's not the first moment when people have done this. It was also done in the early 1800s.

So the British industrial revolution was underway. There was a lot of mechanization of cotton spinning and weaving, coal mining. Metalworking also got pulled into that. We know that that revolution really started in the 1780s with some pretty big transformations in productivity in some sectors. But we also know that by the 1840s, there were still children in the north of England, where I'm from, whose entire life consisted of working deep underground 12 hours a day in the pitch black. Six days a week pushing a coal cart with their head. That was a job in the 1840s.

So that was a 60-year lag, Kimberly, between some really fundamental changes in technology and people, including small children, living really very badly. In the meantime, there were a lot of highly educated people who said, oh, you couldn't change this. You can't reform it. You can't have limits on hours. You can't limit child labor because that will disrupt the whole world or change the British model or ruin our shared prosperity.

I think we're at another moment like this. I mean, we know more now and we're obviously richer on average, a lot richer than we were back then. Now is a good time to say, well, hold on a minute. Is that really true that if we limit or apply technology in certain ways, or if we put guardrails in, that will disrupt the world? Or is that just going to be a better way to help deliver benefits to more people?

KIMBERLY NEVALA: There does seem to be an almost, at the risk of sounding very harsh, callous disregard for this question. We say, well, nobody would go back. We wouldn't want to go back today to the 1800s, or to the 1700s, or even to the early 1900s. So on par, yes. It's going to be rough here for a while. And it does hopefully make us question a little bit, ‘does it have to be that way’? Is this an absolute requirement? But I believe your answer would be, no, it doesn't need to be that.

SIMON JOHNSON: Absolutely. Absolutely doesn't. Nothing has to be this way or that way, Kimberly. These are all choices. These are social choices. Now, we might delegate those social choices to this or that billionaire. Wouldn't recommend that, but we do that sometimes. But people, humans are making choices whichever way we go.

KIMBERLY NEVALA: There were many concepts in the book I pulled out. One was the power of persuasion. We talk a lot about power dynamics, that's a big part of the conversation today. Particularly for those who are pushing from the outside. Those who are part of probably what you call the countervailing forces. So talk a little bit about the power of persuasion and the elements of persuasion. Which, in some cases, is coercive, but not always. What are the implications of the power concentration today in a few very big tech companies?

SIMON JOHNSON: Well, what happened recently with regard to OpenAI is very interesting and quite instructive. Let me also add the caveat that we don't know exactly what was said by whom and when and so on. That will come out over time.

What appears to have happened is that the board of OpenAI said that Sam Altman was not doing what they wanted him to do. And he then left that group. But he immediately established that he was going to continue with this line of work. He would get the full support of Microsoft and most of his former employees indicated they would move over with him. Consequently, the board of OpenAI had to take him back in, and most of them resigned.

So I think what you got there, Kimberly, was a very nice example of how persuasion works. Because he didn't have to make long speeches. I don't think he wrote any big op-eds. He did some tweeting or x-ing; whatever we call it now.

And he basically conveyed, quite effectively, that if anyone, like consumers, Microsoft, or the US government, wanted progress with this technology - which implicitly was understood to be fundamental to, I don't know, profits of Microsoft, the national security of the United States, whatever else you could imagine - that you needed Sam Altman to be in a position of authority. With a lot of money to do whatever is needed in terms of model training and deployment and with his previous people supporting him.

I mean, that was an amazing 24 or 48 hours of some of the most effective persuasion that we've ever seen done in an extremely modern way. And no coercion. No coercion at all there. So that's a perfect example of how we are beguiled by technology and by technologists. And, consequently, hand them the authority to do whatever it is that they want with very little or almost nothing by way of safeguards.

KIMBERLY NEVALA: So let's go back to what we can do about that. Circling back to some of the predominant framings, one of the things that is part of the common narrative, as we've been alluding to, is the idea that technology, as it improves, will by definition make things better. If we make things more productive, if we make people more efficient, et cetera, et cetera, we will all eventually - maybe by fiat - benefit.

In the book you talk about the productivity bandwagon, which is, I think, your term for the core of this thesis or framing. Can you tell us what the productivity bandwagon is and how it's bandied about today? In your research you found it doesn't stand up to scrutiny.

SIMON JOHNSON: Yeah. So the productivity bandwagon is an idea that we contend is so pervasive in economics it doesn't even have a name, so we had to invent the name. It's very simply the idea that technology improves. That increases productivity of labor. That boosts the demand for labor. That raises wages and improves living standards and so on and so forth.

And you're right. We do spend a lot of the book going through the history of technology and looking when that does apply because it does sometimes. But, also, in many cases when it didn't apply, hasn't applied.

One of the most spectacular would be what I mentioned a few moments ago, the early British industrial revolution. Productivity increased for the workers who had a job. For a lot of other workers, it was much harder to make a living. The demand-for-labor outcome was pretty mixed. The standard of living for most British workers in the 1840s was not significantly higher than it had been in the 1780s. That was an instance where productivity did not deliver any kind of bandwagon onto which people could hop. On the contrary, the benefits of that early industrial revolution fell into the hands of just a very few people.

KIMBERLY NEVALA: Another takeaway, rightly or wrongly, from the book is that, in fact, we need to take a much more active stance if we want to not fall down this rabbit hole yet again. And part of that stance needs to be ensuring that as technology develops, that everybody at all levels benefits from the increased economic gains that it creates.

There's a bit of almost mysticism out there about, hey, we shouldn't worry about this. Humans have always found things to do in the past. It's OK if we automate away some of these components because that will just free everyone up to do more things that they enjoy and that they like and whatnot. Can you talk a little bit about the underpinnings of this automation bias, if you will, and why that is potentially harmful?

SIMON JOHNSON: Right. Absolutely. First of all, we don't think you can stop automation, Kimberly. Automation is very powerful. It has a strong commercial logic. And many companies are plowing ahead with it with regard to AI.

But the key in previous historical episodes has always been: do you create a lot of new tasks for humans to do? And are those tasks that require expertise, and do people get paid accordingly? So, for example, when the US auto industry was transformed in the early 20th century: in 1900, they made about 2,400 cars a year total in the whole of the US. In 1929, GM and Ford made 1.5 million cars each. That car industry, vehicle industry at the end of the 1920s, employed about 400,000 people, mostly in good jobs. Now, there was some union pressure, which also played out in the 1930s, to raise wages. But many of those jobs required expertise. You wanted people to stay in the job. You paid them a premium wage to do that. That's what we're looking for in this modern situation.

Unfortunately, what happened after 1980 with the digital transformation is not that. What happened was a lot of automation, a lot of replacement of blue-collar and some white-collar workers, and not mass unemployment. That's important. But there was a bifurcation or, as my colleague at MIT David Autor calls it very accurately, a polarization of the labor market.

You can see this definitely for men's earnings. It's also there for women. But what you see is people who went to college or have a post-graduate degree have rising earnings in real terms. Men, in particular, who did not go to college or did not complete college have flat or declining real earnings over a period of decades.

What that tells us is it is entirely possible in this economy to have productivity transformation, to have automation and to have further polarization of the job market. That's the post-1980 version. Or we could go back to something closer to the US auto industry and other industries 120 years ago. Which was shared prosperity and rising real wages across all education levels.

KIMBERLY NEVALA: Let's talk about the key factors that drove that shared prosperity.

If we look at some of the narratives around generative AI, for instance (and this is often true) where folks will say, hey, we can use these. They will make your less skilled workers more skilled. So someone who is not as good of a writer, doesn't have as much experience, these tools can give them a leg up.

The next part of that sentence, however, doesn't seem to be: they are now operating at a level more akin to someone who was more experienced and more highly skilled, therefore we're going to pay them more. It seems to be: therefore, we can use less skilled workers to do more “skilled work” and then not raise their wages but, in fact, lower them for everybody. It seems like a downward pressure.

Your book was interesting because there were a couple of examples of this doesn't necessarily have to happen. You were talking about GE, or even the longshoremen, which is very physical work embracing technology. But being given opportunities to do that in a way that was effective, efficient, and (affirmed the) old rising tide lifts all boats philosophy.

So talk to us about what that period where, in fact, yes, there was rising productivity but also shared, increased prosperity and decreased equality. What were the key underpinnings?

SIMON JOHNSON: Well, first of all, Kimberly, I would say that we had a conference - a three-day event at MIT - on the future of AI. My highly unofficial summary is that the academics and creative people said, this is great. We can do all these amazing things with new technology including what you just said. Which was upskill workers who don't have a lot of education so they can become more productive. The business finance people who were at this conference said, this is great. We can fire more workers. And that's the fundamental tension.

I think the answer to a specific question, ‘what happened when things went well?’ is a combination of three things.

First was more task creation. Those tasks have expertise. You pay people for their expertise because you don't want to lose them because hiring new people is expensive. So the sort of hardheaded economics piece.

The second was trade unions. You mentioned the longshoremen. The longshoremen's union at the moment that we've talked about in the book, the '50s and '60s, was rather proactive in embracing technology and insisting that their members got trained to use it. The United Auto Workers at GM in the 1950s did the same. So that's increased expertise through new task creation, driven by trade unions.

And I think it was also - well, I know it was in the 19th century, early 20th century - political reform.
So a more participatory democracy. Certainly before 1832 in Britain, but also in the late 19th century in the US, a lot of power was in the hands of relatively few people who had a lot of money and could buy their way into politics one way or another. So opening up the political system and making it more amenable to pressure, giving people political rights, and not allowing employers to oppress them, that was really important in those early days.

Now we have to ask the question. We have some new expertise being created. We'll see whether AI ends up replacing people completely. We don't have strong trade unions in the private sector. And our political system is a complete mess, with the effects of technology, including social media, making it much harder for people to understand what's really going on and to reach sensible decisions.

KIMBERLY NEVALA: And certainly, in some of the examples you underline and go through in the book, there was what I'm going to call a positive tension. Organizations and companies were not necessarily running unfettered and in front of the pack to ensure that workers had good options and that the existing workers that they had were being trained. But once there were some of these pressures, that cooperative dynamic did work.

But that took, my understanding is, active work. For folks to not just say, hey, we're sorry your job is being automated, but don't worry! In some unforeseen 10, 20, 50 years you're going to look back and we're all going to think this is good. Because yes, you don't have a job now. But a new job is going to come around. We don't know what that is yet, but just hang on. It'll be good.

Is that a fair sort of recap of part of what the problem is here as well? Which is we are not taking active responsibility, whether it's as companies or as public institutions, to ensure that as workers are being displaced, there is somewhere for them to go.

SIMON JOHNSON: Yes. That's a good way of framing it, Kimberly. The key word there is "active." So you can either be passive or active. You can say, well, the market will take care of it. Well, the market gives you the kind of outcome we've seen over the past 40 years. Which is a massive job market polarization. That's the most likely outcome.

Or we could be more active. Then the question is, ‘what’s the balance’? Is it government? Is it civil society? Is it companies? Who can be active? Who can create those jobs? Where do the good new jobs come from? How do you make sure that those opportunities are available to many, many people at all levels of education, not just to a few people who are highly educated?

KIMBERLY NEVALA: Tell us what are, in your mind, the key pillars of making a more humane transition. To change the course that we seem to be barreling down right now relative to AI and maybe even technology more broadly?

SIMON JOHNSON: Well, I think the most fundamental thing, Kimberly, is to have this kind of conversation that we're having today.

Because while economists love to talk about the law of unintended consequences (they rather delight in that, actually; it's a cultural thing), I would say there's also a law of intended consequences. Which basically says if you don't try to do things, they're very unlikely to happen.

So if we're not trying to create more good new jobs, and arguing and thinking about what that means, how we get there, and whether these are really robust jobs - if we're not having that discussion, then we're not likely to get them.

And, as I said, I have talked to some very senior corporate leaders in this space, in the technology space, Kimberly, recently. Extremely thoughtful people who would be very comfortable with having this kind of discussion. But at the end of the day, you either ask them or they tell you or you can infer from their subsequent actions: what are their priorities? It's to cut labor costs. And this is a technology that's very suitable for cutting labor costs.

Somebody I know runs a call center employing US people. It used to take seven minutes on average to handle a call. Now it takes six minutes because the AI handles the transcription of the call, and it does it better than the humans do. So that's responsible from a business profit point of view. But also, in terms of safeguarding customer data and making sure you accurately record what customers needed and how you dealt with it, it would actually be irresponsible not to use the better technology. But now you need six people where you previously needed seven.

I don't think that particular person is going to fire people, but they're not going to hire as many people. Attrition will take care of it. If you add that up across the economy - and I know some instances where it's not six in place of seven, it's one in place of 10 - that's a problem.

KIMBERLY NEVALA: If we think about this narrative that says, hey, automation, displacement, it's just inevitable and it will work itself out. The more I spend time on this, the more I realize how callous, or at least my perception is, how callous that perspective really is.

Are there fundamental aspects of this overall narrative, whether it's about how we look at corporate profits, whether it's about do we privilege human over machine priorities in work? You talk a little bit about I think machine usefulness versus intelligence. Are there key components of the narrative that we need to fundamentally shift or push back against to start to prime the ground for this shift?

SIMON JOHNSON: Well, I just happen to be holding - it's on my desk - a speech from 1929 by somebody called Magnus W. Alexander, who was president of the National Industrial Conference Board. He was a leading engineer-industrialist of his era.

In this speech, "The Economic Evolution of the United States: Its Background and Significance," he says that America is great because we work hard. We pay our workers well. They're extremely productive. And then they can buy the stuff that we produce, like the cars out of the auto industry.

It's a concept of shareholder capitalism, Kimberly, which has slipped away from us. Shareholder capitalism has become somehow much more about cut, cut, cut. Fire the workers. Lay them off. Reduce the cost. As opposed to: how do we have a stronger company because our people have acquired more capabilities that enable us to be more productive, and therefore, we're more competitive?

So I think, yes, we are calling for a change in how we think about capitalism and how we think about the market. One that is much more people-centric and much more about people as a resource, as opposed to labor as a cost. And I can tell you, as someone who works in a business school and talks to lots of people in this space, that's a big hill to climb. But we have to climb it, among other things.

KIMBERLY NEVALA: Is there an implication in there that back in the day, when we talked about shareholder capitalists, that some of the shareholders we were talking about were the employees? They were the workers, not just the investors, at the other end of the stick.

SIMON JOHNSON: Well, according to the Wall Street Journal today, Kimberly - it's on the front page – we have I think it's over 60% share ownership in this country. That is Americans who own some shares. That's the highest rate ever. Partly, that was because of COVID. People were staying at home, and they decided to play the market.

So many people do own shares. But the people who make the decisions have interpreted their mandate to be one of cut, cut, cut. Cut labor costs and outsource and automate as much as possible. Some of that is unavoidable. Some of that is commercial logic.

But where's the countervailing force? Where is the creative push to do whatever it takes to generate more good jobs? And to do that in a way that's sustainable, a way you can finance, a way that is consistent with innovation, a way that's consistent with American competitiveness in the world economy.

KIMBERLY NEVALA: Do you think that there was more intrinsic value put on the importance of human agency, human dignity in the workplace before? In that, as we get into this narrative where machines are as intelligent as humans - AI is as smart, AI can now pass the bar - does that further harm perceptions of worker value or just innate human value in the workplace?

SIMON JOHNSON: That's a good question. So let's be very clear. For a lot of human history there was not a lot of worker dignity. There were various forms of labor oppression, some of them more awful than others.

What happened was, during the industrial revolution, after productivity rose and after companies discovered that they wanted workers to have more skill and that could actually deliver value to them, and trade unions insisted on it, and there was political reform, we got more emphasis on human dignity and respect than ever previously. At least since the beginning of settled agriculture. But that's then slipped away from us, Kimberly. We have lost that since 1980.

So all of human history, not much dignity at the lower end. 5% or 10% of people do very well. From 1850 to 1980, perhaps, there's a rise of that dignity. But then there's an erosion of it.

And now you raise a very important question. Which is, if I'm using AI to upskill my lowest skilled workers in my call center or whatever, and I don't need, therefore, the more skilled workers. Well, that's a little bit troubling. That's going to affect something about the distribution of wages and so on and so forth.
But perhaps that stage is just an interim step to replace all the workers in my call center because that's the commercial logic. And, in that case, we're looking at something even more deeply disturbing with, yes, commensurately less dignity and respect, presumably.

KIMBERLY NEVALA: So there's good news and bad news in there, which I suppose --

SIMON JOHNSON: Well, that was mostly bad news. What I just said there was bad news.

KIMBERLY NEVALA: [LAUGHS] The good news is we're not actively decreasing how we value workers. The bad news is we've just never valued them very much to start with…

SIMON JOHNSON: Well, they were valued in the '50s, '60s, and '70s. Workers were more valued. We took our eye off that ball a long time ago, it turns out. But also, I think, Kimberly, without it being a conscious decision. It wasn't like a big political moment. Nobody said, let's rise up and crush all the workers.

No, no. That's not what happens. It was a lot of people doing their jobs, doing their jobs well. I mean, sometimes that's the most dangerous thing in human history.

KIMBERLY NEVALA: [LAUGHS] I think that's very true. You mentioned countervailing pressures. That this doesn't need to be inevitable, that this doesn't need to be the narrative that we lean into and, in fact, bring to fruition. There's that old meme about what you predict comes to pass.

So by having this narrative that tells us that automation, displacement, even quantification of the human is inevitable, we lean in, and to some extent, folks give up. And it can feel very overwhelming right now. There are so many systems working behind the scenes. Just trying to interact in digital life, the idea that we can push back and have a level of agency seems almost quaint, to some extent, to folks that I talk to.

What does positive countervailing pressure look like? And what are the mechanisms that we can use to promote broader engagement and discussion and more, I was going to say productive, discourse, but that may not be the right adjective in this context.

SIMON JOHNSON: Well, it has to be political, Kimberly. I think you have to start at the top and that's where the decisions are made.

The good news is that we have in the past had a robust democracy in the US. I mean, I understand there's a lot of shouting. But I'm an immigrant to this country and I like the American system. I find it much more open and much more accessible than where I came from.

There's also a discussion in Washington about what to do about AI and how to get more of the good stuff and less of the bad stuff. I'm not sure that this round will reach particularly helpful conclusions on that. In America, we don't tend to do preemptive regulation. We tend to be reactive. So I think we'll have to see more things happen and then respond to them.

But I am hopeful that jobs and job creation can be more central to the process. I know that we're 330 million people in, well, one of the most innovative large societies ever. And we live in a world of 8 billion people who have a lot of problems. So focusing our attention on innovation and creating the new, and then figuring out how to provide that to the world, sell that to the world, help the world. That's a very good statement of who we are and who we can become. That would have a lot of roles for people: fulfilling roles and roles across all income levels.

In addition, if we can find ways to use AI to augment human capabilities, including for nurses, teachers, electricians, plumbers, all kinds of professional activities and manual work included in that, then I think we're much more likely to have more shared prosperity and better decades.

KIMBERLY NEVALA: Do we have the civil structures, the political structures and the organizations in place today? Or are these things that we need to build back up and/or are there new "institutions” required for this?

SIMON JOHNSON: It's a good question. It would not be a surprise if we have to build some new things.

Data collectives, ways that we protect our data, ways that we establish our privacy, ways that we can control who uses our data to do what. Those seem like good ideas that don't really exist yet, not in a strong form.

I think within the union movement, focusing on how to develop technology in ways that are pro-worker. That's a little bit of a missing piece. The problem is if you just get higher wages - I mean, I'm not opposed to higher wages, as you know, that's a very good thing. But if all you do is get higher wages, you're raising the incentive for management to replace workers with machines. So what you want to have is workers being trained to use those machines. You also want technology to be developing in a way that's going to boost the demand for labor. That doesn't fall within the traditional union remit, but I think it's a very important element.

And more emphasis from technology creators, of whom there are many, not just big companies, on augmenting workers' skills with AI. That's a fantastic field to go into and something that I urge upon all MIT students of all kinds, for example.

KIMBERLY NEVALA: Are there key policy elements that you think are missing or require more focus these days? To, again, help us change maybe the narrative itself, and therefore, the objectives and incentives?

SIMON JOHNSON: Sure, absolutely.

I think we should do some more grand challenges on worker augmentation. DARPA, for example, had a big impact on self-driving cars with a small amount of prize money, but it changed the narrative. I think we should do something similar - not with the Department of Defense, necessarily - encouraging people to think about worker augmentation.

Safeguards on surveillance, particularly workplace surveillance, would be a very good idea. I think if it makes us safer at work, fine. If it makes people cut more corners, be more dangerous, have more warehouse accidents or car accidents while delivering things, then that's not acceptable.

And government needs to build up its own AI capability. To really bring AI into the delivery of government services, which would be very helpful to many people, and also help the government stay up with at least what's possible in terms of constructive use of technology.
Right now, most of the AI talent has gone to work in the private sector. And I don't mean in the universities even. I mean in the private sector, the for-profit sector. That is a bit of a problem when it comes to having sensible discussions about what can happen and what should be allowed.

KIMBERLY NEVALA: I'm going to be good: I could have you here for an hour on each one of these things we've touched at a very top level. My last question will be: what is not being asked? What question do you think needs to be asked or discussed more thoroughly or with more vigor as we move forward?

SIMON JOHNSON: I think a very good question, Kimberly, for people like you to ask is, ‘who is not at the table’? Who is going to be really affected by this or who is going to miss out on an opportunity or be crushed because we never listened to their voice?

We can see that so many times in the past with new technologies. Sometimes it was deliberate. Sometimes there was oppression. I understand. But a lot of times it was just like, oops. Oh, sorry. Didn't invite you. Forgot that you might be interested in this. Didn't know you cared. Everybody cares about technology. Everybody would like to get something.

This issue of design and deliberate design: what is it you want technology to achieve? How can technology help you? Bill Gates likes to say that he's never met a problem he can't solve with technology. I think our problems are much more sociological than technological. But I do think he's right that bringing a technology with the right political support, the right buy-in from civil society, the right marketing, perhaps, that can be quite transformational. But for whom? So who wants what from technology, and how do we get their voices heard? There are a lot of legitimate voices that have interesting, important ideas about either using technology better or using it more appropriately, using it more carefully.

We haven't mentioned Rachel Carson in this otherwise far-ranging conversation, Kimberly. Rachel Carson's Silent Spring is, to my mind, one of the most important books of the 20th century or maybe any century. Because what she pointed out was not only had this well-intentioned technology around pesticides and DDT gone massively off the rails, but the government was absolutely in cahoots with industry on this and was ignoring even its own studies because…well, she explains it in the book.

When the book came out - as you know, that was the end of the '50s and early 1960s - that was the heyday of techno-optimism of a certain kind in the United States, a kind that was sort of broken by the Vietnam War, actually. But it's back in a different guise. A techno-optimism where we say, look, we'll do this technology, invent these things, it's all going to be great, don't worry about anything. That's not how it's worked out in the past. There are, at the least, unintended consequences: see Rachel Carson. There may be a lot of missed opportunities. There may be a lot of people who are crushed by the way we use technology.

This is not 1800 or 1850. We're a lot richer. We have more things. We have more wisdom, supposedly. Can we use that wisdom to find a path that has more robust, shared prosperity? I don't see why not.
KIMBERLY NEVALA: Well, we'll end on that note. It seems like a nice positive point. I suppose it circles all the way back to what you referenced in the beginning, which is bringing more diverse perspectives. Getting more people to the table ensures that we're establishing a vision and an agenda that, in fact, does work for all and not just the privileged few driving the development of the tech itself.

SIMON JOHNSON: Yes, absolutely.

KIMBERLY NEVALA: Awesome. Well, thank you, Simon. These were fascinating perspectives and timely critical cautions on the prevailing narrative shaping AI discourse today. I am heartened by the work you and others like you are doing to ensure that we chart a course that, in fact, benefits all and not just the privileged few. Thank you so much for both the book and coming on today to share your perspectives.

SIMON JOHNSON: My pleasure. Very nice talking with you.

KIMBERLY NEVALA: To continue learning from thinkers such as Simon about the real impact of AI on our shared human experience, subscribe now.