Pondering AI

A retrospective sampling of ideas and questions our illustrious guests gifted us in 2025 alongside some glad and not so glad tidings (ok, predictions) for AI in 2026.

In this episode we revisit insights from our guests and, perhaps, introduce those you may have missed along the way. Select guests provide sparky takes on what may happen in 2026.

Host Note: I desperately wanted to use the word prognostication in reference to the latter segment. But although the word sounds cool, it implies a level of mysticism entirely out of keeping with the informed opinions these guests have proffered. So, predictions it is.

A transcript of this episode is here.   

Creators and Guests

Host
Kimberly Nevala
Strategic advisor at SAS

What is Pondering AI?

How is the use of artificial intelligence (AI) shaping our human experience?

Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.

All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.

KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala.

Now as some of you know, hosting the pod is the best job I never wanted. Which is all down to my fabulous production team and, of course, our illustrious guests. Who not only answer my calls but take the time to share their unique perspectives and work in AI with all of us. Despite my sometimes awkward questioning.

So in this episode, which is our last for 2025, we revisit some of the insights gifted by those guests and find out what they think may happen next.

So to start, here's a small sampling of what they had to say this year.

ASH WATSON: There remains such a gap between how we're talking about technology, what the promise of technology seems to be, and the actual realities of the material, current worlds that we live in today.

Really meaningful development and innovation can happen when we, yes, get outside the lab, get outside these often traditional ways of working that designers can have where they have this great technology. And they go in search of a problem to apply it to.

HIWOT TESFAYE: I prefer to participate in conversations around inclusivity that assume there is value and beauty, I guess, and unexpected innovation that can occur in any sort of place and across many groups of people.

So I think it's just important to-- at least for me, it's always important to stay curious and not make these wild assumptions about where value can be generated or where beauty can come from. I believe it can come from anywhere, really.

It's just a matter of, do people have the right serendipitous opportunities? And are they in the right place at the right time to generate that value, or to be seen by the people that can either elevate them to the next level or just acknowledge the work that they're doing.

PAULA HELM: There are a lot of tricky questions to be asked also with these matters. And they all in a way boil down to the issue of power asymmetries, power inequalities.

If, now, you are a big tech company and you want to bring your technology to people around the world, and you don't involve the speakers themselves, then you really risk going over their epistemic self-determination: their capacity to determine for themselves how they want to express themselves, how they want to know the world, and how they want to make themselves understandable to others.

KATI WALCOTT: There are some places where we can see the early signs of feudal economics, where compute lords own the land. So there are the digital producers, the data producers, who are digital peasants.

I think we need to break the cycle. And we need to require an enforceable rights framework at the data level, not just regulation on top, because we need to make sure that AI does not become a technological monopoly but an opportunity for economic sovereignty and an equitable distribution of the opportunity and wealth that comes from this incredible technology.

PIA LAURITZEN: But each of us has things that call for thinking. And for me it's important to keep that in mind: that there is a what that needs to be thought about. And then there are lots of whats that are just competing for our attention. And we should be really good at saying thanks, but no thanks, because this is what calls for thinking.

And the second thing we should be asking - also as Heidegger put it - the next question we should ask is, so who can we think with? So what calls for thinking? And who can we think with? Because what calls for thinking cannot be thought about with AI. It calls for another human being.

ERYK SALVAGGIO: There's a couple of questions I think that are really important.

And one of them is, what are we actually referencing in these things? When we are relying on an algorithm to answer a question about the world, how much of the answer is the algorithm? And how much of it is the world? I think that's a really important, critical question to be asking in all kinds of ways.

The other question I think that's really related to that is: what are we doing that we are not doing when we use AI? Where do we get lost, right? What is getting taken away from us when we rely on the algorithms to fill a blank page on our behalf? And where might we find some confidence and courage to express that agency, even if it is faulty, even if it is flawed? Where can we engage other people to strengthen that, as opposed to relying on, fundamentally, an abstraction of the average, right?

HELEN BEETHAM: And the mantra is, well, teachers can't be there all the time. Nurses can't be there all the time. But the robot, the language model is never asleep. It's always there. And that's the kind of mantra I think we should really problematize because, yeah, there is something always there that can't be turned off, that never sleeps, that never forgets. But that's not care. That's not teaching. That's not attention. And these are the things that don't scale. And let's value them more for seeing how poorly these systems that have had more dollars thrown at them than any technology in the history of humanity, how poorly they do at those things.

HENRIK SKAUG SÆTRA: I think it's trying to take back some sort of relationship with the concept of democracy. What we think democracy is and think about why we think it's important, if we think it's important. Because it's become something of this euphemism for everything good, which is why people say that we can have democratic AI. We can democratize everything. We can do everything, kind of. Democracy is good, without ever really thinking about anymore why it matters. And what's at risk of being lost if we don't have the kind of democracy that we once fought hard for?

And then doing what we can as individuals, of course, to choose the platforms and services, and to promote the technology we think is good and conducive to it, will be important. And then to try to imagine how we can use technology in better ways that don't diminish or narrow down democracy into something quite technocratic.

DIETMAR OFFENHUBER: But for me, it really comes down to widening our understanding of data a little bit more.

That data is not always a digital file. It can be a physical artifact. It can be a physical trace. Data is not always a data set. It is not always something to be analyzed through statistical means. It can be something that plays a role in how people debate a certain issue in all kinds of other ways.
Data can be provocation. Data can be a prompt. Data can be a counterfactual scenario.

So thinking about data a little bit outside of these very narrow confines, I think, is one way we can talk about all of these scary aspects and dangers with maybe a little bit more nuance.

SUSIE ALEGRE: I think one of my biggest concerns is the creation of dependencies unthinkingly.

And so what I hope is that this will allow a pause for thought to think about, where does AI really add value? How can AI be sustainable? And that's not just profitable. It's actually sustainable and actually useful for us and for future generations. And I think that does require a really hard pause in certain areas to really resist the siren call of innovation, to stop and think about actually, what innovation do we want?

So my hope is that by turning this lens to think about people and to think about human rights and to think about humanity, that we might start asking those big questions and prioritizing people before technology.

MICHAEL STRANGE: This is a cultural shift as well, right? Because a lot of the discussion around ethics and trust, it's seen kind of like having to do your homework. Something frustrating you have to do when you'd much rather do other things.

But if we can find ways in which it can actually empower the technology, can improve it. Think about it in terms of, how does it make the technology more resilient to a changing world? How can you make it more adaptable to very different contexts, very different populations? Then you can also see a way in which you can, in effect, monetize this. You can show there's an economic advantage.

STEVEN KELTS: Where the rubber meets the road, where the ethical criticisms that we all are attuned to, where that meets the road, where risk management frameworks meet the road is really with the dev team itself. So empower your engineers to have these discussions.

Those who are invested in it will show themselves. They'll help to steer the process in new and better ways. They'll become better at foresight and anticipation of potential social problems down the line. And hopefully we'll see a better product environment in the future because of that.

RAVIT DOTAN: And I think we need to dive more deeply into, quote unquote, the "less technical" end of things and more the user end of things.

What kind of processes would it be legitimate for me to even build? And I want to go back to something I said earlier on this call. Use cases. What should I build? And what should I not build? There should be a conversation about that because building processes is no longer the territory of only engineers.

ROBERT MAHARI: We're probably all, as a society, going to have to become more comfortable with probabilities because you hear this thing all the time where it's like, well, we can't guarantee it. So we won't do it at all. And that to me is saying, like, well, there's a chance that your climbing rope will snap. So you should probably climb without one. It's like, really? That's your conclusion?

ANDRIY BURKOV: Every time it tells you yes or it tells you no, it's a gamble for you as a user whether you trust it or not. Because when I use it, for me, it's always a question of how big a problem it would be for me if I blindly trusted this answer.

PHAEDRA BOINODIRIS: I think people don't understand how hard it is to get right. I think that is key. And why most of this conversation has been about education and about literacy and about culture. People just don't understand what it takes to get this right.

And I think now that we're shifting into the next stage, which is agentic AI, where you have an AI that acts as an agent on your behalf, it might take as an input the output from another model. You have fewer humans in the loop, less potential for human oversight of these different models. It's even more critically important that we talk about things like accountability.

And I can also tell your listeners that what we've been describing, what we've been talking about, like, this affects everybody.

REBECCA PORTNOFF: What I can say is that when it comes to this question of who's responsible, I think the answer has to be all of us, that all of us are responsible.

And the level of responsibility that you have is not tied to how much harm you've done, necessarily, but how much power you have. What is your position in this society? What is your position in this world? How much are you able to enact and make happen?

And that's something that I really want to continue to bring forward in these conversations is that there are so many practical and tactical things that all of us, and especially these tech companies can be doing to prevent this kind of misuse.

RYAN CARRIER: And when we demand responsible tools, when we demand large language models, ChatGPTs, that don't hallucinate, that don't engage in false sources, that robustly disclose to us what the risks are, then we're going to get better results. And so there's a lot of power in the retail aspect.

But it's not something we feel, right? Because it's one little person. You're like what difference can I make? And the answer is, well, you can make a difference if all the people around you are doing the same things. And we eventually get a better result.

OLIVIA GAMBELIN: What do you value?

And instead of discrediting that, instead of putting it to the side, instead of saying, I'll do that in 60 years or hopefully that'll become a part of it, ask yourself, how does that reflect into your work? How has that led to your current success? And how can you lean into that more? And that's really where I see people both get that joy out of their work but also build really amazing companies and products.

KIMBERLY NEVALA: 2025 has been tumultuous, to say the least. And 2026 is shaping up to be every bit as interesting. Here's what some of our guests think might happen next.

JORDAN LOEWEN-COLÓN: I mean, it's like, I love this question. And I'm trying to rack my brain now. Like, yeah, what would I offer? Because I'm not necessarily, like, the most intense of doomers. There's a lot of folks out there that would offer, like, doom level prediction. I'm also not an intense hyper-optimist. What would I like to see? What would I like to see?

In a weird way, I mean, how dreamy. I was going to say, world peace! I was going to say I'd like to see things slow down. Like, the promise of AI scaling as rapidly as they say it is actually slows down. And things stall out. And the stall creates time for the US and China to think more collaboratively about how to approach this. And it seems like China is almost a little bit more open to this at this point right now.

But we open the door for communication and setting some global standards about what it is we care about and preventing potentially the worst possible outcomes. But I think that's way too pie in the sky and impractical and unlikely. So I would put that as happening as maybe 20%.

But what's a more likely prediction? I think probably a more likely prediction is that a major company - either banking, fintech, or health company, possibly, like airlines - one of the major industries is going to have just a major, major AI catastrophe. Either where folks die, like, human lives are lost or significant financial impacts happen. Some bot goes haywire and starts selling stock in a way that it shouldn't or the market gets disrupted. But I think it's going to be relatively isolated.

I think my prediction would be it would just be one company and not, like, a whole industry. At least I hope so because I'm like, if a whole industry ends up tanking, then we're in trouble for a whole lot of ways. But I think it's going to be as kind of, like, a canary in the coal mine. And so my hope would be that that wakes and shakes people up, especially in policy and government to start possibly regulating it a bit more. And I'll put that at 50/50. I'll give that a 50/50 shot.

MASHEIKA ALLGOOD: I don't think it's rosy. [LAUGHS]

But I think we'll see a lot less data center activity and unfortunately a lot more economic pain because of it. I think we're over-inflated. And I think 2026 becomes a year of correction when it comes to data center builds.

And so what I would like to see is innovations around the heat exchange. If we can create new processes that allow for heat exchange that is not water-intensive and also not, in some other way, pollutive or extractive and doesn't require a lot of electricity to run.

If we can somehow start innovating around these areas where data centers are problematic. So innovating around how hot the chips run so that they don't require as much water or as much electricity. Or if we can innovate and find new ways to build or focus on old ways to build AI technology that don't require as much compute power.

I think those kinds of innovations would be very welcome. And I think they would also drive less data center activity, but hopefully less financial pain from that loss of activity.

MAXIMILIAN VOGEL: I think that the bubble will at least deflate, maybe not completely collapse. But it will be deflated.

Not everybody will talk about agentic AI. But, maybe on the other side, more companies will think about really implementing stuff and not talking about it and having this inflated expectation. But implementing small things to get into it and be part of the 5% or something like that. This will be one thing.

I have a little bit of a more optimistic and maybe more futuristic prediction. Maybe not in '26 but at least in '27 we will find mechanisms by which AI agents will partly build themselves. That there will be less engineering. At the moment, the costs for AI agents in smaller use cases are prohibitive. We are thinking about that. And others are thinking about that as well.

And there is one more thing which is maybe even a little bit more futuristic. Many people don't know that the models don't learn at the moment. They only seem to learn because they have the conversation history in their prompts. And maybe OpenAI and co save some things from past conversations and put them into the history.
If we had models which could do a little bit of more advanced learning. For instance, an agentic process that, when some processing fails, gets a message from the supervisor that this wasn't correct and builds that again in an automated fashion, without anybody doing prompt engineering or so on its agentic system. That would help greatly. There are ideas for how to do that, but it's not product-ready at the moment.

KEREN KATZ: I fear we will see a real big AI security breach. I hope that I'm mistaken.

But due to all the factors that we've seen lately, I feel like it is coming. And it will be published. And people will be talking about it. I hope that I will see more and more vendors getting educated about AI security and getting a plan for what to do when an AI security incident happens, because it's already coming. And it's already here, I think, for sure.

But I feel like people and professionals are trying. I see practitioners really trying. And they are confused by the vast amount of information you get online. So my one piece of advice would be to just stay focused. Choose your one source of truth. OWASP is great. CSA is great. One professional or practitioner you believe and trust. And then follow him or her and just go with one strategy to deal with it instead of just running around and being afraid of it.

CHRIS MARSHALL: I think it's quite likely there will be some sort of minor correction. I have no idea how big the correction will be. I think people should expect it because we've seen it with virtually every technology in a revolution over the last 50 years. So expect that.

But I think keep in mind that the benefits of AI will take longer than you think. They are real. But they will take longer than you think. And there's a sense of having to be in there for the long term if you're going to be serious about deploying the technology at scale.

KIMBERLY NEVALA: Right or wrong, ready or not, here 2026 comes.

Before signing off to greet the New Year, I want to give a personal shout-out to our amazing production team. This includes Maddy Werner who keeps this train on the track, Jessie Olley who makes us all look and sound great, Johnathan Eshleman who makes it all public, Michael Penwell who wrangles YouTube, and Evan Markfield who helps with all of the above and blogs about it to boot.

I will also be eternally grateful to Chrissy Richardson who composed our original soundtrack and worked ceaselessly in the early days and years to make us look and sound great.

So to continue this adventure with us in 2026, subscribe now. You'll find us wherever you listen to podcasts and also on YouTube.