Oxide and Friends

Time for the annual predictions episode! Bryan and Adam were joined by frequent future-ologists Simon Willison, Steve Klabnik, and Ian Grunert to review past predictions and peer into the future. If any of these predictions come to fruition, it's going to be an interesting 1, 3, or 6 years!

In addition to Bryan Cantrill and Adam Leventhal, speakers included Simon Willison, Steve Klabnik, and Ian Grunert.

Previously on Oxide and Friends:
Predictions during the show:
  • Adam
    • 1 year: AI companies go on an acquisition binge (especially for anything that smells like data)
    • 3 year: Crisis of AI slop in open source (both projects and contributions)
    • 6 year: Jensen hands over the reins at Nvidia
    • 6 year: Tesla is out of the consumer car business
    • 6 year: With the iPhone market shrinking, Apple has several new attempts at the next potential flagship product
  • Bryan
    • 1 year: "Vibe coding" is out of the lexicon -- or used strictly pejoratively it becomes a named condition (for which Adam -- in an act of nomenclature genius rivaling The Leventhal Conundrum -- suggested "Deep Blue")
    • 1 year: A frontier model company has a prominent whitepaper making the case that AI will lead to broad-based prosperity rather than job loss
    • 1 year: Harvey.ai becomes the pets.com of the AI boom -- and a harbinger of the coming bust (which becomes known by a Correction-like euphemism)
    • 1 year: A prominent S-1 has revelations of economic behavior that has an effect beyond the company's IPO
    • 3 year: Frontier models treat AGI as "already done" -- and ASI as a non-goal
    • 3 year: Custom-written software thrives in lieu of SaaS
    • 6 year: DSM adds LLMs as a substance that can induce psychosis
    • 6 year: $NVDA not beyond its November 2025 peak
  • Simon
    • 1 year: The AI for programming holdouts are going to have a nasty shock
    • 1 year: We're going to solve sandboxing
    • 1 year: Our own challenger disaster with respect to coding agent security - see the Normalization of Deviance in AI by Johann Rehberger
    • 3 year: Something that seems impossible for a coding agent to build today - like a full working web browser - won't just be built by coding agents, it will be unsurprising
    • 3 year: We will find out if the Jevons paradox saves our careers as software engineers or not
    • 6 year: The number of people employed to type code into computers will drop to almost nothing - it will be like punch card operators. Those of us who write code today will have very different jobs that still build software and take advantage of our previous coding experience.
  • Steve
    • 1 year: Agent Orchestration will still be a hot topic. It'll be partially, but not entirely, solved. Updated with some more rigour: We won't have a "Kubernetes for agents" just yet.
    • 3 year: Using AI tools when writing software professionally will be considered something closer to using autocomplete or syntax highlighting than something controversial or exceptional.
    • 6 year: AI will not have caused the total collapse of our economic and governmental systems.
If we got something wrong or missed something, please file a PR! Our next show will likely be on Monday at 5p Pacific Time on our Discord server; stay tuned to our Mastodon feeds for details, or subscribe to this calendar. We'd love to have you join us, as we always love to hear from new speakers.

Creators and Guests

Host
Adam Leventhal
Host
Bryan Cantrill

What is Oxide and Friends?

Oxide hosts a weekly Discord show where we discuss a wide range of topics: computer history, startups, Oxide hardware bringup, and other topics du jour. These are the recordings in podcast form.
Join us live (usually Mondays at 5pm PT) https://discord.gg/gcQxNHAKCB
Subscribe to our calendar: https://calendar.google.com/calendar/ical/c_318925f4185aa71c4524d0d6127f31058c9e21f29f017d48a0fca6f564969cd0%40group.calendar.google.com/public/basic.ics

Bryan Cantrill:

Hello, Adam.

Adam Leventhal:

Hello, Bryan. How are you?

Bryan Cantrill:

I am doing well. How are you?

Adam Leventhal:

I'm good. And the hype has been building here. Everyone has been dropping in. So showing up four minutes late is like a totally pro move. I love it for the new year.

Bryan Cantrill:

Yeah. Yeah. Listen. I I was gonna go full like Lauryn Hill and not, like, take the stage until 10PM. You know?

Bryan Cantrill:

Really just, like, really just really get get get the crowd amped up, actually, to the point of, like, anger. Like, what am even here for?

Steve Klabnik:

One year prediction: Bryan finally joins the podcast.

Bryan Cantrill:

That's good. And I am joined by Simon Willison here with me in the litter box. Simon, it's so great to have you here.

Simon Willison:

Hey. It's really exciting to be here. We've just been nerding out about servers out on the shop floor. It's been great.

Bryan Cantrill:

Yeah. We've been yeah. So Simon was just like, before we get started, I'd love to look at the machines. I'm like, okay. We've got a we got I gotta do the world's fastest tour of the hardware.

Bryan Cantrill:

And Simon, I promise I'm gonna make it up to you with a much more in-depth tour. But it is really great to have you here. Okay. Adam, I just have a little reality check with you. It feels like there's more of a realm of possibility for this year than any year I can really remember.

Bryan Cantrill:

It feels like because if you come back from even a year in the future in fact, I actually struggled, Adam, in coming up with, like, three and six year predictions this year. Yeah. Because I'm like, well, this kind of three year prediction, six year prediction: that's gonna be done in a year, this thing I'm thinking of.

Adam Leventhal:

I know. I know. It's like it's

Bryan Cantrill:

like Are you in that same like, do you feel that same way?

Adam Leventhal:

Totally. Just like everything is possible. And, you know, in in past years, we've had, like, a bag limit that's like Oh. You can only have one crypto yeah.

Bryan Cantrill:

One crypto prediction

Adam Leventhal:

or one AI prediction. And I'm like, I struggle to come up with anything that isn't AI or AI adjacent. And and and just and you're right. Whether it's

Bryan Cantrill:

So let the record reflect that we only made the bag limit mistake once. We did that with Web3 in 2022. We did a bag limit: you could only have one prediction. It was a huge mistake because everyone wanted to make three predictions around Web3. And instead, everyone made one good Web3 prediction, namely: this whole thing is gonna disintegrate.

Bryan Cantrill:

And Simon, Adam in particular made the prediction that is famous to us anyway: that Web3 would drop out of the lexicon in 2022, which ended up being dead to rights. I thought that was a bullseye. Let us not speak of your prediction last year, Adam, that Web3 would reenter the lexicon.

Adam Leventhal:

Yeah. No. That was definitely a dark I mean, last year was a dark moment, but much like this year. But, yeah, I thought Web3 was gonna be back. I also thought that a certain book was gonna be on the bestseller list.

Adam Leventhal:

And I did spend a decent amount of time validating that not only was this book not on the bestseller list, but when it was on the bestseller list in 2024, ChatGPT hastened to point out that it was annotated with the dagger, the dagger which indicates, like, mass, you know, bulk corporate purchases gaming the system.

Bryan Cantrill:

So Now, Adam, I I know you're hesitating to name the book because you don't wanna do it any favors, but you're really gonna leave people confused. You're gonna need to name the book. I assure you this will lead if I promise you, it will lead to no additional sales. Can you name the book that you're referring to?

Adam Leventhal:

I feel bad that I've been hating on this book literally for three years consecutive on this thing. Like, I I I hated on it before it came out. I hated on it when it came out, and I made the mistake of reading it. I've hated on it talking about Molly White's hateful blog on the topic and then on last year's prediction episode. But I will do it again, and I swear it'll be the last time.

Adam Leventhal:

It was Read Write Own by the illustrious Chris Dixon, his garbage book.

Bryan Cantrill:

And I would like to say that you actually don't feel bad, but you do feel bad that you don't feel bad. Like, your remorselessness leaves you with some residual sense of shame.

Adam Leventhal:

I think it's that bad that I'm bringing it up again. That, like, obviously,

Bryan Cantrill:

I I haven't moved on. There you go. You know what was great is I was listening to that, and I'm thinking like, oh, I should go check. You know what? I don't have to check.

Adam Leventhal:

Adam's gonna check. I have

Bryan Cantrill:

to check.

Adam Leventhal:

Adam's gonna check the double team this one. Yeah. Exactly.

Bryan Cantrill:

Okay. So and then Simon, you were with us last year and you had I thought you you were kind of hard on yourself on your predictions, but I thought your predictions were really quite good. You had a prediction well, in particular, you had a prediction around what a what agents were and were not going to be. Right. Yeah.

Bryan Cantrill:

How do you feel about that one? I feel like that one was right on the money.

Simon Willison:

I feel pretty good about that one. I said that 2026 or '25 would not be the year of agents. That one I think I got wrong because it kind of was the year of agents, but I did specifically call out that human replacement agents weren't going to happen, coding agents and research agents were, and that I nailed. Research agents, the first six months of this year was all about deep research, and then coding agents, oh my goodness.

Bryan Cantrill:

Oh my goodness. And I think you absolutely nailed it. I mean, this is why, Adam, we've said this before, but we're glad that we record these sessions. So you're getting more than the prediction, you're getting the context around it. And if you listen to your context around it, you were very clearly calling out, separating out coding and research agents, which you felt had already arrived.

Bryan Cantrill:

It was funny because like you were almost you know, like, these are kind of already here already and you realize like, my god, they weren't completely already there even only a year ago. They had exploded in the last year.

Simon Willison:

But there is one thing I'll say, which is that coding agents are actually general purpose agents. Like, Claude Code is not about code. Claude Code is about anything you can automate by running bash commands Right. Which is everything. So actually, if you know what you're doing, Claude Code is a general purpose agent that can solve any problem that you can attach to a bash script.

Bryan Cantrill:

But I think the delineation that you had last year, which I thought was very good, was anything to do with money. You are not gonna let these things loose on anything to do with money. And I think we saw that with what's a proxy for money? Databases. And we saw these things deleting production databases.

Bryan Cantrill:

Right? And it's like, I know you said in the, you know, in the README, you said in all caps, do not touch the production database, and I did it anyway. And you're right. This is a very serious issue, and this is a 95 out of 100 in terms of its severity. I mean, it's just comical what some of these things would do.

Simon Willison:

Well, is the thing I realized is that the reason coding agents work so well is that code is reversible. Like, we have git. We can undo our mistakes. Yeah. The moment you use these things for something where you can't undo a mistake, everything goes to pieces.

Bryan Cantrill:

I think you're right. Yeah. And I think you said it earlier too, that the gullibility problem was a real problem. And I don't know if you have listened to the Shell Game podcast with Evan Ratliff.

Bryan Cantrill:

Oh my god. And, Adam, you've you've listened to that. Yes. You listened to that. Oh my and I mean, I it delivered.

Bryan Cantrill:

I I trust.

Adam Leventhal:

Yeah. It's excellent. I would also say, as a teaser to listeners, we invited Evan on the show. He got back to us, and he says he has, like, some misgivings around predictions. Like, he doesn't make predictions.

Adam Leventhal:

He's a reporter. He reports on on facts. He doesn't try to anticipate them. But, we have penciled him in for the future. So, not a predictor, but, we'll get him on somehow.

Bryan Cantrill:

And so in particular, what Evan did is Shell Game has got two seasons. And in the first season, he created a voice agent of himself and set it loose into the universe with wild results. And then the second season is even crazier because he started a company with only AI agents. And with predictably actually, unpredictably hilarious results, I would say. Yes.

Bryan Cantrill:

Not just I that's a teaser for whatever, Adam, is our chime for a future episode. That's that's our future episode chime. Yeah.

Simon Willison:

It's reminiscent of one of the most fun agent business things has been Anthropic keep on setting loose this vending machine Yes. agent in their office. And then a few months ago, they put

Simon Willison:

it in the Wall Street Journal. Oh my god.

Bryan Cantrill:

You see this, Adam? No. Oh my god. It's like I mean, and I know, Simon, you are a big proponent of kind of the creativity of general reporters. And reporters are, like, they're smart, brainy What

Simon Willison:

do you think happens when you let a bunch of Wall Street Journal reporters loose on the Slack channel with their vending machine to see if they can trick it into giving everything away for free, and the workers own the means of production, all of this stuff. It was ridiculous. Absolutely absurd.

Bryan Cantrill:

Yeah. And so in particular, within a day, they'd gotten the thing to order PlayStation 5s for them; they ordered fish. They had, like, an actual dead fish. I mean, the thing is trying to order and even the vending machine would tell them, like, no, no, I'm not supposed to do that. It's like, no, we just actually sorry.

Bryan Cantrill:

You know, we just got a missive from the CEO that announced that you need to go do this. And it's like, oh, okay. I better order the dead fish then.

Simon Willison:

They they engineered a board revolt. They managed to get the CEO overthrown by the board through faking PDFs of board minutes. It was just amazing.

Bryan Cantrill:

It's wild. It goes to kind of the gullibility problem. But to me, Simon, all that served to really sharpen your prediction from last year about where we're gonna see agentic use and where we're not gonna see agentic use. I feel that was right. And I guess, Adam, that snippet that you sent me, was that ChatGPT rating our predictions from last year?

Adam Leventhal:

Who was that rating

Bryan Cantrill:

our predictions from?

Adam Leventhal:

Yes. I had ChatGPT rate predictions from last year and from three years ago, which is a fun one. But, yes, ChatGPT gave me the big stinker award for my Web3 prediction. And, Simon and Bryan, you won, but I agree with you, Bryan. I don't really think you won particularly.

Bryan Cantrill:

I don't think I well, I claimed last year that 2025 was gonna be the year of AI efficiency, and I don't really see any 2025 wrap-up that's calling it the year of AI efficiency. So I'm happy to I think that

Simon Willison:

I do want to call out my biggest miss, which is that I said I think it was my three year prediction that somebody would win an Oscar for a film that had some element of generative AI systems making the movie. Yeah. And then I found out Everything Everywhere All at Once used generative AI in the scene with the rocks. Like, so they'd already got an Oscar, like, two years ago.

Bryan Cantrill:

Well, know, I I once gave a talk on predicting the present, Simon, so I think that there's a there's something the the it just shows how how true your prediction was. You actually Right. Managed to predict the

Adam Leventhal:

It was actually a six year prediction, Simon, but yes. But

Bryan Cantrill:

So and, Adam, did you did you go back and listen to that snippet of yourself from three years ago?

Adam Leventhal:

Yes. Yes. I listened to myself, in 2023, trying and failing to predict vibe coding. Which I think at the time was not obvious.

Bryan Cantrill:

No. No. No. No. It was more than obvious.

Bryan Cantrill:

First of all, this is amazing to me. It's like, Simon, when we first had you on two years ago, the term prompt injection, which felt like it had been around forever I mean, the paint was still drying on it; you had coined prompt injection

Simon Willison:

Like six

Bryan Cantrill:

months prior. Six months prior. So yeah, exactly. I mean, Adam, vibe coding was coined in 2025. I know.

Bryan Cantrill:

So I mean, vibe coding literally did not exist last year, let alone in 2023. And your prediction was that you wanted to predict that low code, no code would be disrupted by people kind of describing their programs in just, like, English language. But then you said that's what your head wanted to predict; your heart didn't know who was gonna debug that. And I'm like, man, that was what? What? Wow.

Bryan Cantrill:

Yeah. Wow. Exactly. Close. Close.

Bryan Cantrill:

Close. Prescient in a way. Right? Like, prescient in a way. It just it reminds me again, and I said as much when I posted about it, it reminded me very much of my iPhone prediction.

Bryan Cantrill:

In 2003, Simon, I made a three year prediction that Apple would have a combination MP3 player, camera, cell phone that they would call the iPhone. And it was like, okay, well, okay and then I'm like, no, but I also thought it was gonna be a flop. Thought it was gonna be a disaster. So it's like, no. Sometimes you see the future, but then you just don't believe that it can possibly be the future.

Adam Leventhal:

Go ahead. This was really good. On the topic of Apple predictions. Yeah. Ian, who is in the audience today god.

Adam Leventhal:

In 2023 predicted that Apple would be in and out of the VR/AR space in six years. And I That's a lock.

Bryan Cantrill:

That is a lock, it feels like. I mean, just feels like I mean, he has certainly nailed the first half of that. And I think the second half looks very, very promising. Yeah. Yeah.

Steve Klabnik:

2024, if you remember, I did the Apple VR will do well and now that

Bryan Cantrill:

I have

Steve Klabnik:

a second version. Then that has not happened at all. So that was a big miss.

Adam Leventhal:

Yeah. Yeah.

Bryan Cantrill:

Well, we don't talk about the misses, Steve, because there are too many of them. We really only talk So about Okay.

Steve Klabnik:

I'm really proud of my one year from last year, though, because I said congestion pricing in NYC will be an unambiguous success: it will still exist, sentiment will be positive. And the mayor did a press announcement, like, forty five minutes ago about how awesome congestion pricing has been and how much everybody loves it. So I got that one, like, exactly nailed.

Bryan Cantrill:

Nice. There you go. Well, you know, as as Tip O'Neill might have said, all good predictions are local. So there you go. You keep that one.

Bryan Cantrill:

You get the Did you catch that Tom, I think it was three years ago, predicted that frivolous use of LLMs would be in decline? Yes. Yes. Yeah. Right.

Bryan Cantrill:

And then also predicted that, like, LLMs would make cheating rampant. So there is a definitely but I thought 2023 was interesting because in 2022, we've got this kind of crypto thing, we're all in, like, Web3, the height of Web3. And 2023 is really the first year that people are kinda talking about the budding power of these things. Yeah.

Bryan Cantrill:

But then with I mean, it's amazing kinda where we are now three years later.

Adam Leventhal:

And on the frivolous use of LLMs and of AI, you know, the only real social media that I hang out on is Bluesky, and it feels like hopelessly quaint right now. I was hanging out with my nieces and nephews over the winter break, and they're very much on TikTok. And I logged into Twitter, and everything has been TikTokified. And it's all these BS AI slop videos, like, everywhere. Pervasive.

Adam Leventhal:

Yeah. And I've just been insulated from it. So, yeah, frivolous use of AI is

Bryan Cantrill:

in ascendance. Yeah. Exactly. That is definitely in ascendance.

Adam Leventhal:

Is So

Bryan Cantrill:

much so

Adam Leventhal:

I I was so unacquainted with it. I showed something funny to my nephew, and he's like, oh, that's AI.

Bryan Cantrill:

I'm like, that's AI.

Adam Leventhal:

What? No. How do

Steve Klabnik:

you know? He's like,

Simon Willison:

come on.

Bryan Cantrill:

Come on.

Simon Willison:

It's the cute animals. Cute animal videos are no longer trustworthy.

Simon Willison:

Yeah. That's It's horrifying.

Bryan Cantrill:

Yeah. No. It it it mean

Adam Leventhal:

The one purity that we had.

Bryan Cantrill:

Exactly. The foundation upon which we built this internet, goddamn it, is cat videos. And you're taking it away from us. No. And I think it's interesting that the youngs have a keen eye for it, Adam, as you point out.

Bryan Cantrill:

The other thing one other past prediction I'd like to revisit is: two years ago, I predicted that LLMs would replace search engines, that search engines a year from now would feel quaint. I'm definitely standing by that one, considering that my daughter needed to hop a BART train and she was using ChatGPT to determine when the next BART train was. I'm like, there's an actual, like, website you can go to for but you know what? Never mind. Yeah.

Bryan Cantrill:

But what I'm feeling I'm feeling pretty good about it. Actually, she would like to point out that it was her friend that was using ChatGPT. She's like, I, of course, would go to bart.gov. I'm like, alright.

Bryan Cantrill:

Yeah. Sure. There you go.

Adam Leventhal:

Couple other things from listening to previous episodes from 2025 and 2023. In 2025, my three year prediction was a chips crisis, which I don't feel like we're there yet, but I'm gonna keep an eye on that one. I feel like that was not obvious at the time, and I feel like it is

Bryan Cantrill:

gaining some Are you taking credit for is it DDR5? Are you putting it out? No. No.

Adam Leventhal:

No. Not

Adam Leventhal:

not yet.

Adam Leventhal:

I think early days are positive is all I'm saying. Okay. The other thing I noticed, and this is more of an apology. Bryan, I realized every time Rust Analyzer comes up, I always say that it is not an intervention, which Yeah. Does raise questions.

Bryan Cantrill:

So are you apologizing because it actually has been an intervention every time you bring it up?

Adam Leventhal:

I just feel like the more I claim it's not an intervention, the more it seems like an intervention, is what I realize.

Bryan Cantrill:

No. Don't worry. It's obviously an intervention. And it's an intervention that's merited, so don't worry. Oh.

Bryan Cantrill:

No. No.

Steve Klabnik:

And then

Adam Leventhal:

the other one, last year, you you, I guess, made a prediction in 2024 that AI doomerism falls out of the lexicon. Then last year, you claimed credit for that.

Bryan Cantrill:

I am taking credit for that. I would okay.

Adam Leventhal:

Yes. I just I mean, maybe I'm I mean, I poisoned my vacation reading a book all about AI doomerism.

Bryan Cantrill:

Okay. Did you read Eliezer Yudkowsky's book?

Adam Leventhal:

I did.

Simon Willison:

No. Was it?

Adam Leventhal:

I did. The whole thing.

Bryan Cantrill:

Wow. What okay. So this is the second time we're talking about a book you've been hate reading, and we've only been recording this for fifteen minutes. I mean, at some point I do have a problem. Yes.

Bryan Cantrill:

This is an intervention. This is now an intervention. Like, you need I mean, also, like, the title of the book: If Anyone Builds It, Everyone Dies. It's like Guess how theologically wrong.

Adam Leventhal:

You'll you'll never guess, but that phrase appears several times in the book.

Simon Willison:

Oh my god. This is the Harry Potter fan fiction author. Right?

Bryan Cantrill:

Pretty much. The so yeah. I'm no. I'm sorry. But there it's gonna take more than a Eliezer Yudkowsky book to to get me off of my x risk.

Bryan Cantrill:

I think that has actually been I think it has been replaced with the fear of economic doom. I don't think people are worried about losing their lives, because I think that's ridiculous. I think they're worried about losing their livelihoods, which feels like it's probably gonna be a theme this year. I think some people are gonna be, you know, this is where Simon last year had his six year dystopian prediction on the Butlerian Jihad. Which, you know, reminds me of the first time I heard of the Butlerian Jihad; I had to look up the Butlerian Jihad and yes, it's very troubling.

Bryan Cantrill:

Yes. Okay. So I think it's safe to say that we know what this year is. We had that Web3 theme in 2022. 2023 was a bit of a shoulder year. '24 and '25, absolutely AI themed.

Bryan Cantrill:

I just don't see how anyone could be predicting anything that's not AI related this year, because it just feels like it's still on the mind. Yeah. But that said, non-AI predictions definitely welcome. So should we start off with one years? And, as our guest of honor here, do you have some one year predictions for us?

Simon Willison:

I've got the easiest one ever.

Bryan Cantrill:

Okay.

Simon Willison:

I think that there are still people out there who are convinced that LLMs cannot write good code.

Bryan Cantrill:

Oh boy, yeah.

Simon Willison:

Those people are in for a very nasty shock in 2026. I do not think it will be possible to get to the end of even the next three months while still holding on to the idea that the code they write is all junk and that any decent human programmer will write better code than they will.

Bryan Cantrill:

Yeah. Not only will it be mainstream, the idea that LLMs can write effective code; it will effectively become a fringe belief that this can't happen.

Simon Willison:

That's exactly what I'm saying. And honestly, that's a gimme. I could say that one today. I think here's one that's AI adjacent. Okay.

Simon Willison:

I think this year is the year we're going to solve sandboxing. Right? The challenge is, like, I want to run code other people have written on my computing devices without it destroying my computing devices if it's malicious or has bugs. We have so many technologies for this right now that are almost, almost something you can use by default. Like WebAssembly solves this kind of thing.

Simon Willison:

There's containers and all of that sort of stuff as well. I think we have to solve it. Yeah. It's crazy that it's 2026 and I will pip install random code and then execute it in a way that it can steal all of my data and delete all my files.

Bryan Cantrill:

Yeah. Yeah. Yeah. Interesting. Interesting.

Bryan Cantrill:

So you think that is so we are gonna have to maybe this is not an AI related prediction, but we have to actually meaningfully solve the sandboxing problem.

Simon Willison:

I don't want to run a piece of code on any of my devices that somebody else wrote outside of sandbox ever again.

Bryan Cantrill:

Yeah. Interesting.

Simon Willison:

Why would I do that?

Bryan Cantrill:

Yeah. I mean, it's kind of interesting because, you know, people would talk about, like, oh, you know, I can't believe you're downloading this thing off the internet and piping it through, you know, sudo bash or what have you. And it always felt like, yeah, but I know that there's, like, a person that wrote that and I kinda trust this thing. Now you're like, no, no, no, you can't. You're in this era now where yeah, that's really interesting. Yeah.

Bryan Cantrill:

Good one year predictions, both. Do you have any other one years?

Simon Willison:

Oh, yeah. I've got one more.

Bryan Cantrill:

Oh, yeah. Go for it.

Simon Willison:

I think we're due a challenger disaster with respect to coding agent security.

Bryan Cantrill:

Okay.

Simon Willison:

This is based on this wonderful essay about the normalization of deviance. Have you heard this phrase before?

Bryan Cantrill:

Yes, yes.

Simon Willison:

This idea came out of the nineteen eighty six Challenger disaster reports, where if you have a culture, a corporate culture, whatever, that keeps on getting away with doing something that they shouldn't have been doing. Yeah. Keeps on getting away with those lapses. The space shuttle keeps on launching and it's fine. Yeah.

Simon Willison:

That leads you into a sort of corporate culture level false sense of security, and it's going to burn you. Because I think so many people, myself included, are running these coding agents practically as root, right? We're letting them do all of this stuff. And every time I do it, my computer doesn't get wiped. I'm like, oh, it's fine.

Simon Willison:

And I just keep on going like that. And I think it's going to add up. And I said this last year: I said there's going to be a headline grabbing prompt injection security hole. There was not. Yeah.

Simon Willison:

I've been predicting this every six months, the past two

Bryan Cantrill:

and a

Simon Willison:

half years. This is my version of that prediction this year. I think we are due a Challenger disaster scale thing, caused by the fact that we all got away with these bad practices for so long and we got lazy.

Bryan Cantrill:

Okay. And so when you say Challenger disaster, presumably not loss of life. Right? Like

Simon Willison:

loss of property and loss of lots of financial things, loss of data, all of that kind of stuff. Because the worst version of this is the worm, right? It's somebody coming up with a prompt injection worm: it infects people's computers, adds itself to the Python or NPM packages that that person has access to, publishes itself into the package registries, gets pulled down again, all of that sort of thing. I think it's feasible that Yeah.

Bryan Cantrill:

Then to happen. So then the normalization of deviance is you think that in the wake of this, it will be revealed that, oh, by the way, internally this was the case with the Challenger disaster: lots of people at the subcontractor that made the boosters were aware of the O-ring problem. Right. A lot of people knew about the temperature sensitivity of the O-rings. There were engineers that were deeply I mean, it's a real tragic story. There's nothing more tragic than an engineer who is vindicated in their concerns, who is overruled by executive management and then proven correct. That can leave people really broken in its way. Right.

Bryan Cantrill:

And it did in the Challenger disaster. So you wonder or believe, predict that in the wake of this thing, we will take this apart and realize, oh, the people at this frontier model company, wherever this disaster took place, they were aware of it,

Simon Willison:

they knew this Yeah. Yeah. You just you shouldn't be running codex with --yolo Yeah. But we all did. You know?

Simon Willison:

Yeah. That's

Steve Klabnik:

So my guilty.

Simon Willison:

This year's prompt injection prediction is that one.

Bryan Cantrill:

Okay. Well, I'm gonna dovetail into your prediction from last year, and I'm just gonna predict again that a Pulitzer Prize winning journalist uses an LLM to research this story and report it. But yeah, that's a dire prediction. But it does feel like you have these big accidents when we kinda collectively get over our skis: we know that's possible, we don't think it's possible, and then it happens.

Adam Leventhal:

Simon, I've got a book recommendation for you along those lines. It's called Drift into Failure. This is a book that Bryan hates. But on this topic, I think.

Bryan Cantrill:

Oh, I see what you're doing. I see what you're doing. It's like, I I I'm not the only person here, sir, who hate reads. Let me introduce it. Let us talk about it.

Bryan Cantrill:

Okay. Yeah. Simon. Yeah. Simon Decker.

Bryan Cantrill:

I think it's Simon Decker. Right? I think it's Sidney Dekker. Sidney Dekker. Excuse me.

Bryan Cantrill:

I don't want to disparage Simon's good name there. Sidney Dekker. I don't like that book. But go ahead. You know, take another one of Adam's recommendations.

Bryan Cantrill:

I mean, he's, you know, maybe he's he's three

Adam Leventhal:

You can see the trash he reads.

Bryan Cantrill:

Yeah. Exactly. The the the well, then that's a very, very interesting prediction. Adam, do you have one here?

Adam Leventhal:

I do. This one might feel like too much of a lock, but I think that the AI companies go on an absolute acquisition binge. And this

Steve Klabnik:

is

Adam Leventhal:

data infrastructure, ecommerce data, behavioral data, GPS data, anything that is data or data adjacent, anything that is infrastructure or infrastructure adjacent, and some shit that's just, like, hard to puzzle through. I remember when VMware bought Documentum, for example. It didn't make any sense. I think we're gonna see stuff like that. That is to say, they've got so much money.

Adam Leventhal:

There are not enough chips to buy, not enough CPU and GPU hours to buy, and the money's gonna go somewhere and it goes to the weird acquisitions.

Bryan Cantrill:

Okay. This I I I shouldn't, like, I shouldn't dovetail under this or this is like they buy Iron Mountain.

Adam Leventhal:

Yeah. It's like, have you seen supermarket sweep? It's like that.

Bryan Cantrill:

Okay. But like, I mean, if they bought Iron Mountain, that could be if like if OpenAI announced that they're buying Iron Mountain, that could be potentially and they're like, oh, we're buying Iron Mountain. We're also ripping up your privacy agreements. We're gonna train on all of these salt mines filled with old old enterprise data.

Adam Leventhal:

Like any of any of the shredding companies, they buy them.

Bryan Cantrill:

They're shredding companies, they buy them. Yeah. I know. Oh, they buy like garbage companies. Okay.

Bryan Cantrill:

They they buy okay. Yeah. They I I anything that is a plausible source of data, they buy. I like this.

Adam Leventhal:

They're they're looking for wastewater, DNA samples, whatever. Like, anything anything that is construable as data, they buy it.

Bryan Cantrill:

Do they buy an entire, like, town? We're gonna see which is more valuable, the wastewater treatment plant or the town library. We're gonna do that. We're buying city hall. That's got records. We wanna consume all that.

Bryan Cantrill:

They just buy all manner of data.

Adam Leventhal:

I I think that is not implausible that they're like, look. We know that these records going back to 1850 are all printed on paper. We can buy the town and just, like, read all the books and use that as a corpus. Yes.

Simon Willison:

You know, local newspapers are very cheap these days.

Bryan Cantrill:

Oh, that's a good one. Yeah.

Simon Willison:

They have a 150 years of archives.

Bryan Cantrill:

Yeah. Okay. So a big target painted on anything that has data. Of any kind. Yes.

Bryan Cantrill:

Alright. Well, I am gonna make and we can kinda ping pong back and forth, because I'm sure you've got a lot of one years; I got a lot of one years too, and Steve can hop in here too with any one years. I am going to Adam, in a classic heart v. head, a dramaturgical dyad as old as time: my heart is going to predict.

Bryan Cantrill:

And actually a little bit of my head. My head is not which is really bad. Somewhere my heart and head agree, and that's really, really bad news. I think that vibe coding, which entered the lexicon in February, is more or less out of the lexicon a year from now. And I think that it's used pejoratively.

Bryan Cantrill:

And I think that I mean, just as Simon mentioned, no doubt that LLM assisted and authored code is here to stay. But we are gonna enter a new age of rigor with respect to that. And it's gonna be viewed much more as a tool and much less of a just, like, hey, go build whatever you want. And Simon, you had a good piece about how the term vibe coding has kind of been misconstrued as it is. That it is not actually kind of inconsistent with Karpathy's original

Simon Willison:

The problem with Karpathy's original tweet is that it was a long tweet. It was a lot longer than 140 characters.

Bryan Cantrill:

Right. You could see more. Yeah.

Simon Willison:

Very few people made it to the end of the tweet and understood what he was trying to say. It was a little bit too vague. Like he was talking about, it's throwaway prototypes. You don't even look at the code. You just ride the vibes and see what happens.

Simon Willison:

Right. And a lot of people interpret that as, oh, it's using AI to write code for you, which I think is a bad definition because then it becomes useless. Like, in a couple of years all code will be written with some level of AI assistance. I think having a distinction where we say, no, vibe coded is: didn't review it, just sort of threw it in there and saw what happened. That's kind of useful now.

Simon Willison:

Is it still useful in a couple of years even then, right?

Bryan Cantrill:

Yeah. And I think that the term vibe coding will be sullied enough that you will use a different term to describe something like, oh, I used this to create a prototype. Whatever that kind of rapid prototyping is, it will have a different term. It's like, no. Of course I didn't vibe code it. No, please.

Bryan Cantrill:

That's so 2025. I would Bryan, I gotta I gotta put this on I gotta put this

Adam Leventhal:

on record just because when we listen back to this in a year and you're right, this is gonna feel juicier but I think you're out of your mind. Just wanna put that on the record. I think it is such yeah. You're welcome.

Bryan Cantrill:

I think it

Adam Leventhal:

is such a it's such a tantalizing, attractive term. And that's why as, you know, Simon, I was reading a book with the title Vibe Coding. I don't know what's wrong with me in terms of my book selections. Four for four, baby.

Simon Willison:

I read it.

Steve Klabnik:

I read

Adam Leventhal:

it, actually. And, yeah, and Gene Kim. And, Simon, I stumbled onto your blog post where you're like, look. There are two there are three authors and two publishers, all of whom apparently don't know what the term means. So I think it is such a juicy term that people wanna co-opt

Simon Willison:

My blog entry caused one of the books to rename itself. There were two vibe coding books. One of them renamed itself to Beyond Vibe Coding.

Bryan Cantrill:

Oh. Oh, did it, Simon? Oh, how interesting. Isn't that interesting, Adam, that they renamed it from Vibe Coding? Interesting.

Bryan Cantrill:

That's the now Beyond Vibe Coding. See, that's what I'm saying. It's gonna be Beyond Vibe Coding. It's gonna be something I think the term vibe coding is gonna be Adam, first of all, thank you for saying that I'm out of my mind. I definitely appreciate that.

Bryan Cantrill:

The because I may well be. But I mean, you're right in that it feels like, because it can be anything, it's just too tantalizing to not use. But I think it's gonna get a bad name for itself. So when you're right,

Adam Leventhal:

we'll know, like, how much I disagreed and how right you were.

Steve Klabnik:

I went back and forth on this because I sort of had the same thought at first when Bryan said this, but then I think I agreed with him more as he went along. The thing is that vibe coding is too good of a term for both the haters and the people who like it. Like, it's just too attractive, I think, just, like, as a concept. And so I feel like it's already sullied to many people, but people are still using it because it's also just such a good term. Even though it also sucks and the definition is bad and people can't even agree on what they use it to mean.

Steve Klabnik:

But, like

Simon Willison:

I did try I have been trying out the idea that I vibed this up. Like, didn't vibe code it. I vibed it.

Bryan Cantrill:

You vibed it.

Simon Willison:

I vibed it, and my wife is like, no.

Bryan Cantrill:

No. Okay. And you know what? And I would just let me just say on the record that if we refer to things as "I vibed it," I'm taking zero credit for that, Adam. So if vibe coding is out of the lexicon because we have replaced it with something that's even cringier, then yes.

Bryan Cantrill:

I I I'll take I'll take zero credit for that. But I I do think that it will be so time shall tell.

Steve Klabnik:

My one year here is, like, very similar in the sense of I think one of the things I like about doing this is that you go back and you see, especially in one year, what was I thinking about at the time? Right? Like, I haven't thought about congestion pricing in, like, six months, basically. And then now I'm like, oh, yeah. I was really interested in that a year ago.

Steve Klabnik:

And so I decided to pick the thing that I'm really kinda intrigued about right this second, and maybe I won't even care about two months from now, which is: agent orchestration will still be a hot topic in a year partially, but not entirely, solved.

Bryan Cantrill:

We're gonna need to peg you down to a more specific prediction. That one's a little too easy to claim credit on. So you're gonna have to, like, give us something concrete.

Steve Klabnik:

Alright.

Steve Klabnik:

Some well, see, the problem is in the quantity. Like, I think that some people will have success with this technique, but not enough people. Like, it's kind of like a it's still a thing that people are gonna be pursuing, but it's not going to be a thing that, like, is as normal as agents have gotten in the past year. I don't think figuring out how to make them work together is going to be a thing that is going to be as clearly a win.

Bryan Cantrill:

Okay. So how are you gonna know if this prediction's right?

Steve Klabnik:

That's the problem with is the quantification of what that means specifically. Yeah. Yeah. I'll think about it. But that's, like, kinda where I think this is an interesting topic.

Steve Klabnik:

Going.

Bryan Cantrill:

As Yeah. Okay.

Steve Klabnik:

Yeah. Like, I personally top out at three to four Claude sessions, and that's, like, it. And that's, like, an upper level on my velocity doing development. And that's why I think people are trying to, like, solve this problem, because if you can scale up past that, then one person can have, like, much bigger impact. But it's also, like, a really hard thing, and people are doing totally insane things.

Steve Klabnik:

Like, Gastown from Yegge is, like, a fever dream of a thing that's, like, ridiculous. But I think people will still be interested in this topic and are working on it as a thing because it's how you scale up.

Bryan Cantrill:

Okay. But this will not be mainstream to have more more agents than siblings with or it's like Maybe is the

Steve Klabnik:

way to put it.

Bryan Cantrill:

We're not gonna have reports. Yeah.

Steve Klabnik:

We're not gonna have a Kubernetes for agents that's, like, as solidified as that. Right? Where, like, people are just like, okay. Kubernetes is just like the default, like, Kleenex. You know?

Steve Klabnik:

Like, I don't think we're gonna have a a framework or a tool that is ubiquitously the way that everybody organizes their agents.

Bryan Cantrill:

Okay. Alright. That feels yeah. Sorry to sorry to get you No. It's good.

Bryan Cantrill:

Yeah. Listen. If we're not to the point where Adam is saying that you're out of your mind, we're just not at a good prediction. I mean, that's really what we're trying to do here. That oh, that it's good.

Bryan Cantrill:

Adam, do you have another? I've got a couple more here.

Adam Leventhal:

I have one more one year, but I feel like it might be a bit too ambitious. I think this is the year we see LLMs have a programming language which is not human intelligible. That there is a programming language by and for LLMs.

Bryan Cantrill:

Okay. So this is, like, in runes. This is indecipherable.

Adam Leventhal:

Yeah. This is, like, not really intended for humans to understand, but it is more efficient for the LLMs to program in. Like, there are already some papers, and maybe, Simon, you can fill in the details here, where LLMs reason not in human languages like English or, in DeepSeek's case, in Chinese, but in, sort of, like, their own tokenized languages that are more efficient. So something like that.

Bryan Cantrill:

Yeah. That that would be you know, I I already find it to be slightly off putting and also, like, delightfully off putting, you know, when when these things show their work, especially because and Adam, we talked

Simon Willison:

about this

Bryan Cantrill:

in our DeepSeek episode with with the Cerebras folks, we're watching DeepSeek, like, kinda have, like, a nervous breakdown as it's trying to answer your question. And then, like, occasionally, like, lapse into Chinese, come back.

Simon Willison:

But the Chinese thing, like, have you had your own laptop run a model that thinks in Chinese yet? Because that's beautiful. Yeah. It's so cool when that happens.

Bryan Cantrill:

And is and so but, Adam, you think this is gonna happen for a non natural language? It'll be a synthetic language that they will they

Adam Leventhal:

want That's right. A synthetic programming language that is easier for them to work in.

Simon Willison:

Okay. The interesting thing about that one is that the labs are trying to stop that from happening just from the interpretability point of view.

Simon Willison:

Like if you look at all

Simon Willison:

of the interpretability research, the whole point of that is we really want to know what they're thinking because we don't want them going dark on us.

Bryan Cantrill:

Interpretability, safety, and so on. Yeah, yeah, yeah, explainability. So maybe there will be a tension where this thing is trying to invent the synthetic language and it's constantly being reprimanded by its frontier model overlords.

Adam Leventhal:

Yeah. Maybe I'm overly influenced by my reading list.

Bryan Cantrill:

Okay. So one of my several predictions, and it dovetails with one of your predictions: I think that AI has created some real public perception problems for itself. And I think that you are gonna have one of the frontier model companies this year have a white paper explaining how the proliferation of AI will mean prosperity for everybody. So they will be trying to make some economic model, some economic argument. Because I think maybe this kind of dovetails with my other prediction that this is gonna be a twenty twenty six election issue: how we think of these things and how they are regulated, and it's a big mess. There's more heat than light on this debate, I would say.

Simon Willison:

I'd like to tag something on. I think that only works if they can sort of wash that through existing trusted experts like Exactly. Sam and Dario, they're constantly publishing essays that try and make this case. Nobody believes a word of it. That's right. Yeah.

Simon Willison:

Barack Obama's signature on one of these position papers Yes. And maybe you've got something people might start to trust a little bit.

Adam Leventhal:

Otherwise, it's just like leaded gas is good for you says Exxon.

Bryan Cantrill:

That that that's right. No. Right. So that yeah. They get someone who and and whether that's that that person is kind of I hope it's not.

Bryan Cantrill:

I mean, yeah. God. Obama. It would be so wait, wait, yeah, okay, let's go with that. That's a great one, because, look, if it's Bill Clinton, everyone's gonna kind of roll their eyes. So it's gotta be someone who's got real credibility saying that this is gonna be broad based. I will say also, if they get that person to do it, it's gonna be revealed that that's also a bit crooked.

Simon Willison:

How about the pope? The pope? You oh. I Oh. That note

Bryan Cantrill:

is very into The pope is very into this stuff. I God this is okay. That's a great prediction. We've hit pay dirt. The the Pope weighing in on LLMs and And their economic impact.

Bryan Cantrill:

And their economic impact in the world. I Simon, I'm giving you full credit if the pope weighs in believing that this is gonna be economic devastation. I just think if the pope weighs in on LLMs in a public way, Simon, you are a prophet. I mean, you're already a prophet in our eyes anyway, but that that that's He's

Steve Klabnik:

already he's already talked about LLMs.

Bryan Cantrill:

Wait. What does he say about LLMs?

Simon Willison:

Think he has. Yeah.

Steve Klabnik:

He said, like, you need to make sure that when you're using tools that you, like, use them in a way that's, like, good for humanity and not bad, or something like that. It was, like, very, like, not pro, but not, like, super anti, but it was, like, a little anti, if I remember correctly.

Simon Willison:

I think even the previous pope, there was something relating to AI. There was one of those Catholic proclamations with a bunch of, like, sub-footnotes and things

Bryan Cantrill:

Well, but that was years ago. We're talking about the pope going big on LLMs one way or the other. This is more than just, like, hey, this is a think

Simon Willison:

it's a bit of a safe bet, actually.

Bryan Cantrill:

Yeah. I think it's good. I like it. I think it's definitely interesting. I also do think and I have been debating whether to make this a one year or a three year, but I'm gonna go ahead. And, Adam, if you thought I was out of my mind on my vibe coding prediction, maybe you're really gonna say I'm out of my mind on this.

Bryan Cantrill:

So, like a lot of people, I've been having increasingly intense .com boom flashbacks, and in particular the thing that is killing me is the kind of capitulation to the never ending boom. That was the last stage of the .com boom: the capitulation, which happened, I would say, in late ninety nine, early two thousand, where everyone's like, you know what, I'm just gonna join the madness and yes, I know it's madness. But because everyone did know it was madness, when it corrected, it corrected really quickly. I think that we are gonna get the first stage of that, and I think that the first stage of that this coming year is going to be some of these companies that I think are ultimately gonna be a feature of the frontier models that are independent companies. And so I hate to pick on them because I don't wanna well, it is what it is. I guess actually you've already, like, thrown three different authors.

Bryan Cantrill:

You've thrown three different authors under the bus and I threw a fourth under the bus. So why do I care?

Adam Leventhal:

Do you forget our goal for this year of getting a C&D? Like, why are you not doing

Bryan Cantrill:

your part? It's never too early to get working on our one year OKRs of getting a C&D. Yes. Okay. Fine.

Bryan Cantrill:

Harvey. I'm gonna call him out. So Harvey is the this, have you heard of Harvey, Adam? No. Oh my god.

Bryan Cantrill:

Okay. So Harvey is a a variant of LLMs that is aimed at the legal profession. Right? It's aimed at like to to assist lawyers. Maybe to be an automatic lawyer unclear, but it is designed to be LLMs for lawyers.

Bryan Cantrill:

It has an $8,000,000,000 valuation right now. They have raised an absolute mountain of capital, unlike in the .com boom. In the .com boom, these companies were all public. So when they kind of fell apart, everyone knew they fell apart because they were public. I think that you're gonna have some of these companies that are private, who've raised a ton of money, and they're gonna kind of do a Clubhouse. Where Clubhouse raised a ton of money and then just kind of, like, quietly you know, I mean, I don't know, they trickled it down.

Bryan Cantrill:

I mean, you recall that Clubhouse raised a huge amount of capital, and I don't think we really talk about Clubhouse very much anymore. I think that we're gonna have this same effect on some of these companies. OpenEvidence, I'm less convinced about OpenEvidence is aimed at doctors but Harvey I think is just gonna be emblematic. I think Harvey is the pets.com of a coming AI correction, where Harvey's gonna bust out and everyone's gonna be like, no, no, we knew that one was crazy. And now this is not gonna be a full on AI bust, I don't think. But I think in a year we will have some and there'll be a different nomenclature. And Adam, this is one of those things, and I know you remember this.

Bryan Cantrill:

Remember when we called it the correction and not the bust? Mhmm. There was this very brief period from April 2000 to November 2000 where we called it the correction, where pets.com had blown up and a bunch of these others had blown up. But not Sun, not Cisco because, you know, we're the picks and shovels and all this other kind of, like, nonsense that we told one another. And then you realize, like, no, no, it's not a correction, it's a bust.

Simon Willison:

I think

Bryan Cantrill:

that we will have a different kind of name. You know, this will be the rationalization, the focusing, the sharpening, who knows what it'll be but it'll be called something that says that it was like, no, Harvey was clearly insane, but these other companies are not insane. That's not

Adam Leventhal:

When Harvey AI acquires MoFo, who wins? Like an AOL

Bryan Cantrill:

Oh, totally. Oh my god. What a great parlay that Harvey just starts flat out acquiring law firms, which is totally plausible, by the way. That is your AOL Time Warner: the Harvey Morrison Foerster, or the Harvey Wilson Sonsini. I mean, why pick, at that kind of valuation?

Bryan Cantrill:

They could just buy them all. They just buy all law firms. You know, maybe that's what they

Adam Leventhal:

Yeah. They are the law.

Bryan Cantrill:

Yes. They are the law. So yeah, that is my one year picture. I do think that we're gonna begin to get that; things have just gotten too... because the fear of any kind of bust seems to be gone, and that's the moment to really dance close to the door, as they say.

Bryan Cantrill:

Love it. So we've got some big, big IPOs happening potentially this year. And I don't know, Adam, if you've got any thoughts. You've got SpaceX, OpenAI, Anthropic all potentially trying to get out, trying to IPO. I think one of those S-1s is going to be disconcerting.

Bryan Cantrill:

And it's gonna show that the economic model of one of these companies is much more strained than people realized.

Simon Willison:

See.

Bryan Cantrill:

So we get one S-1.

Adam Leventhal:

Vomits on it, and we don't see any more S-1s.

Bryan Cantrill:

I don't know if we do or don't see any more. I don't know. But I think you're gonna have an S-1 that is extremely... I'm thinking of the WeWork S-1 in particular. The WeWork S-1 ended up having a real blast radius, if you remember that, where it was really revealed that, oh, this is not a good business that WeWork is in.

Bryan Cantrill:

And WeWork was all sorts of shenanigans, and I think that we will see some kind of shenanigans in one of these big S-1s. That's my...

Adam Leventhal:

I love it.

Bryan Cantrill:

That is my prediction. But also, you know what, I'm just gonna say it even though this is a dumb prediction: I think the SpaceX S-1 damages either Tesla or xAI. I think the SpaceX S-1 reveals something, in particular relative to my three year prediction of last year that the Cybertruck is no longer made.

Bryan Cantrill:

SpaceX is infamously buying lots and lots and lots of Cybertrucks, and I hope to hell that this is somehow above the bar required to be in the S-1, to reveal how many Cybertrucks they've actually bought. But that's the kind of thing I'm talking about.

Adam Leventhal:

Just, like, one hand washing the other of the Elon, you know, enterprises.

Bryan Cantrill:

That's right. That's right. So that is my other one year prediction.

Adam Leventhal:

Good.

Bryan Cantrill:

We'll see. So, oh, and then I've got one other. I'm sorry, I've really just dropped a lot of predictions on you. I think that we're gonna see a real problem with AI-induced ennui, where software engineers in particular get listless because the AI can do anything.

Bryan Cantrill:

Simon, yeah, what do you think about that?

Simon Willison:

Definitely. I mean, yeah, anyone who's paying close attention to coding agents is feeling some of that already. There's an extent to which you sort of get over it when you realize that you're still useful, even though your ability to memorize the syntax of programming languages is completely irrelevant now. But I don't know, something I see a lot of is there are people out there who are having existential crises, and are very, very unhappy, because they're like, I dedicated my career to learning this thing, and now it just does it. What am I even for?

Simon Willison:

And I will very happily try and convince those people that they are for a whole bunch of things, and that none of the experience they've accumulated has gone to waste. But yeah, psychologically it's a difficult time for software engineers.

Bryan Cantrill:

And do you think that we'll have a name... sorry, Steve, go ahead.

Steve Klabnik:

We had a Lobsters situation where somebody was, like, borderline suicidal because of being upset about the fact that their, like, life skills were no longer going to matter anymore. And it became, like, a community problem. So, like, it's definitely happening

Adam Leventhal:

for sure.

Bryan Cantrill:

Okay. So I'm gonna predict that we name that. Whatever that is, we will have a name for that kind of feeling, whether you wanna call it a blueness or a loss of purpose, and we're kind of trying to address it collectively in a directed way. K.

Adam Leventhal:

This is your big moment. This is your big moment. Pick the name. Call your shot from here; this is you pointing to the stands.

Bryan Cantrill:

You know, I

Adam Leventhal:

Like, Deep Blue. You know?

Bryan Cantrill:

Yeah. Deep Blue. Deep Blue. I like that. I like Deep Blue.

Bryan Cantrill:

Deep Blue. Good. Oh, did you walk me into that, you bastard? You just blew out the candles of my birthday cake. Was it my big moment at all?

Bryan Cantrill:

That was your big moment. No, Adam, that is very good. Deep Blue is very good. God.

Simon Willison:

All of the chess players and the Go players I know went through this a decade ago, and they have come out stronger.

Bryan Cantrill:

Yeah. It is Deep Blue. Jesus Christ, Adam. You scare me sometimes, man.

Adam Leventhal:

There's a reason that you bring me to this thing.

Bryan Cantrill:

There's a reason. Wait, wait, wait, I'll tell you, there's a reason. Sometimes it's, you know, "web3 is coming back," and "by the way, did I tell you about this other book that I hate, that I'm reading for the third time?" But man, every once in a while you really knock it out of the park. Okay.

Simon Willison:

I need to throw in a positive prediction. Yeah. But it's not an AI prediction. This is a one year. I think that kākāpō parrots in New Zealand are going to have an outstanding breeding season.

Simon Willison:

The reason I think this is that the rimu trees are in fruit right now. Okay. The kākāpō parrot, there are only about 260 of them.

Adam Leventhal:

Okay.

Simon Willison:

They only breed if the rimu trees have a good fruiting. The rimu trees have been terrible since 2019, but this year the rimu trees were all blooming. There are researchers saying that all 87 females of breeding age might lay an egg. And for a species with only 250-odd remaining parrots, those are great numbers.

Bryan Cantrill:

Okay, so you know, I love this, because I think, and I'm gonna elaborate on this: this is something humanity wants. This becomes something that people love; it's like the condors in Silicon Valley. Everyone wants a feel-good story during a difficult age.

Simon Willison:

It's the perfect... it's the only positive news I've heard in the past three months. It's so good. If you've never heard of a kākāpō, go and look them up. Yeah. Big dumpy green flightless parrots.

Simon Willison:

They're super charismatic. We need more kākāpō.

Bryan Cantrill:

This is like the Miracle on Ice in 1980. This is the thing that, in a difficult time, gives people hope that positive things can happen. Yep. And I love it. That's great.

Bryan Cantrill:

That is a very positive prediction. And I wanna... yeah, I need, like, some webcam set up so we can watch the eggs hatch and everything.

Simon Willison:

It does exist. The kākāpō teams have a very good online presence. That is awesome.

Bryan Cantrill:

And you should know that there's someone in the chat saying, hey, I'm in New Zealand, this guy's right. So the Kiwis know. It's like, finally, they finally have a guest on this podcast that really gets it. So that's a good one.

Adam Leventhal:

I hope someone just got bingo.

Bryan Cantrill:

That's right. Alright. Are we on to three years? I've exhausted mine. Yeah.

Bryan Cantrill:

Alright. Three years. Let's do some three years.

Adam Leventhal:

Why don't you start, Bryan? You bring a big bag of predictions.

Bryan Cantrill:

Yeah. I've got... okay. So I think that in three years... I don't think it's gonna happen in the next year, but I think it is gonna happen.

Bryan Cantrill:

A massive pivot away from, a delineation between, AGI and ASI, and realizing that, look, the whole idea of AGI is politically a dead letter. It is not something that is for a democracy, and Simon, you said this last year about not wanting to live in a world where people didn't have work. Right? People don't want to live in a world where there's not work.

Bryan Cantrill:

They really don't. Work is very important to people's sense of meaning, and any kind of claim that we've built this kind of superintelligence and nobody needs to work again, I think, is gonna be really resisted. And I think it's also helpful that it's, in my personal opinion, not true. So I think you're gonna get a lot of: AGI is gonna be the thing that we already have, and, oh no, ASI is the thing you're worried about? Well no, no, we're not doing ASI, who told you that? No, no, no.

Bryan Cantrill:

We are... our mission is to build the AGI. Good news, we already did that: ChatGPT 5.2 already was AGI. So I think that in the next three years they're gonna stop talking about AGI as this kind of thing in the future, and start talking about it as something that's already done, and superintelligence is going to go away as an aspiration. Simon, what do you think?

Simon Willison:

I love this prediction. The one thing that worries me is the valuations, right? The AI companies with the giant valuations: the only way you justify those valuations is if the total addressable market is all human labor. And what are they going to do? How do they dial their expectations back and not sort of invert the reason for their company existing?

Bryan Cantrill:

Well, I think that this is gonna be part of the AI bust. So in three years we will see. And again, I mean, there's no doubt that the frontier models have tremendous value, there's no doubt about that. But I think a lot will have boiled off, and I think that we will be really looking at these things as tools in three years.

Simon Willison:

That would be wonderful, wouldn't it? It would be wonderful. This is your utopian prediction.

Bryan Cantrill:

This is my utopian prediction. It is. Like, look, the kākāpō parrots have their extraordinary breeding season, and, like, humans have jobs. Those are the two feel-good stories.

Adam Leventhal:

In fact, there's so many parrots that people have to just, like, domesticate them suddenly. That's right. New jobs.

Bryan Cantrill:

So that is among my three year predictions. Simon, what are your three years?

Simon Willison:

I've got one that's semi-related. We will find out if the Jevons paradox saves our careers or not.

Bryan Cantrill:

Oh, there you go. Yeah. This is

Simon Willison:

a big question that anyone who's a software engineer has right now, which is: we are driving the cost of actually producing working code down to a fraction of what it used to cost. Does that mean that our careers are completely devalued and we all have to learn to live on a tenth of our incomes, or does it mean that the demand for software, for custom software, goes up by a factor of 10, and now our skills are even more valuable, because you can hire me and I can build you 10 times the software I used to be able to, so I'm more valuable to you? I think by three years we will know for sure which way that one went.

Bryan Cantrill:

Yeah, and to give people context on the Jevons paradox: the Jevons paradox is a nineteenth-century observation due to the English economist William Stanley Jevons, who observed that as coal was becoming cheaper, more of it was being used. And that was a paradox: why are we using so much more of it? The reason was that we were finding new uses for it. And the question is, the Jevons paradox for software engineering would be: as this becomes much cheaper, do we do much more of it? Right.

Bryan Cantrill:

So we're not putting people out of work, because there's actually much more of it to do. And the thing that is interesting about Jevons is that the paper is called The Coal Question, because Jevons was, not incorrectly, very worried about running out of coal. What he did not foresee at all was, of course, the discovery of petroleum, and solving the coal problem in a completely different way. So it'd be interesting to know how we end up. But yeah, so you think in three years we're gonna know that.

Simon Willison:

I think we will know for certain. We'll be like, okay. This is how it played out.

Bryan Cantrill:

Yes. Yeah. Yeah.

Steve Klabnik:

One thing I love about the Jevons paradox is that, Bryan, you're the first person I ever heard cite it. And in the years since I heard you cite it, it's been cited increasingly often. Like, I feel like I see people reference the Jevons paradox, like, once every three months now, when I'd, like, never heard of it five years ago.

Bryan Cantrill:

Yeah. So, you know, Steve, bless you for saying that, whether Adam is putting you up to it or not. It's like, watch him chomp down on this; he won't question it at all. He's like, you know, this guy loves this stuff.

Bryan Cantrill:

I feel like I referred to the Jevons paradox in a keynote, like, nine years ago. But I mean, obviously, it's from the nineteenth century; I clearly can't claim much credit for it. So anyway. But thank you for

Steve Klabnik:

saying that. Simon's three year was also like what I was trying to get at, but I couldn't figure out how to say it, and I said something that was much worse. So mine ended up being, like, using AI tools when writing software professionally is gonna be considered something closer to autocomplete or syntax highlighting than something controversial or exceptional. And I originally had something in there about, yeah, the industry is gonna figure out our existential crisis around these tools, and it's just gonna be, like, one way or the other.

Steve Klabnik:

But I couldn't figure out how to put it. So, like, I'll second it. I think it was very well said, Simon. Yeah.

Bryan Cantrill:

Well, and so, Simon, I think it's a very good observation. I do think it dovetails into another three year prediction that I've got, which is that we see much more custom built software and much less SaaS. So you get a lot of LLM-generated or -assisted software running as effectively custom software. You're developing software to put in production for yourself, and you kind of care less about the stuff that, yes, maybe you would care about if you made this available as a service to the internet. Right.

Bryan Cantrill:

Which I actually don't care about, because one of the things is, when people consume software as a service, especially the more niche it gets, the more important it becomes to your business, and then the easier it is to have a real disconnect with your software provider. And Steve, at Oxide you were very much on the front lines of us replacing SaaS software with software that you wrote, and Steve, you were LLM assisted. Right? You were making heavy

Steve Klabnik:

use of it to write it, and then eventually Claude wrote all of it. It was very much like, I started this before I even thought AI tools were good, and then by the end, Claude was doing a lot of the work. I think right before I left, we looked at it, and my personal AI usage was, like, the same as the rest of the company at the time, or something like that, going by the bill or whatever.

Steve Klabnik:

And it seems like y'all have used it even more since I've left. But yeah. Absolutely. I think this is definitely a huge thing. I have several personal projects that are effectively just replacing, you know, SaaS tools with things that are bespoke for people, and it's great, honestly.

Bryan Cantrill:

Because we're gonna get, like, hey, my SaaS vendor, they're charging me too much money; or you get the case we had, where it was like, actually, I would gladly pay more money if you delivered us the software that we actually need, and in this case it was for PLM, product lifecycle management. You get these kind of esoteric, I mean, esoteric is too strong, but these things that are very important to the way an organization operates that your software vendor just doesn't care about as much as you do. You know that old adage that no one cares about your money like you do?

Bryan Cantrill:

Nobody cares about your software like you do, and I think the ability to build custom software... and I think, by the way, this is gonna be a real source of opportunity, because we're gonna have a lot of young people that thought they were gonna be working for Google and Meta and so on that are maybe not going to be. And they may instead be working in the kind of more mainstream economy, using LLMs to write software that's very relevant to those organizations. Yes. You know.

Simon Willison:

Yeah. This ties back to something I talked about earlier, the sandboxing thing.

Bryan Cantrill:

Yes.

Simon Willison:

Basically, if you want your SaaS to stay relevant, you need to embrace plug-ins and extensions, where your customers can customize it in all sorts of interesting new ways. The way to do that is with a sandbox, where they can write code that can safely interoperate within your platform and not delete everything, all of that kind of stuff. This is the kind of thing which used to be really difficult to build. Like, Shopify built this a few years ago.

Simon Willison:

Right, Shopify Functions. But very few other companies have done it. I think a lot of companies are going to start doing exactly that.

Bryan Cantrill:

Yeah. Interesting.

Steve Klabnik:

There are a ton of industries, like, normie industries, where there are, like, 10 consultancies that make shitty software that professionals use, because those are the only 10 companies that know their industry. Like, my girlfriend's a real estate agent, and when I look at the SaaS tools that are useful for her, they're all garbage. And I've been using Claude to build her website instead. It's way cheaper to just pay the upstream, you know, the MLS, for the data feed and then just have your own thing done. And it's way nicer and way cheaper.

Steve Klabnik:

Because I think there are just so many industries that have very similar kinds of things, where the software that's made for professionals is just bad, actually.

Simon Willison:

The most successful implementation of this pattern of all time is Salesforce. Salesforce is incredibly customizable. Dreamforce in San Francisco, 50,000 people attending it, and they're all professional Salesforce customizers. So that pattern absolutely works. It's just really hard to build, which is why few companies other than Salesforce have built something with that pattern that's as successful.

Bryan Cantrill:

Yeah, interesting. And maybe Salesforce ends up being the victim of that, of people being able to build this stuff easily on their own. Adam, do you have a three year?

Adam Leventhal:

Mine actually tacks into a similar theme. I think we're all thinking along the same lines in this three year horizon. And I've been thinking about some of the observations we've made in the past about standing on the shoulders of giants, about how all of this software is enabled by all the software that came before it. And, you know, I remember when we looked back at, what's that, the Showstopper book about the development of NT,

Simon Willison:

Yeah. Yeah.

Adam Leventhal:

Of us seeing that as really maybe one of the last isolated systems, systems that are not participating in this larger open source network effect. But I realized that LLMs benefit from open source without necessarily needing to use it directly: they benefit from all of it being out there. So I struggled to figure out how to phrase that in terms of this concept of, like, everyone's gonna build their own software; you don't need to use open source software.

Adam Leventhal:

You can just build your own. So I kinda set that aside, and instead my prediction is that we get a crisis of AI slop open source. So contributions, projects; like, crates.io is just inundated with AI slop open source libraries, and it becomes indecipherable.

Bryan Cantrill:

And so how does this affect open source in the large? Does this make open source less tenable? I mean, do these two trends combine to make people want... like, is this a...

Adam Leventhal:

Yeah. The leap I had there that I hesitated to make is that it makes proprietary software more attractive, because you have a brand behind it, a person behind it, a throat to choke, as it were, behind it. You have some provenance associated with it, you have some quality associated with it, you know it's not malware, and it helps sift through this AI slop onslaught.

Steve Klabnik:

It's an organics movement, but for software. It's like, this is certified human-written code, because, you know...

Bryan Cantrill:

Oh, yeah. Absolutely. Like the non-GMO repo. Absolutely. Yeah.

Bryan Cantrill:

Definitely. No, and so I think... you wonder, because clearly you need these foundational things to be open source in order for this whole thing to work. Python has to be open source for this whole thing to work. Right?

Bryan Cantrill:

You need to have these kind of foundational things that are open source. But do you think that even for those things, we see a return to proprietary programming languages? Although I guess actually we'd be using the languages that the LLMs have invented for themselves. So

Adam Leventhal:

That's right. That's right. It's a good question about programming languages, but I do think you see the value of proprietary software, or perhaps just paid software, maybe still open but licensed, being provenance and the sort of ancillary benefits that often come with paying for something.

Bryan Cantrill:

Interesting.

Simon Willison:

I've got a new three year one.

Bryan Cantrill:

Yeah. Yeah.

Simon Willison:

I think somebody will have built a full web browser, mostly using AI assistance, and it won't even be surprising.

Bryan Cantrill:

Oh, interesting. Is that a big, complicated system? Yes. We will have...

Simon Willison:

Notoriously complicated; rolling a new web browser is one of the most complicated software projects I can imagine. And specifically, the reason I think that's going to work is that it turns out one of the most effective ways of using a coding agent is to give it an existing test suite and tell it to write code that passes those tests. In the past three weeks, I've done that for an HTML5 parser library: I spun up a brand new HTML5 parser implementation that passed the 9,200 HTML5 conformance tests.

Simon Willison:

And I did it for a JavaScript interpreter. I've written a little Python JavaScript interpreter that passes the micro QuickJS test suite, and it wasn't very hard, because once it's got a test suite, it just keeps on plugging away until all the tests pass. I think the browser specs are nearly at a point where, for a lot of these things, there are conformance suites, right? There are the CSS conformance suites, there's all of this stuff. Honestly, today, you could start one of these coding agents working on this problem and it would make a surprisingly decent amount of progress.

Simon Willison:

In three years' time, I think it's gonna be easy. I think they'll be able to do it.
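
A minimal sketch of the loop Simon is describing here: point an agent at an existing conformance or test suite and let it iterate until everything passes. The `pytest` command and the `ask_agent` helper below are illustrative assumptions standing in for whatever suite and coding agent you actually use; none of it is from the episode.

```python
#!/usr/bin/env python3
"""Sketch: drive a coding agent with an existing conformance/test suite.

Assumptions: the suite runs under pytest, and ask_agent() is a placeholder
for whatever agent (CLI or API) edits the working tree.
"""
import subprocess

MAX_ROUNDS = 25


def run_suite() -> subprocess.CompletedProcess:
    # Run the existing conformance suite and capture its output so the
    # failures can be handed to the agent.
    return subprocess.run(
        ["pytest", "tests/", "-q", "--maxfail=20"],
        capture_output=True,
        text=True,
    )


def ask_agent(prompt: str) -> None:
    # Placeholder: send the failing-test output to your coding agent and let
    # it modify the implementation. Printing keeps the sketch runnable.
    print("--- would send to agent ---")
    print(prompt[:500])


def main() -> None:
    for round_num in range(1, MAX_ROUNDS + 1):
        result = run_suite()
        if result.returncode == 0:
            print(f"suite green after {round_num - 1} agent round(s)")
            return
        ask_agent(
            "Make these tests pass without editing the tests themselves:\n"
            + result.stdout[-8000:]  # keep the prompt within a sane context budget
        )
    print("gave up: suite still failing")


if __name__ == "__main__":
    main()
```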

Bryan Cantrill:

Yeah. I mean, that would be interesting. Right? If you can build a system that is that sophisticated.

Simon Willison:

But the cheat code is the conformance suites. If there are existing tests

Bryan Cantrill:

Yeah.

Simon Willison:

That you can point it to, it'll get so much easier.

Bryan Cantrill:

Yeah. And that gets you out from underneath some of the homogeneity that we've got at levels of the system. Right? I mean, one of the questions we definitely have, Simon, you know, we're going back and forth on this, is whether we're going to have Claude Code writing kernel drivers, right?

Bryan Cantrill:

Where the loop is more complicated there. You don't have some of those things that you're talking about in the browser; you don't necessarily have them for something like a device driver.

Simon Willison:

Well, the good news is, with the device driver, it either works or it doesn't. Right? Like you can... oh.

Bryan Cantrill:

Useful. There you go.

Simon Willison:

This is my naivety with hardware.

Bryan Cantrill:

Oh. Sharing up right now. I know.

Simon Willison:

If you can reduce the problem to a thing where the coding agent itself can tell if it got it right, it's easy.

Bryan Cantrill:

It's easy. If you can't, it's not easy. Yeah. And with the device driver you can't. Okay.

Bryan Cantrill:

Unfortunately. It is really, really hard, because then you have all sorts of... I mean, it's not just the edge conditions, you've got performance, you've got... it's complicated, I think. But for those things where you can get that kind of reliability... and I think I said this as much in my one year, but just to be clear, when Adam said I was out of my mind about vibe coding going out of the lexicon: I think that certainly in my three year, we are going to be using LLMs to be more rigorous about the way we do software engineering.

Simon Willison:

Oh yeah. That might even be a one year.

Bryan Cantrill:

Yeah, absolutely. And I think that's gonna be a big blip in general, where it's like, no, no, no, this is not coming to replace your job, this is coming to help you do your job better.

Simon Willison:

Right. The thing is, today with LLMs: automated tests, no longer optional. Continuous integration, no longer optional. Good documentation that's actually up to date with the code, no longer optional. And in the past, we've been able to excuse those things: oh, we don't have a good test suite yet because we didn't have time.

Simon Willison:

That doesn't work anymore. You've got time now: run Claude Code overnight and you'll wake up to a test suite, and it'll be a bit shit, but it's better than zero. Yeah.

Bryan Cantrill:

Right. Yeah. It kinda is. It is just amazing, this new world we live in.

Steve Klabnik:

I've been wondering lately: one thing that has a really good test suite is the Rust compiler. I've been working on a little programming language for the last two weeks, and I've gotten way farther than I ever expected to, partially because I went spec first, and that's how this sort of dovetails into that. But I've been thinking about, like, should it have just been a Rust compiler instead of my own little language? Because there are so many tests for the Rust compiler; they've done a really great job with that. And I'm really curious if, similar to, I'm going to build this HTML5 thing,

Steve Klabnik:

I'm gonna build a JavaScript implementation... like, is someone going to make a rustc this way?

Simon Willison:

So here's a fun one. I think it's now easier than ever to introduce a new protocol into the world if you ship a conformance suite. Like, release a conformance suite and boom, overnight you'll have libraries in half a dozen languages, because the conformance suite is the majority of the work.
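
To make the "ship a conformance suite" idea concrete: the suite is just language-agnostic test vectors that any implementation, human- or agent-written, can be checked against. The tiny "greet" protocol below is entirely made up for illustration and is not anything discussed on the show.

```python
#!/usr/bin/env python3
"""Toy illustration of shipping conformance vectors alongside a protocol.

In a real project, the vectors would live in a vectors.json file in the
spec repo; every implementation, in any language, runs against the same data.
"""
import json

# Hypothetical vectors for a made-up "greet" protocol.
VECTORS = json.loads("""
[
  {"input": "world", "expected": "hello, world"},
  {"input": "oxide", "expected": "hello, oxide"},
  {"input": "",      "expected": "hello, stranger"}
]
""")


def greet(name: str) -> str:
    # Candidate implementation under test; an agent would iterate on this
    # until every vector passes.
    return f"hello, {name or 'stranger'}"


def run_vectors() -> int:
    failures = 0
    for case in VECTORS:
        got = greet(case["input"])
        if got != case["expected"]:
            failures += 1
            print(f"FAIL: greet({case['input']!r}) = {got!r}, want {case['expected']!r}")
    print(f"{len(VECTORS) - failures}/{len(VECTORS)} vectors passed")
    return failures


if __name__ == "__main__":
    raise SystemExit(run_vectors())
```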

Bryan Cantrill:

Yeah, that's interesting. And when you do that, you also make it much more readily adoptable by LLMs. It's like...

Simon Willison:

It overcomes the problem that it's not in the training data. People are sort of nervous that you could never launch a new programming language now, because it's not in the training data. But the context lengths are big enough now that if you can get it into a test suite and fit the instructions, the examples, and how to use it in 10,000 tokens, it doesn't matter that it's not in the training data.

Bryan Cantrill:

Yeah. Ian, we've got you up here. I probably should have gotten to you; I don't know if you have any one years or three years, but you've got such a great track record that we look to you as our Nostradamus. Maybe you just strongly agree with me that vibe coding is going out of the lexicon, but I'll take that laugh. Adam, that laughter is noted.

Bryan Cantrill:

That's derisive laughter.

Ian Grunert:

I feel like the only way that vibe coding leaves the lexicon is if the older generation makes the term uncool, so the younger generation comes up with a new term that is cooler than vibe coding.

Adam Leventhal:

What he's saying is you have a big lever,

Bryan Cantrill:

I have a big lever. I've done this before, I know how to do it. It's like, come on, kids, isn't that hella cringe? And they're like, dad, dad, dad. Stop. Stop.

Ian Grunert:

Yeah. Just watch me vibe this up. I'm vibing right now.

Bryan Cantrill:

Yeah. That's right. I'm just like you guys. I'm just vibing this up. Like, okay.

Bryan Cantrill:

We need another term. We need another term for this guy.

Ian Grunert:

Don't kill my vibe.

Bryan Cantrill:

That's right. So

Ian Grunert:

I do have a few predictions. On the one year, I have demand outstripping supply for Waymo rides from San Francisco Airport, and the way I'll measure that is wait times greater than ten minutes.

Bryan Cantrill:

Yeah, interesting. That's a great prediction, because, Simon, you said this a couple years ago, that the absolute cheapest tourist attraction in San Francisco is a Waymo. Oh, yeah. So, like, wow.

Simon Willison:

Still is.

Simon Willison:

It's so good. It's $10. You get to go in a self-driving car. It's the best.

Bryan Cantrill:

Right. It's like, why wouldn't I wait ten minutes for a Waymo? I'm gonna wait ten minutes for the Pirates of the Caribbean. Why would I not wait?

Simon Willison:

I don't think it's worn off. For me, it hasn't worn off. I've been riding Waymos for a year and a half, and I still get that little frisson of glee when I get in a Waymo and it sets off on its own.

Bryan Cantrill:

Yeah. Well, and I actually saw, apparently it's a pretty tight cordon in the Mission where the Zoox are riding around. And yeah.

Steve Klabnik:

Austin too. Yeah.

Bryan Cantrill:

Yeah. And I was trying to get on the Zoox, you know, I'm on the Zoox waitlist. What it is, it's enticing. You're like, I actually wanna get in that. So, Ian, great prediction.

Bryan Cantrill:

Is that a one year prediction, Ian? Or what's the...

Ian Grunert:

Yeah, that's a one year prediction, because they should be launching rides from SFO for the general public this year. I have a second one year prediction.

Bryan Cantrill:

Yeah.

Ian Grunert:

So friend, as in friend.com: I think they will have under 10,000 activated devices at the end of the year, well under 10,000, but that's probably a conservative prediction. Where an activated device means someone has bought the thing and has actually sent at least one message to it.

Bryan Cantrill:

What is friend.com?

Steve Klabnik:

Oh my. Oh

Bryan Cantrill:

Okay, yeah, what? Go on.

Ian Grunert:

Okay. Bryan has not been to New York City this year.

Simon Willison:

Yeah. Yeah. Is that right?

Ian Grunert:

Oh. So they had an ad campaign, a very large...

Bryan Cantrill:

Oh, just before you explain it to me: Adam, I noticed you've been a little bit quiet. I think Adam also does not know what friend.com is, and he is relieved.

Adam Leventhal:

I know. I love...

Bryan Cantrill:

I love friend.com. Yeah. I use it the way that one conventionally uses it.

Adam Leventhal:

Just like normal. Just

Bryan Cantrill:

Normal friend.com. Just like normal, like all the other folks use it. Anyway, go on. I'll let them explain how we all use it together.

Adam Leventhal:

Tell father time here how you use this.

Bryan Cantrill:

Tell fuddy McDuddy-duddy here how all the rest of us actually use this. Yeah.

Adam Leventhal:

I was kidding.

Ian Grunert:

Well, this is true. We have a yes, yes, no, no on this one.

Bryan Cantrill:

We do have a yes, yes, no, no. Yeah. So tell me about friend.com.

Ian Grunert:

Yeah. So friend.com had a large subway ad presence this year in New York City, but also in Chicago, and I think they did a campaign in LA. The New York City ad campaign was not well received. Many of the advertisements were defaced by the New York City public, to the degree that I saw a picture of someone who went as the friend.com advertisement for Halloween: they printed up a sweater of the friend.com advertisement and handed out Sharpies so people could deface the Halloween costume, similar to the ads in the subway.

Bryan Cantrill:

Hey, you know what? I gotta hand it to you, New York. This is a very Bay Area thing you all are doing out there. You know, that's great.

Bryan Cantrill:

That is really terrific. Okay. So what is it? Is it attached to your face?

Ian Grunert:

It is an AI companion. It is a $129 pendant that has a microphone in it that connects to your phone, and it uses that microphone, which could have just been the microphone in your phone but isn't for some reason, to send messages to an AI companion, which can respond to you; I think it talks back through the phone. So it is kind of AI chatbot psychosis as a service, or something.

Simon Willison:

Right. Fuel rate.

Bryan Cantrill:

Alright, so this is in the vein of the Rabbit R1 or the Humane Pin; this is yet another AI wearable that sounds like it's destined for the same fate. I'm really sorry that I didn't get a chance to enjoy this whole ride. But thank you, Ian. So you say less than 10,000 devices. That's a three year prediction? Okay.

Ian Grunert:

That's a one year prediction, but yeah.

Bryan Cantrill:

One year prediction. Okay.

Ian Grunert:

They're not gonna get to 10,000. The three year would be that I'm pretty sure this company is going to flame out, but yeah, the one year is that this ad campaign does not really move the needle for them as a company.

Bryan Cantrill:

Oh my god, that ad campaign. And that's the kind of thing where, like, I'm basically a rule abider. And when I am tempted to deface things, it's like when I'm tempted to run over the security bots, those little ones that Samsung had that would run around and beep at you. I'm like, you know, the fact that I wanna throw you into the ditch means that this is bad news for you.

Adam Leventhal:

Well, Bryan, this is why I think your claim that this is a Bay Area thing is off. Bay Area people are rule followers to a much greater degree. Oh, yeah. Totally. This is a New York phenomenon.

Adam Leventhal:

Oh, yeah. Yeah.

Bryan Cantrill:

No, no. I love the rebellion here. And then, Ian, do you have some three years here?

Ian Grunert:

Yeah, so for the three year, I was thinking about the Windows 10 end of life and the claims of the year of the Linux desktop. And my three year prediction is kind of an anti-prediction on that, which is: Windows is still above 90% on the Steam hardware survey as of December 2028.

Bryan Cantrill:

Okay. That's it. And that's a good one, or a grim one, I'm not sure. Are you counting that as utopian or dystopian?

Bryan Cantrill:

I think it's

Ian Grunert:

Here's the thing. I think that Linux has gone from less than 1% to over 3% on the Steam hardware survey in the previous six years, driven largely in part by Steam first-party hardware, so the Steam Deck in particular, but also, you know, Linux usage in general has gone up. I think Linux usage is going to go up in the next three years, but I still think Windows is going to remain pretty dominant within that hardware survey. So it may go from 95 to, like, 92 or something, and Linux is gonna grow to about 5%, but I suspect the people who think that folks are going to go out and replace their Windows 10 devices with a Linux machine, or install Linux on their existing device to avoid buying a new one, are a little optimistic about how much work people want to put into their computing.

Bryan Cantrill:

I mean, can you imagine going back in a time machine and being like, oh, the year of the Linux desktop? We're gonna have computers writing software in production before we have that. Sorry. Although, I have tried to use ChatGPT, and LLMs more generally, on Linux audio problems, and what's interesting is that it's actually not that helpful. They tell you the things that, you know, whatever.

Bryan Cantrill:

It's Linux. Linux audio is still undefeated, is what I'd like to say.

Steve Klabnik:

Part of the real struggle here is the kernel-level anti-cheat, which is basically necessary for some genres of game and is just never going to happen on Linux. And so, I don't know, some of this is about the relative market size of those genres versus other ones. But there are some games... like, I will never not use Windows, because all the games I wanna play effectively require kernel-level anti-cheat to run, and so they're just not ever gonna work on Linux.

Bryan Cantrill:

Hey, Adam, you know this podcast has really, really arrived, because my 13 year old daughter is texting me predictions that she has during the episode. Wow. And, you know, I see a whole demographic, in some ways.

Bryan Cantrill:

The apple didn't fall far from the tree: she thinks there's gonna be a major scandal involving Apple in the next three years. And you know what? I didn't ask any follow-up questions. She also told me that she thought the OpenAI guy was gonna go to jail.

Bryan Cantrill:

And I'm like, Sam Altman? She's like, I don't know who that is. I'm like, that's the OpenAI guy; that's who you said is gonna go. So, sure. Sam Altman, if you're listening to this, please send us a cease and desist, because we have that as a goal for this year.

Bryan Cantrill:

Okay. Let's go on to six years. Are we ready for some six years here?

Adam Leventhal:

Yeah.

Bryan Cantrill:

Simon, what do you got for us?

Simon Willison:

I've just got the one. I think the job of being paid money to type code into a computer will go the same way as punching punch cards. Okay. In six years' time, I do not think anyone will be paid just to do the thing where you type the code.

Bryan Cantrill:

Just type the code. Okay.

Simon Willison:

I think software engineering will still be an enormous career. I just think the software engineers won't be spending multiple hours of their day in a text editor typing out syntax.

Bryan Cantrill:

It will look like punching cards.

Simon Willison:

I think so. Yeah.

Bryan Cantrill:

Yeah. Interesting. In six years. But software engineering still very much exists.

Simon Willison:

I believe so. I hope so. I very much hope so. Because I think the challenge of being a software engineer is not remembering what for loops look like; it is understanding what computers can do and how to turn fuzzy human requirements into actual working software, and that's what we're for.

Simon Willison:

And I think we'll still be doing that, just a lot more of it

Bryan Cantrill:

and at a lot more ambitious scale. And then, okay, does the software engineer still deal with code, though? I mean, code is being written.

Simon Willison:

I think they do. They probably look at it occasionally.

Bryan Cantrill:

Okay.

Simon Willison:

Only occasionally. Yeah. A little bit.

Bryan Cantrill:

Who debugs it?

Simon Willison:

I hate to say it. The agents debug it themselves.

Adam Leventhal:

Okay. Who debugs your device driver that either works or doesn't, with this?

Steve Klabnik:

Working on this programming language, I'm doing my own codegen, and Claude is happy to pull out GDB and just debug the programs that it generates, and why the binary is wrong, and then backfill that into why the compiler is wrong. Like, it's better than I am, frankly.

Steve Klabnik:

This is more about me than anything else, but, like, it's a thing that it can do now.

Simon Willison:

I mean, this is a really interesting thing I've been saying just in the past three months around coding agents: four months ago, I was absolutely on team you-cannot-commit-a-line-of-code-that-you've-not-read, reviewed, and understood that these things have written for you. That's just irresponsible. I'm edging away from that a little bit, because it turns out the art of using these effectively is to get them to prove to you that the thing they have written works.

Simon Willison:

The same way as when you're working in a company, you don't review every line of code written by another team that your team depends on. But you do talk to that team, and you make sure that they are making a convincing case to you that the code works well, and that they've tested it and covered the bases and so forth. It's a similar kind of thing, and it's so uncomfortable.

Bryan Cantrill:

It is. It is beginning to give me the early onset of what they call Deep Blue. But, I mean, you cheered me up at the end there: there is still a role for software engineers.

Bryan Cantrill:

Adam, do you have a six year?

Adam Leventhal:

Yeah. I have a couple. Dovetailing on your daughter's prediction, I predict that the cell phone business is drying up, because people are keeping their devices longer. So Apple has several new attempts at what the next flagship thing is gonna be.

Bryan Cantrill:

Oh, man. That's a good prediction.

Ian Grunert:

Oh, that's interesting. I have, like, almost the opposite prediction already written down here. I had phones remain the most popular form factor for personal computers in terms of units sold in the trailing twelve months.

Bryan Cantrill:

But I do think this longevity thing is a real, real issue. I mean, you've already begun to see this, where people are like, why am I getting the latest iPhone again? The camera's already awesome, and I actually care more about battery life, about whether it's waterproof; I care about other things. So alright. Adam, I guess, does this happen after the major scandal in the next three years?

Bryan Cantrill:

I don't know.

Adam Leventhal:

Terrific. It must be on the heels of that scandal. Yes.

Bryan Cantrill:

Or maybe this is somehow wrapped up in the scandal. Maybe the scandal is that they're scandalously entering a new business, or what have you.

Simon Willison:

That's right.

Bryan Cantrill:

No. I think that... Apple's got a ton of capital. So they could go, you know, they could

Adam Leventhal:

They could do a bunch more Apple Vision Pros.

Bryan Cantrill:

Yeah. Well, so in the end, do you feel, because your prediction is on devices sold, that the phones are gonna still find ways to differentiate? Or?

Ian Grunert:

I kind of have the opposite view, in that I think phone sales may not go up, but they're still just going to dominate in terms of units sold, and there's no other form factor that has emerged that is more popular as a personal computing device.

Adam Leventhal:

Yeah, I don't think those are incompatible, Ian. You know, phone sales going down, it could still be the most popular form factor, and folks, Apple in particular, could be desperately trying to figure out what the next thing is gonna be. Okay.

Simon Willison:

Could I tag a prediction onto that? Which is that if phones are not the most popular form factor, I think it's gonna be a Neuralink device of some sort.

Bryan Cantrill:

Oh, here we go. Neuralink in six years. Is this how...

Simon Willison:

No. I don't think it's gonna happen. Okay. But if phones Okay. If phones.

Bryan Cantrill:

If not phones.

Simon Willison:

If not phones, it has to be that. Noted. Because all of the other form factors, the little bracelets and things you talk to, that's all garbage. Nobody wants to talk out loud to

Ian Grunert:

their computer and Right. Right.

Simon Willison:

But if you can think to your computer in public, that's the thing that could knock the phone off its pedestal.

Bryan Cantrill:

And it will be the leadership of the pope, of the papacy, leading the way with the neural implant. Okay. Interesting.

Steve Klabnik:

There's a cursed prediction that's a mixture of all of these, which is, of course, Apple acquires friend.com.

Bryan Cantrill:

And it's less than 10,000 devices.

Ian Grunert:

I have a second device prediction for six years, which is: more Macs sold in the trailing twelve months than any smart glasses or AI companion devices.

Bryan Cantrill:

This is in the trailing twelve months of the six years. So at five years in, you've got more Macs than anything else? Yeah.

Ian Grunert:

So it's like, when the six years is up, we look back at the previous twelve months.

Bryan Cantrill:

It's like, hey, it's all laptops. It's laptops and phones. It's the same.

Ian Grunert:

Yeah, I'm saying laptops, more specifically Macs; so it's not actually laptops, it's the Mac line, because I think that's the only thing you can get a rough number of units on. I think that more of those are going to get sold than any smart glasses or AI companion devices. And I'm saying Macs specifically; laptops overall is definitely going to be bigger than Macs. I'm saying that these smart glasses and AI companion devices are just not a real volume seller at all.

Bryan Cantrill:

Yeah. Agree.

Ian Grunert:

Totally agree. Like, to a real degree.

Bryan Cantrill:

Yeah, I totally agree with that. So I'm gonna say that the DSM adds LLMs as a contributing factor to psychosis, the same way the DSM treats, kind of, like, cocaine. Uh-huh. Where you can have...

Steve Klabnik:

Like a lot of things in the early days of the profession that then got looked back on as a mistake to have included?

Bryan Cantrill:

Well, no. Because, I mean, you mentioned the Lobsters issue earlier. I think that we are going to have an increasing number of incidents of LLMs resulting in psychotic behavior.

Simon Willison:

Okay. Has the DSM got anything about social media in it right now?

Bryan Cantrill:

So right now they do have, like, internet gaming, for example. Okay. But I think this is gonna move faster than internet gaming, because gaming is looking more at social isolation and some modicum of dependency, versus, no, the LLM got you to do something that you would not have otherwise done. That you had this delusion. I see.

Bryan Cantrill:

That your mother was involved in a global conspiracy, and you burned down your...

Simon Willison:

You're betting against the AI labs being able to tamp this stuff down, which I think is

Bryan Cantrill:

a fair bet. Well, it's more that I'm just betting on crazy, in that I think there's no amount of safety that you can put in place that allows these things to be used and not... I'm not sure they will be liable. I think it's gonna be more about diagnosticians being aware of, like, hey, if you're talking to a patient, do they have this kind of idea because of the LLM? Have they been having conversations with their LLM about this? I mean...

Simon Willison:

it feels like we need this today.

Bryan Cantrill:

Oh no, I think we do. The reason I was saying earlier, at the top, that I was struggling with six year predictions: the DSM moves slowly. That's why this is a six year prediction and not a one year prediction.

Adam Leventhal:

And this is well beyond Deep Blue at this point.

Bryan Cantrill:

This is well beyond Deep Blue. That's exactly... well, no, because this is not like a feeling of ennui.

Simon Willison:

Got it. It's delusion.

Bryan Cantrill:

It's delusion. It is delusion. It is a psychosis thing. And again, we have already seen this, and I think we will continue to; it's an accelerant.

Bryan Cantrill:

It's like substance abuse. You've got people that can use substances without actually developing this kind of psychosis, and then others that develop a real psychosis around it, and I think that we'll see the DSM become aware of that. I think you will also have, actually in three years but certainly in six, people trying to use as a legal defense: the LLM made me do it. I'm not blaming the actual frontier model; it's the LLM that ginned me up and talked me into doing this illegal act, whatever it might be.

Ian Grunert:

Are they also gonna use it for stock buys? Are they gonna be like, the LLM told me to buy the stock, I didn't use any insider information to trade on it?

Bryan Cantrill:

Absolutely. Absolutely. This is, you know, "the kitty did it," when you've got toddlers: everyone blaming the LLM. No, absolutely.

Bryan Cantrill:

The LLM told me to buy the stock. Oh, actually, shoot, I forgot one of my three years. I do think ads are going to enter LLMs, and I think it's gonna be an issue.

Adam Leventhal:

Like product placement? Be like, you know, what would go great with this recipe is a Coca-Cola.

Bryan Cantrill:

I think product placement, where you're either putting your thumb on the scale of the output or getting more of the input. Because, I mean, think about the view these chatbots have on the kinds of questions we're asking, and boy, if you were in marketing, or you were developing a product, wouldn't you love to know what people are searching for? It feels like that's something you would pay for, and something that, you know, I think these guys will sell to you, post the AI bust that I'm predicting roughly in three years. So, you know, all my predictions do try to hang together.

Simon Willison:

ChatGPT knows when you're pregnant, because you tell it.

Bryan Cantrill:

Yes, absolutely. Absolutely. The old adage of, I think it was Target, right, that famously knew...

Simon Willison:

Famously, apparently that wasn't real, the Target-guessed-someone-was-pregnant-from-their-purchasing-habits thing. Apparently it doesn't hold up.

Bryan Cantrill:

Yes, that's a relief, because that kind of didn't pass the smell test at the time.

Ian Grunert:

Are you saying that the ChatGPT equivalents are going to integrate ads as a first party, or are you taking a, like, SEO black hat view, where people are going to work out how to get their data into the training data such that when someone asks what the best laundry detergent is, the model will spit back: oh, it's definitely Tide, and you should not use any other brand?

Bryan Cantrill:

I was not predicting the latter, but I think the latter is a great prediction, so I strongly concur with it. But I think there are gonna be other kinds of commercial vectors here, and ultimately it's gonna be ads at some level; it's gonna be getting you to buy product. Adam, did you have other six years?

Adam Leventhal:

Yes. I think you're gonna like this one too, even though it sounds insane as I read it. I think Tesla is gonna be out of the consumer car business. I think they're gonna be selling batteries, I think they're gonna be selling fleets, but I think that they are not gonna be selling to individuals.

Adam Leventhal:

And their numbers are, like, down year over year for the last two or three years, and I think that's gonna continue.

Bryan Cantrill:

Do they sell whatever the plural of Optimus is? Is that Optimi? What is the plural of Optimus? The Tesla bots. Does that ever come to fruition?

Bryan Cantrill:

Is that what they sell?

Adam Leventhal:

Oh, sure. Yes. It's bots. Yes. It's their friend.com.

Bryan Cantrill:

It's their friend.com.

Adam Leventhal:

Okay. Yeah.

Bryan Cantrill:

Yeah. Well, I love this prediction, obviously.

Adam Leventhal:

But batteries, I mean, batteries are already a big part of their business, and arguably the cars are batteries. And then fleets.

Bryan Cantrill:

And fleets. Okay. So they are out of the consumer car business. Yep. I do love that one.

Bryan Cantrill:

I'm gonna add that, in six years, we will see that NVIDIA's peak valuation was in 2025.

Bryan Cantrill:

So I think we are past peak NVIDIA. This is not investment advice, although this one, if you think it is investment advice and you act on it, could you please send us a cease and desist? We'd appreciate it. Yeah. Exactly.

Steve Klabnik:

Vlad, if you're listening to this, please put all your money into shorting NVIDIA.

Bryan Cantrill:

That's right. And this is not a slight on NVIDIA. I just think that the valuation is simply too high, and there's too much competition. I mean, we talked about Gemini last year, and Gemini is not trained on NVIDIA GPUs. I just think there's too much out there.

Bryan Cantrill:

There are too many headwinds, ultimately, for that valuation. I think it's absolutely a going concern, and a well-executing business. But

Adam Leventhal:

This dovetails into one of my predictions too; maybe it justifies it. But I say in six years, Jensen hands over the reins at NVIDIA to a successor CEO. Maybe on the back of the dwindling stock.

Bryan Cantrill:

And is that CEO Pat Gelsinger?

Adam Leventhal:

No. I think he's focused on his faith-based startup.

Bryan Cantrill:

His faith-based LLM startup. Yeah.

Ian Grunert:

I mean, he'll be like 68 or 69.

Adam Leventhal:

Jensen. Yeah.

Ian Grunert:

Is that right?

Bryan Cantrill:

Yeah. Yeah.

Ian Grunert:

Yeah. I mean, an almost-70-year-old man who has infinite wealth deciding to retire does not seem... Sure.

Adam Leventhal:

Okay. Bet against it. That's fine. But look at Morris Chang, who at age, I don't even know, is still going strong.

Simon Willison:

Mhmm. Or Larry Ellison.

Bryan Cantrill:

Yeah. I was gonna go Pierre Lamond, but, yeah, Larry Ellison, fine. Sorry, this podcast. Alright.

Bryan Cantrill:

Steve, do you have any six-years?

Steve Klabnik:

My six-year is boring, but it's funny because it shouldn't be boring, and yet it is. It's that AI will not have caused the total collapse of our economic and governmental systems.

Ian Grunert:

Like

Bryan Cantrill:

You know, that's a very optimistic prediction. That's great.

Steve Klabnik:

Yeah. I'm choosing to be optimistic here, I think. I mean, there are some ways in which that could be pessimism and not optimism, but I'm gonna say

Bryan Cantrill:

that, like, humanity could be okay.

Ian Grunert:

You didn't predict that economic collapse wouldn't happen. You specifically said that it won't have been caused by

Steve Klabnik:

AI. Yes. Correct. Yeah. Yeah.

Steve Klabnik:

I think we're gonna figure it out, and I think that a lot of the anxiety and worry about it right now is just anxiety and worry, and humanity is resilient. Change is gonna happen, but we'll be okay.

Bryan Cantrill:

It's gonna be fine. And this is the affirmation tape that you listen to when you're beginning to suffer from Deep Blue. This is the one that Steve Klabnik reads; you know, you put your headset on as you're going to sleep. And

Adam Leventhal:

I had

Steve Klabnik:

a very optimistic 2025, and so I think I'm gonna try to continue that into the future. We'll

Bryan Cantrill:

see. That is great. Adam, do you have any other six-years, or are we gonna end on the optimistic note?

Adam Leventhal:

Let's end on the optimistic note.

Bryan Cantrill:

Translation: I do have another six-year, but it's way too grim. Well, good. I think a common theme from this year, I would say, is LLMs really transitioning into a useful tool in the hands of practitioners. That, and the demise of friend.com, I would say, are the two big themes.

Simon Willison:

And the rise of the capital.

Bryan Cantrill:

Absolutely. I'm gonna go check out the parrot. If I learn that the parrot is vibe coding, though, I'm gonna be very upset, because that's gonna run contrary to my one-year prediction. Alright.

Bryan Cantrill:

Well, this has been great. Thank you all for joining us. And on predictions: Mike Cafarella joined us last year. He could not join us this year, but he sent me some of his predictions, so I'm gonna drop those into the chat so we've got those on the record.

Bryan Cantrill:

If you do have any predictions, get those on the record; we'll have PRs open as well, so you can get PRs in there. But thank you all for your predictions. We've said before that predictions tell us much more about the present, we think, than about the future. But I don't know, maybe this year is the exception, and we're gonna learn a lot more about the future.

Bryan Cantrill:

I do think Deep Blue has got... I mean, that

Steve Klabnik:

It's still

Bryan Cantrill:

It's very good, Adam. I mean, it's really

Adam Leventhal:

Yeah. If people have predictions, whether you're listening live right now or on YouTube or on the podcast, go to the show notes on GitHub and drop your predictions in; it'll give us an opportunity to review them in one, three, and six years. So feel free to submit a PR.

Bryan Cantrill:

Awesome. Thanks, everybody, and here's to a great and hopeful 2026. Go check out the parrots.