TBPN

This is our full interview with OpenAI CEO Sam Altman, recorded live on TBPN.

We discuss Anthropic's Super Bowl ads, Codex 5.3, why managing AI agents is the next interface shift, and how chips, power, and compute bottlenecks will shape the future of AI.

TBPN is a live tech talk show hosted by John Coogan and Jordi Hays, streaming weekdays from 11–2 PT on X and YouTube, with full episodes posted to podcast platforms immediately after. 

Described by The New York Times as “Silicon Valley’s newest obsession,” TBPN has recently featured Mark Zuckerberg, Sam Altman, Mark Cuban, and Satya Nadella.


Speaker 1:

Well, without further ado, our first guest of the show.

Speaker 2:

Sam Altman, the CEO of OpenAI. He's in the Restream waiting room. Welcome to the show, Sam. How are you doing?

Speaker 3:

Welcome back. Good. Thank you guys for having me back.

Speaker 2:

Thanks so much. Big day. Big day. Kick us off. Where should we start? Should we start with the model or Frontier?

Speaker 3:

Can we start with the model, just because I'm... Let's do it, Jonathan.

Speaker 2:

Yeah. Absolutely. Break it down. What'd you launch? What was the launch?

Speaker 3:

Codex 5.3. It is, I think, the best coding model in the world. We took a lot of the feedback that people had about 5.2 and 5.2 Codex and got it into one model. It is much smarter at programming, but it's also way faster. You can interact with it mid-turn.

Speaker 3:

I think it's got a much better personality. It's really good at computer use. So it feels like a very big step forward. It was funny, as we were deploying it this morning, a couple of extremely expert users of these models noticed and said, man, something's really different with Codex, and they, like, caught it mid-deploy.

Speaker 3:

So I think you can really feel it quickly.

Speaker 2:

That's great.

Speaker 1:

Oh, you're saying people outside of OpenAI, just everyday users?

Speaker 3:

Yeah. In, like, the set of hours that we put it out before we announced it, you know?

Speaker 2:

Talk about interacting with it mid-turn. How does that work? Why is that important? What does that unlock?

Speaker 3:

So people are starting to use these tools for very long pieces of work at one time, you know, multi-hour tasks. And sometimes you don't specify it correctly. Right? Sometimes something's not set up right. Sometimes someone just screws up.

Speaker 3:

They can do amazing things with no steering, but they can do much more amazing things if you steer them along the way. This is one of the things that felt most new about this model.

Speaker 2:

So talk about orchestration and how this fits into Frontier, because I

Speaker 1:

One second. It's notable, like, if you see a coworker making a mistake and you don't interrupt them, that's rude. Right? Like, it's deeply inefficient.

Speaker 3:

It is incredible what these models can do without any feedback. Like, if you think about a new coworker, especially, you know, you train them and you give them a lot of feedback early on, and they learn the job and you correct them and they kind of get practice in. The models will soon do that, but right now they don't. So we just rely on either they get it right one-shot, or we correct them all along the way.

Speaker 2:

Yeah. I think there's a lot of people that are running multiple agents and multiple tabs. They're starting to think about orchestration. Feels like Frontier is a piece of that. But if you're interacting with a model that's running mid turn, like, does the user experience change for developers with 5.3, and then what will it look like in the frontier world?

Speaker 3:

I think we will be heading towards a workflow where a lot of people just feel like they're managing a team of agents. And as the agents get better, they'll keep operating at a higher and higher level of abstraction, which, at least watching what's happening so far, is a jump that people are gonna make pretty well. Mhmm. The models are so good now. There's such a capability overhang that building better tools to let people do that, which the Codex app that we launched on Monday was a great step forward for, will be very, very important.

Speaker 3:

But you will be managing very complex workflows. The agents will keep getting better, so you'll keep working at the maximum of your management bandwidth or cognitive ability to keep track of all the stuff. And the tools to make that easy to do will matter, I think, more than intelligence for a little while, because there's such an intelligence overhang already.

Speaker 2:

Mhmm. What's the role of a forward deployed engineer at OpenAI today, towards the end of the year? That capability overhang feels like raw meat for a forward deployed engineer. Like, they solve that problem. Right?

Speaker 2:

Yeah.

Speaker 3:

I mean, look, eventually the models will get so good that they'll help companies deploy themselves, and the forward deployed engineers will again get to work at a higher level of abstraction. But for now, you go into a company that is not AI native, and they say they want to deploy AI, but they really are not sure what to do. How do I hook this up to my systems? Do I need to fine-tune a model on my code base?

Speaker 3:

Yeah. How do I think about orchestrating agents and using things from different companies? Most of all, at least what we hear most frequently is, how do I think about security of my data? How do I know that these, like, AI coworking agents are not going to go access a bunch of information and share it in ways they shouldn't, or get, you know, a context exploit or something like that? So the forward deployed engineers take this incredible new technology and a platform like Frontier and say, we will connect your company to an AI platform so that you can use all these agents and workflows and everything else you want.

Speaker 2:

How important are these metaphors, or how temporary are they? I was very interested in reading about Gas Town, and you have these polecats, and it's this whole Mad Max world. And that feels like maybe just a temporary aberration where you're setting up agents for specific tasks, but it also could be incredibly valuable in explaining to a large corporation how they're gonna integrate AI across the whole organization.

Speaker 3:

Yeah. I suspect, like everything else that's happening when an industry is moving so fast, all of this is somewhat temporary Sure. on, like, a long enough time scale. And as these models become more capable, these agents are operating on very long time horizons with the ability to just kind of figure it out, and our trust in their robustness keeps going up, then maybe you don't need a lot of the abstractions we need today. You know, maybe you just, like, have a single AI bot that runs at your company. And you can say, hey.

Speaker 3:

I wanna, like, launch this new product, and it does everything an ambitious person would do. But that's not where we are today. So today, we have to

Speaker 2:

Yeah.

Speaker 3:

Put in a little more work to get the pieces put together.

Speaker 2:

Yeah. How are you thinking about the METR benchmark for long task horizons? You're at the top of the charts. At the same time, it feels like we might need a new chart if we're talking about agent swarms, because they'll be able to do things that go for weeks, but they will subdivide the work. Yes.

Speaker 2:

There's some subdivision that happens within a reasoning model, but it doesn't truly parallelize, at least that I'm aware of. So what does it look like in a world where you go to a model, but now it's spinning up a whole bunch of different models underneath?

Speaker 3:

I think two of the key insights of the whole field are in this question. Number one, the implication that no chart in AI lasts more than a few years is right. And like this one, you know, we'll see how much longer it's really useful. Yeah. The second is, a lot of people thought, okay.

Speaker 3:

We're gonna have super long tasks and a super long task horizon, so we need super long context. And definitely what people have already seen with coding agents is that by agents breaking up work, orchestrating it well, farming it off to sub-agents, even with the current limitations of the technology, we can do something, which should not be surprising, because it's similar to how people do things and get amazing amounts of work done. So that's been cool to watch. I think that will keep going.

Speaker 3:

A joke that some people at OpenAI make is that soon the chart that matters is just gonna be GDP impact. Yeah. And then the question is, what's the one that comes after that? But a lot of these other proxy metrics, there's now so much economic value in what the models are doing.

Speaker 1:

What do you think could come after?

Speaker 3:

I have no idea. Do you have an opinion? Happiness?

Speaker 1:

I don't know. When you look back at some of your blog posts from ten years ago, your predictions were usually pretty on point. Yeah. Maybe it's hard to predict.

Speaker 3:

Thank you. Thank you.

Speaker 1:

The merch. The merch. Like, basically, right now we're getting, you know, thousands of messages in the chat about 4o, and you predicted in 2016 that people would become, you know, very attached to a chatbot.

Speaker 3:

Yeah. I'm working on, like, a big prediction blog post. The next ten years seems too far, but the next five

Speaker 2:

Sure.

Speaker 3:

But it's like, I'm sure a lot of it will be wrong, and, you know, it's still fun to try. The sort of relationships with chatbots, clearly that's something that we gotta worry about more, and it is no longer an abstract concept. Even the question of what comes after GDP, like, one reason I think that's interesting is the way we measure GDP now could start going down even though quality of life goes way up. Sure. And we don't have a lot of practice with things like that, but the massively deflationary Europe.

Speaker 3:

Not just Europe. We want quality of life going up too. Yeah. Not just prices going down, you know?

Speaker 2:

Yeah. Switching gears, what do you think about the neolab boom, the research efforts that are happening all over Silicon Valley? It feels like there's an acknowledgment that there are research breakthroughs that need to happen, and everyone's taking different shots at those. Do you think that those companies will just find a breakthrough and join a lab, or launch their own products?

Speaker 3:

Fantastic, first of all. Yeah. One of the meta things that we wanted to do when we started OpenAI is, like, there had been a period where the technology industry, and Silicon Valley in particular, was amazing at new research labs, or just doing new research in industry in general. Mhmm.

Speaker 3:

And then it kind of fell apart and there hadn't been a good one in a while. And part of what we hope to show, and this was not only us, like a lot of people were excited about new research labs Mhmm. Is that industry could do research again. Mhmm. So, seeing that now become like fashionable in all of these new labs, I think it's totally awesome.

Speaker 3:

Some will succeed, some will fail, some will kinda go into some other effort. But, having industry support research in, you know, startup style, I think it's wonderful.

Speaker 1:

Mhmm. Over the next two years, would you expect to acquire more individual product companies or more research labs?

Speaker 3:

Good question. I don't have a strong opinion there. I would bet... well, I'd say the very best ones will often look like a mixture of both. Sure.

Speaker 3:

The one that I have in mind right now is something that very much looks like a Yeah. mixture of both. So maybe the shape of things to come is that the really truly extraordinary product work will have more and more of a research component. Yeah. And it'll be kind of more of a hybrid thing.

Speaker 2:

Is data the new oil? Is there value?

Speaker 1:

Yeah. We were joking a couple days ago.

Speaker 2:

A bunch of data, but they don't understand AI. They don't know how to monetize it.

Speaker 1:

Well, and that phrase was effectively wasted a decade ago. Yeah. And so to say it now sounds really silly. Yeah. But it feels like it could be more true now than ever.

Speaker 2:

Yeah.

Speaker 3:

You know, certainly yeah. Man, they really did waste it a decade ago. I was just thinking of the kind of people that used to say that and

Speaker 1:

Yeah.

Speaker 3:

came out of it.

Speaker 1:

TED Talks.

Speaker 2:

You're not supposed to call them out. They're nice people over at TED.

Speaker 3:

You know, definitely, like, the sort of magic relationship of these last eight years, whatever you wanna call it, has been, you know, that we can put in more and more resources, compute, data, new ideas, whatever, into creating an artifact, and it gets, like, the log of that better. Mhmm. That's why we have this huge exponential increase in resources, but we keep getting better and better models. And for all of the concern people have about it's gonna, like, top out, or it's slowing down, or whatever, like, no one's been right about that. I mean, sometimes it looked like they were for a couple of months as we digested a new model or came to a new form factor, but it has been incredibly smooth these last six or eight years.

Speaker 3:

There can be some trade-off between what those resources are. You know, sometimes it's better to spend your money on better data, sometimes on more compute, sometimes something else. On the whole, "compute power is the new oil" is the statement that feels closest to true to me.

Speaker 2:

Mhmm.

Speaker 3:

But there will be other parts too.

Speaker 2:

Makes sense. Is software dead?

Speaker 3:

It's different. It's definitely not dead. But what software is, like, how you create it, how you're gonna use it, how much you're gonna have written for you each time you need it versus how much you'll want sort of a consistent UX.

Speaker 2:

Yeah.

Speaker 3:

That's all gonna change. You know, there have been a number of these, like, big sell-offs of SaaS stocks over the last few years as these models have rolled out. I expect there will continue to be more. I expect there will be big booms in software. I think it's just gonna be volatile for a while

Speaker 2:

Mhmm.

Speaker 3:

As people figure out what this looks like. The statement someone said to me that has stuck in my mind most these last couple of weeks is that every company is an API company now, whether they wanna be or not. Oh, yeah. Because agents are just going to be able to

Speaker 2:

Write a CS Yeah.

Speaker 1:

We had Dara from Uber on yesterday, and he had a pretty refreshing kind of approach. We were asking about integrating agents with Uber, and he recognized that, yeah, the ad business could potentially be threatened if you can order an Uber in ChatGPT. But he basically said, like, you have to think of the consumer. Yeah. The consumer wants to order an Uber via their preferred agent.

Speaker 1:

You should let them. Otherwise, you're gonna have other problems.

Speaker 3:

Yeah. And that is the right take, for sure. Or I think so, at least. Yeah. And, you know, we've been through platform shifts like this before. I mean, Uber wouldn't have existed without one.

Speaker 3:

It wasn't until the iPhone that it made sense to order an Uber right to where you were

Speaker 2:

Yeah.

Speaker 3:

As you're out in the world. So I think there will be totally new things that happen. Other things you'll use in new ways. But definitely, as I've started using Codex, I'm excited about having agents go off and do things for me and still use other services, pay other services. Surely we'll have to figure out new business models and Mhmm. how revenue gets shared around, but that will happen.

Speaker 2:

Yeah. Talk more about Codex desktop.

Speaker 1:

One more question on SaaS. Have any public market SaaS companies tried to get a soft landing with OpenAI? And do you think there's any value there?

Speaker 3:

Not that I'm aware of. No public SaaS companies that I'm aware of have tried that Okay. with OpenAI. Look, I think some of them

Speaker 1:

some of them will certainly be durable, and are on sale right now, and potentially just need new energy, kind of an entirely new approach, and maybe OpenAI could provide that.

Speaker 3:

Yeah. Yeah. I think some will be incredibly valuable. Some do feel like a thinner layer now. But I don't know.

Speaker 3:

Like, I was talking recently to a bunch of SaaS companies, and they do not feel unexcited. Like, they're like, we're gonna go through a big transformation here. And, you know, yeah, sure, other people can instantly write software now, but so can we. And we've got a great system of record. Seems reasonable. Someone will make it, of course.

Speaker 2:

Yeah. Talk more about the Codex desktop rollout. It feels like, you know, a successful amount of downloads, but, key, like, a shift for people who are maybe lightly technical but don't have time to set up an IDE and configure an environment to actually start writing software.

Speaker 2:

I wanna know about plans to integrate it with the phone. That was a big moment, I think, for a lot of people with the ClawdBot, MoltBot, OpenClaw thing. It was like, oh, I can text something, and it will go and write code, and that's valuable, and that unlocks a new agentic experience. Like, where do you see the Codex desktop ecosystem going?

Speaker 3:

Codex desktop has been somewhat of a surprise to me in terms of how much people love it, including how much I love it myself. You know, I think it's a great example of how 10% of polish on the experience of using these models, especially when there's so much capability overhang, goes an extremely long way in what you can build and how you interact with this stuff. Of course, we should have the ability to kick off new tasks from mobile, and we'll do that. I mean, really, what you want is, like, your single AI that's working for you on a unified back end, with access to all of your data and your ideas and your stuff and your memory, all the context, and the ability to work across a lot of surfaces. And often, you'll be at your desktop.

Speaker 3:

Often, you'll be on your phone and you just wanna add something in. But it is a pretty profound shift in my own workflow, not just for coding tasks, but more general purpose tasks. It's still kinda hard to use if you're not at least reasonably technical, but obviously, we'll find a version of this product that can do other knowledge work tasks, control your computer, and things like that, where you don't have to be. And it'll bring the magic of building stuff, really, to a lot of people, because even if you never look at code, you'll be able to build something reasonably sophisticated. One of the things that I built when I was playing around with the new Codex app is this thing I had always wanted.

Speaker 3:

Just, like, this magic auto-completing to-do list. Oh, yeah. I really work with to-do lists, and this idea that I could just put tasks in and it would try to go do them. If it could complete them, it would complete them. If it needed questions, it would ask me questions.

Speaker 3:

If I had to do something, I could still do it the old fashioned way. But an interface like that, where, you know, all the stuff you wanna do, you just sort of explain to a computer or your AI, and it tries to go off and do it. And sure, if you're on your phone, you're gonna just add a task on your phone, or, you know, if you wanna easily import something from email, you're gonna do that. Like, it feels really good. So I'm excited about all of the ways that this will just become a general knowledge work agent.

Speaker 1:

Yeah. Were you unsurprised to see a product like OpenClaw come from open source? I would imagine this is something that you knew would be a thing, and yet I think part of the magic of OpenClaw is that it would be very, very difficult for a large tech company

Speaker 2:

didn't make many phone calls to hyperscalers to say, hey, I'm gonna be integrating with your API. It just went out.

Speaker 1:

And you guys, you know, when I think back on, like, the Sky acquisition Yeah. this kind of experience was probably very top of mind and something you're working towards internally.

Speaker 3:

I love the spirit of everything about OpenClaw.

Speaker 2:

Yeah.

Speaker 3:

And you are totally right that it's much easier to imagine a one-person open source project doing something like that than a company who is gonna be afraid of lawsuits and data privacy and everything else. You know, I think this is kinda how innovation works. Something like that starts. It's clearly amazing. There will be a way to make a mass market version of that product, but letting the builders build, letting the equivalent of the Homebrew Computer Club spirit go here, is so important.

Speaker 2:

Yeah. Totally. Can we switch to social? I feel like if I Google Sam Altman social, I get pure AI in Sora, and then also demand for, or predictions about, a human-only social network. Where do you see social going broadly?

Speaker 2:

How do you wanna integrate with it and power it in the future?

Speaker 3:

The Moltbook thing was, like, a very interesting social experiment to watch, and I think it points to agents interacting in some sort of social space, hopefully on behalf of people, at least to some degree, could be quite interesting. I don't think we know what to do there yet, but it feels like social is gonna change

Speaker 2:

Mhmm.

Speaker 3:

A lot. And I am interested in the space of what a social experience can look like when your agent is talking to my agent and coming up with new stuff. Clearly, putting, like, a lot of AI bots on the existing social platforms is just making everyone crazy and not that fun. So that's not the right answer. Yeah.

Speaker 3:

But I think we can design something new for what this technology is capable of that will feel good and useful.

Speaker 2:

Yeah. Is there a solution to the bot problem that's just all the labs sort of integrating with all the other platforms? And even if you can't detect it's AI generated, you can literally say, we just generated those tokens. Like, those exact tokens are in our database.

Speaker 3:

Can't do that, because the open source models are, like, good enough to write posts.

Speaker 2:

Yeah.

Speaker 3:

Yeah. I am excited about, sort of, like, assertion of humanity Mhmm. instead of, like, detection of AI as the thing here. I don't know if it's in the social platforms' interest to solve this, because, at least in the short term, it creates, like, a lot of engagement and increased usage.

Speaker 3:

So I believe they could solve it if they wanted to. I'm not sure it's in their interest. I don't like it, but some people do seem to enjoy it.

Speaker 2:

Yeah. Can you talk a little bit more about where Sora, as a video generation model, is going? It feels like tool use is maybe under-discussed, you know, adding reasoning. It's not just the diffusion model. It's giving

Speaker 3:

Yeah.

Speaker 2:

These models the ability to make linear cuts and overlay motion graphics. And when I scroll the Instagram Reels that I see, they're, like, vibe reels with cutouts, and it flips negative, and it's all color graded, stuff that, like, you could probably diffuse it all, but it's pretty cool just to teach a model to also use After Effects or whatever motion graphics suite you wanna use. Is that an interesting unlock? What do you see going

Speaker 3:

So all of that stuff will happen. I agree with you. The models will get really great at doing that. People love generating videos. Sure. I would say we have not yet found a way that people really love watching other people's videos.

Speaker 3:

Mhmm. This is true for a lot of other AI. Like, you know, people love talking to ChatGPT or whatever. It's not that compelling for most people to, like, read other people's ChatGPT generations. So I think there is something

Speaker 1:

But isn't that for all writing and all video?

Speaker 3:

Yeah. It seems stronger to me in this case than the general case, but maybe you're right. Maybe this

Speaker 1:

is Yeah. But if somebody says, hey, I generated a fifteen-minute video, I'm really excited for you to watch it, and you watch the first ten seconds and you're not that captured by it, I don't care that it was human made.

Speaker 1:

Yeah.

Speaker 3:

You're right. And this is not a special case.

Speaker 2:

Do you see that in the data with Sora downloads? Because I've noticed that I'll generate stuff on Sora, download it, and share it to a group chat, and then it's this little in-joke that me and five other people get. And we see these, like, family group chats of, oh, it's our dog and our kids. But there's not really, like, okay, this is a business.

Speaker 2:

You know, everyone likes this.

Speaker 3:

Absolutely. I would say that the most common use case is something like that. Sure. You know, like, memes in group chats is a real killer use case of sorts.

Speaker 2:

How is the Disney rollout going? I was super excited about it. Jordy was extremely bullish on it from a business strategy perspective.

Speaker 1:

When you look at how image models have grown various LLMs historically, now you're gonna have an image and video model that can do something that no other LLM can do, at least legally.

Speaker 2:

Yeah. And is Bob Iger joining OpenAI?

Speaker 3:

Bob Iger, I love it. No. No current plans for that.

Speaker 2:

Because he's gonna be looking for a job, and he's a free agent.

Speaker 3:

Go pick him up. Hit him up for us, you know, do some recruiting. That'd be great. I think that generating characters in images and videos is going to be very important to people, and they really like that. Yeah.

Speaker 3:

Like we were saying otherwise, I don't think many people, like, wanna watch me and some Star Wars character doing something together, but I might think it's cool. Yeah. You know, there's, like, a real trend going on right now with ChatGPT where it's, make a caricature of me and my job based off of everything you know about me. Yep. And those kinds of things, people actually do like looking at other people's Yeah. media.

Speaker 2:

Yeah. It's almost like a face filter or something. It's like the Studio Ghibli moment. There's enough of the human still in there that yours is not the same as mine, so it's still personalized.

Speaker 3:

It's personalized, and it says something about you. And, you know, a lot of what's gone viral before with ImageGen, I think it's, like, if you can make people look a little bit more attractive or cool than they are in real life. Yeah. Without sort of having to ask for that.

Speaker 2:

Yeah. How are you thinking about the actual rollout? We were debating between, like, open the floodgates, you can generate any Disney property, versus, like, one character at a time. It's Spider-Man week, and everyone's posting Spider-Man, and then it's, you know, Mickey Mouse week, and there's another viral moment.

Speaker 3:

I'm not sure what the team is planning there. I know Disney's had some different opinions about what they wanna do, and we try to be a good partner there, but I'd be excited to open the floodgates, personally.

Speaker 2:

Oh, that'd be fun. Cool. Speaking of video, talk about your first Super Bowl ad. It felt like it was not generated: lots of motion graphics, the black dots coming together. Like, what was the goal with that ad?

Speaker 2:

Who were you trying to speak to? It didn't feel like a direct response, QR code, download-the-app ad. What was the mission?

Speaker 3:

I love that ad. I think that was such a cool one. It was clearly not meant to be like a mass market or direct response ad. Yeah. But, you know, speaking to the, like, people who are at the center of this revolution and just trying to, like, celebrate everything that has come before and everything that will come after.

Speaker 2:

Yeah.

Speaker 3:

We didn't hear a lot about it from, like, average users of ChatGPT, but we heard a lot about it from, like, researchers in the field, and it had a lot of resonance there. It was definitely not generated. It was done the old fashioned way. Yeah. And, you know, a lot of people loved it and a lot of people hated it.

Speaker 3:

And then many people in the middle didn't get it, and I felt okay about that.

Speaker 2:

It's a great encapsulation.

Speaker 3:

I like our ad for Sunday. Okay. It's about Codex. No surprise. But

Speaker 2:

Yeah. Talk about the evolution of the advertising to be more just clear about the actual use case, the value. Like, what are you trying to say with your advertising strategy now as it relates to, like, video?

Speaker 3:

Well, the thing I would most like us to say, and I think this is a new challenge given where the models are, is to teach people what they can go do with AI. Yeah. I mean, AI is now unbelievably capable, and most of the world is still, like, asking it basic questions on ChatGPT. Mhmm. Everyone can go build amazing things now.

Speaker 3:

Everyone can go do all kinds of work. Scientists are gonna make new discoveries. And to the degree that the advertising we do can teach people how to use this, I think that'd be awesome.

Speaker 2:

Yes. So the KPI is, like, reduce the capability overhang broadly.

Speaker 3:

I think that should be a general KPI for us, not just of our ads. Yeah. The products that we build, how we teach people to use those products, like, that feels very important.

Speaker 2:

Yeah. Anthropic also has a bunch of ads in the Super Bowl. Seems like they run a ton.

Speaker 1:

Damn heard.

Speaker 2:

What do you think they're

Speaker 3:

You laugh.

Speaker 2:

Getting wrong about their characterization of how ads will roll out in chat apps?

Speaker 3:

Well, it's just wrong. Like, the main thing is, we are not stupid. We respect our users. We understand that if we did something like what those ads depict, people would rightfully stop using the product. Our first principle with ads is that we're not gonna put stuff into the LLM stream.

Speaker 3:

That would feel crazy, dystopic, like a bad sci-fi movie. So the main thing that's wrong with the ads is, like, using a deceptive ad to criticize deceptive ads feels I don't know. Something doesn't sit right with me about that.

Speaker 1:

I asked Claude what the definition of playing dirty is, and what did it say? Misleading others about your intentions, hiding information, or creating false impressions.

Speaker 3:

Yeah.

Speaker 1:

I thought it was a little dirty. I thought it was well played, but

Speaker 3:

It was well played for sure, and it was a funny ad. And, you know, the stuff about the ChatGPT personality that most annoys me, which we'll fix very soon, I thought they nailed in the ad. So that part was funny.

Speaker 2:

Yeah.

Speaker 3:

But I don't know. You know? Like, I also think it's great for them not to do ads. We have a different shaped business. I did notice that they said in their thing, like, we may later revise this decision, and we'll explain why.

Speaker 3:

So Yeah.

Speaker 1:

We're pro ad. The blog post kind of did a good job of disarming the pro-ad people Mhmm. Gave themselves an out Mhmm. In the future. Mhmm.

Speaker 1:

Do you think they care?

Speaker 3:

I think it doesn't matter. Like, I think it's a sideshow. You know, people are excited for a food fight between companies. But, like, the amazing capabilities of these models, the product, the kind of, like, the groundswell of excitement around Codex Mhmm. That feels way more important to me.

Speaker 2:

How do you stop the pausing that happens in voice mode? Do you need new hardware for that, or is it a model capability thing?

Speaker 3:

We need a new model. We may need some new hardware too, but mostly we just need a new model. I think we will have a great voice mode

Speaker 2:

Yeah.

Speaker 3:

By the end of this year.

Speaker 2:

What's the bigger bottleneck, energy or chips?

Speaker 3:

It goes back and forth. Right now, again, it's chips.

Speaker 2:

Chips. Is there anything

Speaker 3:

But, you know, it'd be different at different times.

Speaker 2:

Is there anything we, like society in America, should be doing more aggressively to increase the supply of fabs? Or fab capacity?

Speaker 3:

Yeah. Well, it may solve itself on its own. Like, normal capitalism may solve it, but I think somehow deciding as a society that we are going to increase the wafer capacity of the world, and we're going to fund that, and we're going to get the whole supply chain and the talented people we need to make that happen, would be a very good thing to do.

Speaker 2:

Do you think there's an upper bound on model IQ? Like, the race right now is you're smart, but you're not smart for days. You're smart for hours. Can you go much further and get much smarter?

Speaker 3:

It seems certain. Seems certain. Upper bound? I don't know. I don't know how to think about that question yet.

Speaker 2:

I can't even yeah. I can't even reason about what 2,000 IQ looks like, you know? Like, I don't even know what that means.

Speaker 3:

It's funny you say that. I mean, I can't reason about what it means to think about a problem for, like, ten thousand human years.

Speaker 2:

That's another good one. Yeah.

Speaker 3:

Yeah. That's crazy. But maybe IQ is gonna feel even weirder. Yeah. I don't know.

Speaker 3:

I somehow feel like this isn't going to feel as strange as it sounds. Like, for a bunch of reasons, we're so focused on other people. We're so focused on our own lives. We have such a human-centric nature that, like, okay.

Speaker 3:

This thing is really smart. It's inventing new science for us. It's running companies for us. It's doing all this stuff. And that sounds like it should be impossibly weird, and I think it'll just be very weird.

Speaker 1:

Do you think space data centers will provide a meaningful amount of compute for OpenAI in the next two to three years? No. Five years? No. Ten years?

Speaker 2:

You just keep going. Ten thousand years.

Speaker 3:

I wish Ethan luck.

Speaker 2:

Okay. The funny thing about the whole, like, back and forth about ads is that, in a sense, the criticism is that you didn't launch ads early enough. Is there a world where you wish you launched earlier? How is the actual rollout going? Are advertisers happy?

Speaker 2:

Do you have, like, a really long road map? Or do you think you'll be faster at catching up to sort of what's frontier in ads?

Speaker 3:

We haven't started the test yet. We'll start the test soon. Yeah. But, you know, it's gonna take us some number of iterations to figure out the right ad unit, the right kind of the right way this all works. Do I wish we had started earlier?

Speaker 3:

We have gone from, like, not a company, you know, three years and three months ago or something like that. We were like a research lab. Mhmm. And now we are like a pretty big company with a lot of products. So there's many things I wish we had done faster.

Speaker 3:

I think we were correct on the trade-off here of how we balance things that we need to do. Mhmm. You know, we launched this very cool enterprise platform this morning. Yeah. I wish we had done that earlier too, but we had to deal with the monstrous growth of ChatGPT and Codex and all sorts of stuff.

Speaker 2:

Good problems to have. Last question for me. What happened to that internal writing model that you used to write the essay? That feels like something that was really cool, but we never really saw the light of day.

Speaker 3:

We're gonna get a lot of that spirit into a future model. Again, it's like, there's so much stuff happening. Yeah. We have to make these hard prioritization decisions. Sure.

Speaker 3:

I would love a cool writing model. Not as much as I would love a cool coding model. Makes sense. And what is possible now for coding, for science, that's the thing I'm most excited about

Speaker 2:

Mhmm.

Speaker 3:

For accelerating all kinds of research, AI and otherwise, for really accelerating the economy. I think that's, like, the right thing for us to most prioritize in terms of new capabilities. But yeah. Well, you want a model that can write beautifully because, well, a model can write beautifully only if it can also think very clearly and express that very clearly. That's just useful in normal work.

Speaker 2:

Yeah. That makes a ton of sense.

Speaker 1:

Last question for me. How have conversations been with the broad OpenAI leadership team? You guys are in a position where any single word or sentence you say in any situation can be spun into a headline immediately, and then you have to go on damage control, kind of correcting the narrative. But, of course, the original message, or at least the original news, is often seen more broadly than the correction. And it seems like an interesting challenge.

Speaker 3:

It is a strange way to live. And I don't know of any private company that has ever, like, been so in the news and so under a microscope. And, you know, at some level, it's frustrating. We're so squarely in the sights of everybody's anxieties, and every competitor trying to take us down, and everybody wondering what is gonna happen with AI to their part of the business or their own lives, that there's, like, a lot of plasma looking for an instability to collapse on. In some other sense, though, the subjective experience of it is we are so busy on so much exciting stuff that it often feels like there's this crazy hurricane turning around us. And where we sit, it's, like, fairly calm.

Speaker 3:

You know, the media or Twitter goes insane about something one day. They're talking about a crazy meltdown. We're like, that is insane. Like, okay. And people talk about it all day and then later find out it's wrong, and it sort of seemed like a lot of wasted energy. But we're just like, we have this great new model coming.

Speaker 3:

People are building incredible stuff. Companies are transforming. We're trying to, like, figure out how to get more compute and deal with this compute crunch. And we just kind of, like, keep going, and we're busy. And then if we, like, open Twitter, pop up our heads, and look at the news, it's like, wow.

Speaker 3:

That is an insane crazy thing happening. Completely divorced from reality or 99% divorced from reality. And, like, okay, someone will correct it, but then we get back to work and people flip out again.

Speaker 2:

Right.

Speaker 3:

It is weird to watch when we look outside, but it is less chaotic internally than I think you would imagine from reading the media reports.

Speaker 2:

Yep. Makes sense. Well, thank you so much for taking the time to come chat with us. Congrats on the launch.

Speaker 1:

I'm excited to see the Codex ad.

Speaker 2:

Me too.

Speaker 3:

Please try it. The app and 5.3 have been, like, I think, the coolest thing we've done in a while.

Speaker 2:

Yes. With one prompt, I rebuilt the tbpn.com homepage to look exactly like Berkshire Hathaway, and it was just immediate. It was very fun.

Speaker 3:

Interesting choice.

Speaker 2:

Plain text. It was very easy. We immediately one-shotted it. Did not really push it to its limits, but I'm having fun. So thank you so much for coming on the show.

Speaker 2:

We'll talk to you soon.

Speaker 3:

Thank you.

Speaker 1:

Great to catch up. Goodbye. Cheers.