TBPN

A rare live interview with OpenAI CEO Sam Altman and Sora head Bill Peebles. John and Jordi talk through the first 10 days of the Sora app, how AI will change advertising, and what Sora means for Hollywood.

TBPN.com is made possible by: 
Ramp - https://ramp.com
Figma - https://figma.com
Vanta - https://vanta.com
Linear - https://linear.app
Eight Sleep - https://eightsleep.com/tbpn
Wander - https://wander.com/tbpn
Public - https://public.com
AdQuick - https://adquick.com
Bezel - https://getbezel.com 
Numeral - https://www.numeralhq.com
Polymarket - https://polymarket.com
Attio - https://attio.com/tbpn
Fin - https://fin.ai/tbpn
Graphite - https://graphite.dev
Restream - https://restream.io
Profound - https://tryprofound.com
Julius AI - https://julius.ai
turbopuffer - https://turbopuffer.com
fal - https://fal.ai
Privy - https://www.privy.io
Cognition - https://cognition.ai
Gemini - https://gemini.google.com

Follow TBPN: 
https://TBPN.com
https://x.com/tbpn
https://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231
https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235
https://www.youtube.com/@TBPNLive

What is TBPN?

Technology's daily show (formerly the Technology Brothers Podcast). Streaming live on X and YouTube from 11 AM - 2 PM PST, Monday - Friday. Available on X, Apple, Spotify, and YouTube.

Speaker 1:

And we are joined by Sam Altman and Bill Peebles. Sam, Bill, how are you doing?

Speaker 2:

What's going on? Hey, guys. Hey, guys.

Speaker 1:

Great to see you. Congrats on all the progress. I've been enjoying Sora a ton. Personally, I've been enjoying making them. I had a ton of fun making the collab post yesterday, and I was wondering about prompting your

Speaker 3:

your cameo feature. John made it so that he always appears as a bodybuilder if anybody is cameoing him. So you guys got to experience

Speaker 1:

led to some chaotic results. Do you have favorite Sora posts that you've been coming back to, or that have, you know, stuck out to you as particularly, you know, creative uses?

Speaker 4:

I mean, definitely all of the ones of, like, me stealing GPUs or doing other crazy things to get GPUs have been funny. In the last few days, there, at least in my feed, have been these, like, very beautiful sort of fantastical scenes that are just not things that could have ever existed without something like Sora, or wouldn't have been easy to make. And watching people build those, and watching the trends flow through, that has been pretty awesome.

Speaker 1:

What about you, Bill? Any favorite uses of Sora so far?

Speaker 2:

Oh, man. Mark Cuban came on the platform a few days ago, and there have been some hilarious Shark Tank memes. Those are probably my favorite. People pitching some Sora features to

Speaker 1:

Mark. Yeah.

Speaker 3:

Also, leveraging the prompting function to always include an ad for Cost Plus Drugs, I thought, was especially hilarious considering he's been one of the most vocal opponents of advertising in AI. He is leveraging the feature to the max. Yeah.

Speaker 4:

Yeah. I think there are gonna be all of these weird new dynamics that we see emerge that just weren't possible in previous kinds of video. This is, like, a fun period because it's all gonna be so different every few days. Man, I'm watching this Polymarket ticker go by, and it's so tempting to, like, say things to

Speaker 1:

us all.

Speaker 3:

Yeah. To be clear, don't worry about the ticker. I don't think any of those markets are being featured in the ticker.

Speaker 3:

But

Speaker 1:

Yes.

Speaker 3:

Yeah. Again, this is the new world we're in.

Speaker 1:

Yeah. You can move the market live on TBPN. But today, very odd.

Speaker 3:

People, I'm sure, will be happy or disappointed. We're here to talk about Sora,

Speaker 1:

of course. Yes.

Speaker 3:

So none of the other topics.

Speaker 1:

But I mean, I also wanna know about ads. Why no ads in Sora on day one? I feel like you've laid out a really great, you know, mental model for how you think about ads on Stratechery, on the Andreessen Horowitz podcast. I'm bought in. Is it a technical thing?

Speaker 1:

Do you need scale? Do you need, to think about it more? Why no ads on day one?

Speaker 4:

This is, like, a ten day old product. Right? Like, it's hard to get anything to work at all. And we, like, we don't assume success. We gotta, like, go earn success, and then we can think about monetization for it.

Speaker 4:

But this is like it's gone great so far. It's still very early, and there's still a lot of work to build something that a lot of people are gonna love first.

Speaker 1:

What about surprising capabilities of the model? You mentioned that you've seen some fantastical scenes. I'm interested to know about specific breakthroughs that you've noticed, things the Sora 2 model is particularly good at. I noticed one about reflections being great. Obviously, people love the cameos.

Speaker 1:

But what what has surprised you in terms of just, like, technically, the model can do something now that it couldn't do before?

Speaker 2:

This model is a huge leap forward in terms of Physics IQ. So pretty much all past video generation models really struggled with prompts that, you know, involve, like, backflips, gymnastics routines, etcetera. Yeah. That's right. And this is really the only model that exists today which can reliably handle these kinds of really complicated dynamics.

Speaker 2:

One of the big features that people have really loved on the app so far is the steerability of the model. So, you know, if you give it, like, a really simple text prompt that's maybe even only a few words, this model is really good about kind of telling a coherent story with a beginning, middle, and end, and doing this, like, automatically in a way that doesn't require, like, a lot of direct steering from the user. If you wanna, like, go into, you know, a ton of detail about exactly how your prompt should be laid out and how the story should unfold, it supports that too, so it can kinda meet you wherever you're at in the creative process. But really, this model is just, like, so hyper steerable. And its, like, vastly higher physics IQ just makes it able to do things that were, like, not possible a few months ago.

Speaker 1:

Is that all within the model or is there some sort of, like, reasoning step where you're, hydrating or unpacking my prompt and writing a bigger prompt or breaking down the problem in some way? Can you share anything about that?

Speaker 2:

Yeah. It's a good question. So, you know, the intelligence for these text conditional video models kinda lies both in the core model itself, like Sora, and some amount of it also comes in through the text prompt. So, you know, however the user decides to kick start a prompt, you can have, like, a language model under the hood add some details in. But, for example, you know, when it comes to things like, again, like, doing these backflips or any kind of physical interactions, how refraction is modeled, you know, when you're pouring water into a glass, all of these details have to be captured by the core video model itself.

Speaker 2:

So that's intelligence which is really innate to Sora. And, certainly, you can supplement it with intelligence from a language model as well, but it's not necessarily a prerequisite to get kind of amazing results out of these things.

Speaker 1:

Are there any areas on the physics where you think that the model falls down and you wanna improve? I mean, we went through the era of, like, six fingers. It seems like reflections in water are solved, but someone was saying something about doors being hard. I haven't noticed that one personally, but a lot of the stuff's great.

Speaker 1:

But what what have you noticed is, like, the next version's gonna be even better at?

Speaker 4:

This is still very early. Yeah. A thing Bill said that I appreciated is this is, like, the GPT 3.5 moment for video.

Speaker 1:

I agree.

Speaker 4:

And if you went back to use the actual GPT 3.5, you'd be like, okay, signs of great promise, it can do the occasional impressive thing, but it was really not until GPT four where these text models started providing real value for people. And we know how to go make the GPT four equivalent of video models, and we will do that. And then we'll fix a lot of these things that are currently annoying, like doors, or, you know, once in a while, something goes through something else it's not supposed to. In the same way that the world, you know, loved to complain for a brief period of time about where 3.5 fell down, and, oh, it's never gonna be useful.

Speaker 4:

It's never gonna do this. It's never gonna do that. And then we were able to just keep making it better and better and better and better. The model's physics IQ is certainly the best I've ever seen, but it is nowhere near as good as it will be in the future versions. And I hope we'll see a similar thing to what happened with the GPT text models, which is people will always demand more and better, and they will always find new and better things to use it for, and the world will just make ever more amazing videos.

Speaker 3:

And how quickly, you know

Speaker 1:

Oh, sorry.

Speaker 2:

We're so early on the test for video here.

Speaker 1:

Yeah.

Speaker 2:

Like, GPT one really was Sora one for this modality. And the progress we've made kinda in the last eighteen months to get to this 3.5 moment, right, is really compressed compared to how long it took to go from GPT one to 3.5 in the language domain. So we're really expecting progress to continue to be meteoric here in the near future.

Speaker 3:

How quickly do you expect the Cameo feature to be cloned? That feels like a

Speaker 4:

It's a good point.

Speaker 3:

Equally important part of the, you know, the model's made a leap, but the product is in the experience, and the experience of creating these assets is wildly innovative. We saw stories get cloned. We saw, you know, algo video short form feeds get cloned. I expect many other platforms to be looking at this functionality and realizing that this might be the future. You guys certainly believe that it could be important.

Speaker 3:

So how quickly do you think?

Speaker 4:

We're actually totally okay with a world where we do the product innovation and everybody else copies, and I don't think it works for them as well as they think it does. Like, you know, a lot of people have tried copying ChatGPT, and you can go look at some of our competitors' apps, and they even copy the mistakes. They even copy the design decisions we really wish we hadn't made. And maybe it's worked well for them.

Speaker 4:

I guess I kinda hope it has, but it's been fine for us. Yeah. I think, like, the key to this is not any one innovation, but it's repeatedly putting them out again and again, and being first to come up with them and put them into a cohesive offering. And, you know, that's what we wanna be good at. And if other people wanna clone the stuff that works, we also sometimes clone stuff that works.

Speaker 4:

That's fine. But mostly, we wanna be able to drive the innovation. And I think Bill and his team have done an incredible job of figuring out how people actually wanna use these video models, what the models need to do. Really, they've approached this as a full stack problem, from how do you train the video model to how do you make this enjoyable for users. But cameos are one out of many ideas they have on their journey to, like, the product that we hope to eventually build.

Speaker 4:

And so if people take some inspiration from us and copy us along the way, I'm sure they will. It's fine.

Speaker 1:

How do you think about the, like, popular claim that we want AI detection, I want AI content flagged? Is that a stated preference that's not a revealed preference? Because, personally, I don't want bad AI content, but I don't want bad human made content either. I want both to be great, and I'm fine when someone comes up with something genius and they instantiate it with a video model.

Speaker 1:

How do you think about it?

Speaker 4:

I think that is the real thing: you don't want slop. You want great content. Yep. For different people, one man's slop is another man's treasure, for sure. Yes.

Speaker 4:

But what you care about is, like, good, original, thoughtful, new, helpful, whatever content. And whether that is generated entirely by a human or entirely by AI or what I expect will mostly happen in the future, which is tool assisted human driven generation, I don't think you care that much if the content is great. There's a lot of, like, you know, stuff that is technically written or drawn or filmed by a human, but is completely derivative and much less original than what an AI has generated. And I think that will be what people really care about long term. You just want great content.

Speaker 4:

Now I also do want some human connection with it. When I read a great book, first thing I wanna do is read about the author that wrote it and what life experience went into that. I don't think that'll go away. But if they're using an AI as a tool to help them make the writing better, sign me up. That sounds great.

Speaker 4:

Similarly, I would rather watch a video about someone I know than some random AI generated character, which is part of why I think this was cool to offer. One design decision the team made that I thought was really great, and I was actually pushing them in a different direction earlier on, and then I decided they were totally right, and I thanked them and dropped it, was that the feed is AI only and not a mix of AI plus some uploaded videos. I think that's a subtle but extremely important design decision in how people are relating to this.

Speaker 1:

Yeah. It was a very weird experience for me. I was I was thinking about the collab post that I was making, announcing this interview, and my initial thing was like, well, I'm gonna have to think of a script, or I'm gonna have to think of, you know, what I say, or I should record a piece of this, and then I'll use it. And it was like, no. I just took the prompt, and then I get the front facing video.

Speaker 1:

It's remarkable.

Speaker 3:

What kind of indicators are you guys looking at as Sora transitions from what it was the second it launched, which was a creative tool, into something that's more of a consumption platform, a traditional, you know, social media platform? Like, talk about kind of what you guys are pushing toward, because obviously you're seeding the network with the tool, but it's certainly much harder to turn it into something that people are spending hours a day in, purely consuming content and not creating content.

Speaker 2:

You know, we really wanted to design this from the ground up to be centered around creation. And a lot of the metrics that we've been focused on optimizing here are really aligned with making sure as many people as possible are actually, like, getting their hands on the Sora 2 model itself and, you know, able to create content with their friends and, like, for the rest of the world. One metric that we're really proud of with this launch so far is that 70% of our users are actually creating content even to this day, you know, a week and a half after launch. And that's, like, vastly higher than on any other social media platform. And I think it really speaks to just how fun creation can be with the right tool set.

Speaker 2:

Right? If you look at any of these kind of legacy platforms, there's just, like, so much friction from, like, getting off the feed and, like, into some creative flow state. Right? You have to, like, put the phone down. You have to go get, like, a camcorder, start recording yourself, find your friends, like do a dance, etcetera.

Speaker 2:

It's just, like, a lot of work. Right? On Sora, like, you can just pick up your phone, find, like, any video you like in the feed, remix it, you know, Cameo any of your friends. And I think one insight that was not obvious to us at first, but we've kind of clearly seen as an emergent behavior of this product, is, like, there's all these people out there who would not necessarily want to be, like, you know, influencers or something, or have a big social media presence. But the fact that, like, all of their friends can just access their Cameo, right, put them in all of these crazy situations, actually, like, kinda gets them onto the playing field in a way that felt really high friction before.

Speaker 2:

And so, you know, we're closing in on, like, 2,000,000 weekly active users now. We're really excited that such a huge percentage of that user base to this day is, like, still creating with Sora, and we're gonna continue pushing in that direction and making sure people have even more powerful tools in the future.

Speaker 1:

Yeah. So 70% of Sora users are creating content. The typical benchmark that people kind of quote randomly is, like, 1% creation, 99% consumption, something like that. And that certainly feels like my experience on Instagram. I post a photo every once in a while, but most of the time, I'm just kind of scrolling.

Speaker 1:

And I'm wondering if you think that that 1% will be much higher on Sora in terms of actual time in the app, time prompting versus time scrolling. If you have any data, that'd be super interesting. But then, also, does that make it more of, a competitor to video games than traditional social media because it's such a lean forward experience versus just lay back? What do you think?

Speaker 2:

Yeah. It's a great question. We still need to study this more, exactly how creation versus consumption habits kind of change over time for folks on the platform. It's still pretty early days. I do agree with your point, though, that I think over time this is going to feel much more immersive, in a way that, like, video games kind of do.

Speaker 2:

Like, you have more agency when you're actually using the platform, you know, not just kinda, like, mindlessly scrolling a feed, like, hours a day. And, like, one interpretation of this product, which I think is kind of interesting, especially from the research perspective, right, is that Cameos are, in some way, like, the simplest way you can kind of, like, inject yourself into the model. Right? So it's a very low bandwidth communication channel right now. You know?

Speaker 2:

You're only giving, like, a few seconds of video footage of, like, any given individual, like, into the app. But, like, over time, right, you can imagine, like, these models know more and more about your life. They really, like, deeply understand your friends, how you wanna, like, show up in the world. And, like, over time, this can almost become, like, a little mini, like, alternate reality. Right?

Speaker 2:

So, like, you're not just generating, like, videos of yourself with your friends. Like, you actually just have, like, digital copies of yourself running in the model on the Sora platform interacting with other people with agency. And so I think over time, we're really gonna see this platform evolve into, you know, something that feels kind of familiar today and just something that really leans into, like, the full intelligence of Sora two in the future and, like, really leverages all of the world simulation capabilities that we're working on internally.

Speaker 4:

Yeah. I would add to that that if you think of this, like, spectrum of the kind of entertainment you can have in front of a computer, at one end you have, like, watch a two and a half hour movie, and you hit play, and then you lean back and you don't do anything at all. And then at the other end, you have, like, a very intense video game, and you're, like, you know, sweating and your heart's racing and it's, like, super, super active. AI is gonna push things to be more in between there. So maybe you're still watching that movie, but now you can, like, say something a few times throughout the course of it, and it changes what happens as the movie plays out.

Speaker 4:

Or with Sora, you're seeing this amazing new phenomenon where most users are creating, in a world where traditionally only 1% of them did. And so, yes, you're, like, watching a video feed, but you're doing a little bit more. And it, at least for me, really changes how fun the whole thing is and how I feel about it. Then maybe you'll do what Bill said, and you'll be way more actively participating in the Sora feed. And I think you're just gonna see that continuum blur a lot more.

Speaker 1:

Did you see Bandersnatch, by any chance, Sam? Have you seen this on Netflix? It's like a Netflix choose your own adventure. And it was a really cool idea, but ultimately it never really took off and became something people do again and again and again. And I'm wondering if it was because it was, like, not customizable enough, or people just want to sit back and see a director's vision.

Speaker 1:

I don't know.

Speaker 4:

I never heard of that, but it sounds cool.

Speaker 3:

Yeah. Question for Sam. How do you think about allocating compute to Sora versus the rest of the business? I imagine Bill is constantly in your ear.

Speaker 4:

Always.

Speaker 3:

Every other hour. But how are you thinking about it?

Speaker 4:

You know, my real answer is I've entirely changed my focus of how I spend my days to just go get more compute, rather than have to make the compute allocation decisions. Yeah. I still do have to make some compute allocation decisions, but I hope we are heading to a world where I am instead telling people you gotta find a way to use more compute, and we're gonna be very aggressive here.

Speaker 1:

It feels like you're doing a great job of, like, bringing things within your control within the supply chain. What is outside of your control at this point?

Speaker 4:

I mean, most of it.

Speaker 1:

But I feel like you have great partners all up and down the stack, multiple partners in different parts of the chain. Like, when I think about scaling up Sora, I feel like it's crazy to bet against you. Like, you're gonna get the chips. You're not gonna be GPU poor.

Speaker 4:

Try to buy, like, 10 gigawatts for delivery next year. It's not so easy.

Speaker 1:

It's funny.

Speaker 3:

How are the conversations going with Hollywood?

Speaker 1:

Oh, yeah.

Speaker 4:

Oh. Actually Yeah. You take it.

Speaker 2:

Yeah. I was gonna say we've been chatting actually with a few, you know, very notable folks in Hollywood over the last week. You know, I think people's first reaction to this is, like, very understandably going to involve a lot of trepidation and, like, anxiety. When we've gotten to just sit in a room with these folks, though, you know, and really explain what we're building, I've actually been pretty struck by, like, how excited folks in Hollywood are about this. You know, we were chatting with one actor recently who mentioned that, you know, on Twitter, like, a year ago, she saw, like, a deepfake of her generated with one of these, like, open source models, which really had, like, a lot of nasty content

Speaker 1:

Oh, yeah.

Speaker 2:

Created. And we walked her through kind of all of our safety mitigations, right, how we're making sure that we have this, like, very well defined model spec, which dictates the behavior that we allow on this platform, and how we are really leaning into, like, full control of likeness, right, more so than any other platform. Like, you have to come in through the Cameo process. You can't just, like, upload an image of, like, any person and just, like, generate a video of them. You have to come in through Cameo.

Speaker 2:

I think it became clear that, you know, we're really setting the right standard here in terms of making sure people are in full control of their likeness. In Hollywood, I think that's where, like, a lot of this anxiety comes from. Right? It's this feeling that, you know, some random person can just kind of take videos or images of you and do whatever they want with them, and create all of this, like, terrible content that's, like, outside of your purview. But we've really been, like, designing Sora from the ground up to put users in full control of their likeness end to end.

Speaker 2:

From the moment you sign into the app to, you know, needing Cameo permissions to, like, access any of your friends' generations. So, you know, I think we need to engage more with Hollywood, and we're gonna continue to do that. But once we really explain the story of Sora, you know, they're very receptive to it.

Speaker 4:

There's something to that. Like, you know, the team asked me before launch if they could put my Cameo in open access, and I, of course, thought about it for a second and said, absolutely yes. I had all these Hollywood celebrities then messaging me on the first day being like, you're absolutely crazy. This is insane. This is, like, the dumbest thing I've ever seen.

Speaker 4:

And then by about the third day, they were like, that was really smart. You got, like, a lot of, you know, free publicity. Maybe we need to be doing that. And I think you're now seeing actual celebrities say, okay. I'm gonna do this, and I expect a lot more of them will.

Speaker 4:

Similar thing on other kinds of characters and IP. I can totally imagine a world where our problem in a year, or six months, or maybe even less, is not that people don't want their cameos or their characters appearing, but that they think we are not fairly featuring their characters or cameos often enough.

Speaker 1:

Yeah.

Speaker 4:

This may turn out to be a really big thing for fan connection. Now, it may be that kind of the previous generation of celebrities don't wanna do this and the influencer celebrities all do. I don't know how that's gonna go, but but I bet this will be, like, a pretty deep kind of new connection.

Speaker 1:

Yeah. It seems like it's been good for DiCaprio in the memes. Like, he's not directly monetizing those when you show the champagne meme or him pointing at the TV, but, like, you know, it builds his aura in some way.

Speaker 3:

A friend of ours posted something yesterday. This is Jeremy Gaffani. He said, the reason we're so upset about slop is because it's obvious we're all going to love consuming it in two to three years. It's not gonna be slop for long. Do you agree, Sam?

Speaker 4:

I mean, some of it will be slop to some people, and some of it won't. I remember, like, there was a real reaction like this in the early GPT days where people were like, I can't believe anyone reads this. It's, like, total crap. It's full of hallucinations. You know?

Speaker 4:

It's, like, it's not useful to anyone. And then it became more useful to some people, but they said, I can't believe anybody, like, ever thinks this thing writes a beautiful sentence. That's insane. And then with GPT five, you have authors saying, like, wow. This is a useful tool.

Speaker 4:

It sometimes, like, writes a beautiful sentence.

Speaker 1:

Yeah.

Speaker 4:

And I kinda think it'll follow a similar trajectory.

Speaker 1:

What do you think about the fact that people feel, and I don't know if they actually can, but it feels like you can still clock GPT five writing, you know, it's not this, it's that, the em dash. Like, will we still see these artifacts in three years, in Sora five, that people are like, oh, if you know, you know, you can tell, but most people can't?

Speaker 3:

Yeah. It's like what's the em dash of of

Speaker 1:

A video.

Speaker 3:

Video? Because I don't think it's, like, six fingers.

Speaker 1:

No. No. Definitely not. That's the typo, which doesn't happen anymore.

Speaker 2:

Yeah. I think right now the em dash is, like, this slightly weird speech pattern and so on, where it likes to say a lot of words very quickly. You know, these generations definitely have, like, a style to them. Mhmm. I think, analogously to GPT, we really wanna give users a lot of control over exactly how their videos show up, right, on the platform.

Speaker 2:

Like, if you really want kind of, like, a very soothing experience, right, not a lot of shot changes going on, we wanna give users the ability to generate that. We're gonna continue to give more optionality to people. So, you know, there'll be some default kind of behaviors and quirks of Sora for sure, but we definitely want our power users to be able to be in full control.

Speaker 1:

Random question. Where did the name Sora come from?

Speaker 2:

Yeah. This is a fun one. So the original Sora came out in February 2024, the OG blog post.

Speaker 4:

Yeah.

Speaker 2:

We did not have a name for it, I think, like, up to two days before we, like, revealed the model to the world. We just could not agree on the team what it should be.

Speaker 1:

Did you at least have a code word or something? Like, how did

Speaker 2:

We just called it, like, video gen.

Speaker 1:

Okay.

Speaker 2:

And so at some ungodly hour, I, like, just started pumping a bunch of crazy ideas into ChatGPT. And then, like, we basically ran out of, like,

Speaker 1:

English words.

Speaker 2:

Then we switched to, like, Japanese words. Wow. And then Sora came out. I was like, wow. That sounds really nice.

Speaker 2:

It means sky, you know, linked with, like, imagination, like, all the possibilities of creation. And so then we just, like, last minute shipped Sora.

Speaker 1:

So Yeah.

Speaker 2:

Yeah. It's kind of a mad dash.

Speaker 1:

Okay. Speaking of Japanese stuff, Sam, you said you were looking for an Acura NSX a while back. It's kind of this throwback car. It's not a Waymo. What do you think the piece of content or format will be that remains loved in an age where everyone's taking the Waymo of video, the Sora video generation? What do you think is, like

Speaker 4:

Well, first of all, I got that NSX, and it lived up to all of the childhood hype. I mean, incredible. Amazing. That car is so fantastic. That's great.

Speaker 4:

And I don't know. I kind of think there's gonna be a lot of stuff like that for people, generated or not, where you still want the real thing. You want the thing that you have the kind of childhood connection to. You know, a kid today is not gonna want the NSX, but whatever the cool car like that is for them, they will want. And at some point, even though they can have, like, a crazy VR experience, they'll still want the real thing and the connection to it and everything they have.

Speaker 4:

So I I think there will be a huge amount of that. In fact, I think the future looks like much more of that kind of stuff, not much less.

Speaker 3:

How quickly do you wanna create an economy on Sora? It feels like there would be a number of ways that you could create incentives for creators to create things for IP holders, or for individuals to just be passively monetizing their likeness. Bill,

Speaker 4:

what do you think for timing on that?

Speaker 2:

I mean, this is like a top priority for the team. You know? There's clearly such an incredible value proposition for celebrities, for rights holders across the board here. We think Cameo is, like, a great entry point for this. Right?

Speaker 2:

You can imagine, right now we have Cameos for people. Maybe you have Cameos for, like, you know, your character, or, like, your brand or something. And so the team is actively working right now on coming up with, like, the right monetization model here to get this rolled out. But it's really important to us, right, that our creators on the platform are rewarded and that there are clear, you know, financial incentives for, like, the incredible work that they're already doing. Yeah.

Speaker 2:

So this is, like, top of mind for us, and we'll have updates here over the coming weeks. This is, like, something we're actively working on.

Speaker 4:

I think it's super important and awesome. I will say, I'd like to know how many hours of sleep Bill has averaged for the last few weeks, but I bet it's not enough. So the team's got a lot of stuff they have to do in a short period of time, and it's gonna take a little while.

Speaker 1:

Okay. Let me put one more thing on your plate, Sam. I mean, years ago, you built Loopt, a location-based product. Have you thought about how AI and location-based content fit together? Like, on most of these social apps, you can tag a location.

Speaker 1:

That wouldn't even make sense in the current Sora app, but what does the AI maps product look like?

Speaker 4:

I haven't thought about AI and location that much, but I've thought about, like, how AI can really change the social experience for people. Mhmm. We don't have, like, a for sure answer yet, but we have, like, a lot of interesting threads to pull on. And I have thought back to, like, my days running that startup more. My instinct is it is possible to make a very interesting new kind of social experience, connecting you to people, helping you find people, that is intermediated by AI in an interesting way.

Speaker 4:

But, you know, we'd have a lot of exploration to do there.

Speaker 1:

What advice are you giving to startup founders these days? I remember in the GPT-3.5, GPT-4 days, it was like, don't build a company that assumes model stagnation. How do you think about it in the age of Sora

Speaker 3:

That's been really great advice.

Speaker 1:

It really has. It played out exactly like that. There's a bunch of great companies that aren't built that way, and they've done great. But if you were just, oh, I have a special prompt that tunes up GPT-4, yeah, bad times. But how are you thinking about it now in the context of video and Sora specifically?

Speaker 1:

You obviously do have an API. You have Dev Day. There's people that will build on top of this. Is it a different shape of the problem?

Speaker 4:

Totally. The reaction to the API has been nuts positive. Like, it's at least the fastest-ramping revenue I've ever seen for one of our new models in the API. I mean, maybe there was something faster that I'm not remembering,

Speaker 1:

but Congratulations.

Speaker 4:

The demand there has been just incredible, and people are doing awesome stuff with it. Bill and I have not had a chance for a one-on-one since launch because it's been so crazy. We're doing one later today. But one of the things I was gonna suggest to him was that, given how much excitement there is to build on this stuff, we do something we don't usually do and put out our intended road map of the things we're gonna prioritize.

Speaker 4:

Because I can imagine really cool new startups that simply were not possible before, that will be possible with each of these new things we'll ship.

Speaker 3:

So I had a question when you guys released Sora 2 via the API, which was, if Sora has the potential to be an Instagram- or YouTube-scale business, why release part of your edge to the entire world, so that they can integrate it into other creative tools and then use Yeah. The model to generate content that doesn't have a watermark, that's not in your feed, that you're not able to get the feedback loop on that you do with the Sora app.

Speaker 4:

For ChatGPT, we also put out a great model in the API. Yeah. And people can theoretically compete with us on ChatGPT, and some try to. But we're never gonna build every cool use of the technology, and we want the world to get all that stuff. We're delighted to also get paid on people using our API, but, like, we just want AI to flourish out in the world.

Speaker 4:

We're not gonna build every great use of what you can do with video models either. We'll build one, and I think it's pretty awesome. But people have a lot of other ideas for businesses and products to go build, and we'd like to enable those.

Speaker 1:

Okay. Last question. Back to cars. What's wrong with the Porsche 911?

Speaker 3:

Yeah. You said earlier the timeline was in turmoil. Somebody asked if you'd ever buy a 911, and you said no. You agreed with PG. What did you mean by that?

Speaker 4:

I mean, maybe it was, like, a tasteless joke. It was kind of late at night. I was, you know, whatever. But I have an unfortunate proclivity for expensive cars. Yes.

Speaker 4:

And the response was, like, would you ever spend 250k on a car? And I took that literally.

Speaker 1:

That's amazing.

Speaker 3:

Just a size gong for taking it literally. No time for 250k cars.

Speaker 4:

Probably that was not my best tweet. You know

Speaker 1:

We enjoyed it. We enjoy it now.

Speaker 4:

We all enjoyed it.

Speaker 1:

I enjoy it now that I have the context. Congratulations on all the progress to both of you. Thank you so much for taking the time to stop by the show. Really appreciate the update.

Speaker 1:

And very excited to see where this goes. Thank you so much.

Speaker 4:

Thank you.

Speaker 1:

We'll talk to you soon. Bye.