TBPN is a live tech talk show hosted by John Coogan and Jordi Hays, streaming weekdays from 11–2 PT on X and YouTube, with full episodes posted to Spotify immediately after airing.
Described by The New York Times as “Silicon Valley’s newest obsession,” TBPN has interviewed Mark Zuckerberg, Sam Altman, Mark Cuban, and Satya Nadella. Diet TBPN delivers the best moments from each episode in under 30 minutes.
It's the year of the fireworks.
Speaker 2:And and we get to celebrate it twice.
Speaker 1:We because
Speaker 2:I think we I remember we talked about it at the beginning of the year Yes. But the Chinese New Year did not start until today.
Speaker 1:It's Lunar New Year. Right? Worshippers burned large incense sticks on Monday outside a temple in Hong Kong to mark the Lunar New Year. There we go. It falls on Tuesday. People around Asia celebrate the start of the year of the horse.
Speaker 2:In the UltraDome, it's the year of the horse
Speaker 1:every year. Every year, I think so. Everyone is talking about the Cournot equilibrium, at least Dario, Amade, and Dorkash Patel are
Speaker 2:And you.
Speaker 1:And me, and some folks on the timeline. We're going back and forth. And basically trying to get to this question of
Speaker 2:I feel like we should give the context on the titling strategy Oh, yes. Because on the run of show, we're like, we'll title an essay, Why Is No One Talking About the Cournot Equilibrium? Because this one guy had this super viral essay Yes. A year or two ago, and the title was, Why Is No One Talking About Marc Andreessen? And we were laughing about it so much because he's one of the most talked about Oh, yeah.
Speaker 2:Investors Yeah. In venture. He's ever
Speaker 1:Minus List constantly. He's written Every essays and
Speaker 2:talking about,
Speaker 1:like been viral million times. Someone that many bots.
Speaker 2:Yeah. Someone that everyone in the industry has an opinion on Totally. Already.
Speaker 1:Totally.
Speaker 2:He's not like a Midas lister
Speaker 1:Yeah. Yeah. Yeah. Not a lot
Speaker 2:of people. It's just like everyone's talking about it. But in this
Speaker 1:case. The basic idea is that if there's only a few players in a given market, you can think about any specific market, lemonade stands or whatever, and they aren't competing on price, they will compete on supply. They'll try to predict what their competitors are doing and then respond accordingly. And this is really, really relevant to the AI lab discussion, because you can tell that even though all the leaders of the AI labs say, I don't think about the competition, I don't talk about the competition, or they use general terms, they're all very clearly obsessed with what everyone else is doing, and they think about it constantly. And if someone's buying $10,000,000,000 of compute over here, they're going to counter with $8,000,000,000 over there or try and jump to $12,000,000,000. And everyone's sort of keying off of each other. You know, Microsoft pauses.
Speaker 1:AWS goes all in. There's all these, like, horse races. It's why SemiAnalysis exists and provides great, you know, cross functional data. Outside of tech, there's this discussion that I see that's always funny to me where people would be like, the price to earnings ratio
Speaker 2:Brian says they don't want you to talk about
Speaker 1:the They don't want us to talk about it. They don't want us to talk about it for sure. There's this discussion. I saw him very sincerely saying, the price to earnings ratio for OpenAI and Anthropic is just simply too high. And I was like, earnings?
Speaker 1:Like, these companies are losing money. They don't have a price to earnings ratio. Divide by zero, gonna
Speaker 2:blow your mind.
Speaker 1:Yeah. It's it's so much worse than you think. Right? They're not making any money. The other side of things is the inference factory.
Speaker 1:So this is essentially a manufacturing business. You have variable costs, so GPUs, power, engineering overhead, and then your revenue is subscriptions, API usage, and enterprise contracts. And so when you just look at inference, you see positive contribution margin. And we can see that because we can compare the cost to inference a model of the GPT-5 class size or the Opus 4.5 size. You can see, what does it look like to run an open source version of that model on commodity hardware?
Speaker 1:It's way, way cheaper than what you pay to Anthropic or OpenAI, so they must have good margins. And everyone sort of agrees at this point that inference margins are in fact healthy. The question is, how do you balance those two pieces and when do you risk overinvesting? That's sort of this Cournot game of chicken that everyone's playing. The Cournot equilibrium comes when a small number of labs, an oligopoly, effectively choose supply at the frontier level and and then the market clears at a high price for frontier access.
Speaker 1:So choosing supply in this case means how many data centers get built, how many GPUs get ordered, but also how much low latency capacity is allocated to the top tier. Right now, OpenAI just did the Cerebras deal, there's Claude Fast, and there's a whole bunch of different modes that will deliver faster inference. And how many of those fast queries you get, how much of the best chips are allocated to a particular tier that you're paying for, is an economic question for the labs. There's a ton of developers and knowledge workers who are happy to pay hundreds of dollars a month or more, but they always want the best available model. This is most people in executive roles in startups.
Speaker 1:Yeah, I got my $200 a month subscription. I'll pay $250 or $100 or whatever, a couple $100, and it just makes me better at my job. I just do whatever I need to do. But don't give me the old thing. I want the best.
Speaker 1:I wanna know that the hallucination rate is as low as possible.
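The supply game described above is the textbook Cournot setup, and it can be sketched numerically. A minimal sketch, assuming symmetric firms, linear inverse demand P(Q) = a − bQ, and a constant marginal cost c; every number here is illustrative, not actual AI-lab economics:

```python
# Cournot equilibrium for n symmetric firms facing linear inverse demand
# P(Q) = a - b*Q with constant marginal cost c. Illustrative numbers only.

def cournot(n, a=100.0, b=1.0, c=20.0):
    """Return per-firm quantity, market price, and per-firm profit."""
    q = (a - c) / (b * (n + 1))   # symmetric best-response fixed point
    price = a - b * n * q         # market clears at total supply n*q
    profit = (price - c) * q
    return q, price, profit

# With a handful of firms, price stays well above marginal cost;
# as the number of firms grows, the margin shrinks toward zero.
for n in (1, 3, 10, 100):
    q, price, profit = cournot(n)
    print(f"n={n:>3}  price={price:6.2f}  margin={price - 20.0:6.2f}")
```

The hosts' point falls out directly: with three or four frontier labs choosing supply, the equilibrium price carries a healthy margin over the cost to serve, and it only collapses toward marginal cost if many equivalent suppliers show up.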
Speaker 2:1% of the time, it makes a career ending mistake. So having a product, not just an API business, gives you leverage, because at some point, the models are smart enough where you don't need to train them. You don't need to train a model that is 4% better, because people are still coming to your application and having a good product experience. Yes.
Speaker 2:So historically, one of the critiques of Anthropic's business
Speaker 3:Mhmm.
Speaker 2:Was that they have to just be on this constant, you know, sort of hamster wheel of training the best model Yes. Because the majority of their business is this API business.
Speaker 1:They're not an aggregator yet.
Speaker 2:Swap it out for a smarter model. That said, they have Claude Code now Yep. Which gives them some more leverage over the
Speaker 1:market. And the really interesting thing is that Dario is now talking about being near the end of the exponential, or maybe producing, like, the final models. We've talked to a few people about this, but it's very unclear if it's possible to create, like, a superintelligence that's, like, 5,000 IQ. It might just be that they get good at all knowledge work and they can answer all tasks, but it's, like, the digital guy. At that point, it does commoditize, and you drop out of the Cournot equilibrium, and customers become more aggressive about switching to cheaper models to cut costs, because the frontier is now commoditized and the entire backlog catches up. Everyone is at the frontier, basically.
Speaker 1:And so in that scenario, you switch over to Bertrand competition, which doesn't really mean that profits go to zero, but there is more competition. And it looks a lot more like the hyperscaler cloud market, which is, I think, what people have been sort of signaling towards. And also, it sort of explains why a lot of the VC firms are getting in multiple companies because they don't think it's going to be winner take all anymore. They think it's going to be much more oligopolistic for the long term, and there will be competition between the major three or four labs. And it will be much more about how can you marshal enough supply, create a huge barrier to entry.
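The Bertrand switch can be sketched the same way. A toy simulation, assuming a fully homogeneous product (the commoditized case) where two firms alternately undercut each other by a small price tick; the transcript's caveat is that differentiation and barriers to entry keep real margins above this floor:

```python
# Homogeneous-good Bertrand competition: with identical products, each firm
# can capture the whole market by pricing just below its rival, so the price
# gets competed down to marginal cost. Toy numbers, purely illustrative.

def bertrand_resting_price(start_price, marginal_cost=20.0, tick=0.5):
    """Alternate undercutting by `tick` until no profitable undercut remains."""
    price = start_price
    while price - tick >= marginal_cost:
        price -= tick  # the rival undercuts and takes the market
    return price

print(bertrand_resting_price(40.0))  # competed down to marginal cost: 20.0
```

Compare the resting price here with the Cournot case: same demand, same cost, but price competition over an undifferentiated good erodes the entire margin, which is why commoditization of the frontier changes the game.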
Speaker 1:Like, you and I could start an AWS competitor tomorrow, but it's going to be extremely expensive to bring up data centers that just serve web apps everywhere, let alone AI stuff, right? Building all those data centers. You're thinking what I'm thinking? You're thinking AWS is
Speaker 2:better? Thinking that I was thinking
Speaker 1:You were saying when does Steve
Speaker 2:I was talking with my buddy my buddy Ben on Sunday. We both live in Malibu. He was thinking of just getting some chips and setting up Malibu inference.
Speaker 1:There we go.
Speaker 2:Just just the name alone. It sounds like you could get at least at least Malibu inference.
Speaker 1:That's really funny.
Speaker 2:Would would be a beautiful name for for a Neo Cloud.
Speaker 1:Yeah. Let's play the clip of Dwarkesh Patel and Dario Amodei discussing the economics of AI labs. Let's just imagine we're in an economics textbook. We
Speaker 4:have a small number of firms. Each can invest a limited amount in, you know, or, like, each can invest some fraction in R and D. They have some marginal cost to serve. The gross profit margins on that marginal cost are, like, very high, because inference is efficient. There's some competition, but the models are also differentiated. There's some, you know, companies will compete to push their research budgets up, but, like, because there's a small number of players, you know, we have the what is it called?
Speaker 4:Cournot equilibrium, I think, is what the equilibrium is called. The point is, it doesn't equilibrate to perfect competition with zero margins. If there's, like, three firms in the economy, all kind of independently behaving rationally, it doesn't equilibrate to zero.
Speaker 3:Help me understand that, because right now we do have
Speaker 2:three leading firms and they're not making profit.
Speaker 4:That's a good question. Yeah. What is changing? Yeah. So again, the gross margins right now are very positive.
Speaker 4:What's happening is a combination of two things. One is we're still in the exponential scale up phase of compute. Yeah. So basically, that means a model gets trained. Yeah.
Speaker 4:It costs, you know, let's say a model got trained that costs a billion dollars last year. And then this year, it produced $4,000,000,000 of revenue and cost $1,000,000,000 to inference from. So again, I'm using stylized number here, but 75 These are my numbers.
Speaker 1:Let me
Speaker 4:Gross just paint a random margins and this 25% tax. So that model as a whole makes $2,000,000,000 But at the same time, we're spending $10,000,000,000 to train the next model because there's an exponential scale up. And so the company loses money. Each model makes money, but the company loses money. The equilibrium I'm talking about is an equilibrium where we have the country of geniuses, we have the country of
Speaker 2:geniuses, but in the data center.
Speaker 4:that the model training scale up has equilibrated more. Maybe it's still going up. We're still trying to predict the demand. But it's more leveled out.
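The stylized numbers in the clip check out as simple arithmetic. A quick reproduction, using exactly the figures given (in billions of dollars):

```python
# Reproducing the clip's stylized lab economics, in billions of dollars.
train_cost_last_year = 1.0   # training cost of last year's model
revenue_this_year    = 4.0   # revenue that model produces this year
inference_cost       = 1.0   # cost to serve that revenue
train_cost_next      = 10.0  # spend on the next model in the exponential scale up

gross_margin = (revenue_this_year - inference_cost) / revenue_this_year
per_model_profit = revenue_this_year - inference_cost - train_cost_last_year
company_cash_flow = revenue_this_year - inference_cost - train_cost_next

print(gross_margin)        # 0.75 -> the 75% gross margin in the clip
print(per_model_profit)    # 2.0  -> each model makes $2B on its own
print(company_cash_flow)   # -7.0 -> but the company loses money while scaling
```

The moment the training spend levels out, the equilibrated case Dario describes, the $10,000,000,000 line stops growing exponentially and the company-level number flips positive even though nothing about the per-model economics changed.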
Speaker 1:There is another fun clip that we should watch from A Beautiful Mind. Jordi, have you seen A Beautiful Mind? It won the Oscar for Best Picture, I believe. It's about the mathematician John Nash. Have you seen A Beautiful Mind?
Speaker 5:I have not.
Speaker 1:Wow. Unc status over there.
Speaker 2:I was walking on the beach with Senra. And we walked by an incredibly famous one of the top movie directors of the last probably ten years.
Speaker 1:Wait, really?
Speaker 2:And Senra was like, do you see that? And I was like, see what? It's a guy with a dog.
Speaker 1:That's hilarious. I feel like that your beach tour has been really star studded lately. This is a different from the previous one you mentioned. Correct?
Speaker 2:Yes. Yes. Wow. Yes.
Speaker 1:That's remarkable. Well, let's pull up the clip. It's from A Beautiful Mind.
Speaker 3:Nash, you might wanna stop shuffling your papers for five seconds.
Speaker 2:Is that Eric Glyman?
Speaker 1:Yes. It's Eric Glyman.
Speaker 3:Gentleman here.
Speaker 1:In the Ramp biopic, we got our cast right here. Oh. This is the original, like, luxmaxing movie.
Speaker 3:Everyone else feels she should be moving in slow motion. Oh. Will she want a large wedding, you think? Shall we say swords, gentlemen? Pistols at dawn?
Speaker 6:Have you remembered nothing? Recall the lessons of Adam Smith, the father of modern economics.
Speaker 3:In competition, individual ambition serves the common good. Exactly. Every man for himself, gentlemen. And those who strike out are stuck with her friends.
Speaker 6:I'm not gonna strike out.
Speaker 3:You can lead a blonde to water, but you can't make her drink. I don't think he said that. All right. Nobody move. She's looking over.
Speaker 3:Why is she looking at Nash?
Speaker 6:Oh, god. All right. He may have the upper hand now, but wait until he opens his mouth.
Speaker 3:I guess that wasn't in the history books. I
Speaker 1:think this this is very, very stylized.
Speaker 6:What are you talking about?
Speaker 1:And completely apocryphal. Like, he definitely thought of this theory, but not at a bar.
Speaker 3:They block each other. Not a single one of us is gonna get her. So then we go for her friends. But they will all give us the cold shoulder, because nobody likes to be second choice. But what if no one goes for the blonde?
Speaker 3:We don't get in each other's way, and we don't insult the other girls. Marked. It's the only way we win. That's the only way we all get laid.
Speaker 1:So he's describing the prisoner's dilemma.
Speaker 3:Adam Smith said
Speaker 1:Where everyone must work together.
Speaker 3:Best result comes from everyone in the group doing what's best for himself. Right? That's what he said. That's right. Incomplete.
Speaker 3:Incomplete. Incomplete. Okay. Incomplete. Because the best result would come from everyone in the group doing what's best for himself and the group.
Speaker 6:Nash, if this is some way for you to get the blonde on your own, you can go to hell. Governing dynamics, gentlemen. Governing dynamics. Adam Smith.
Speaker 3:Was wrong. Yep. There we go. Careful. Thank you.
Speaker 5:Very
Speaker 2:fine. Dave says, just joined the stream. We watching a movie.
Speaker 1:Yeah. A lot of game theory going on in the AI wars right now. Everyone's trying to figure out how far to push it. There's a fair amount of risk. There's still the Cournot game of chicken around who will invest the most in advancing the frontier.
Speaker 1:But the end state looks a lot more durable than pure model commoditization and the perfectly competitive situation that many were predicting a few years ago.
Speaker 2:Google Capital says, I thought Dwarkesh had a good point that software engineering is the only job where the full context needed to do the job is available to an AI agent via the code base. And I didn't think Dario had a good answer for why automating other jobs will be as easy. This got a lot of people reacting, disagreeing generally that all of the full context needed to do the job is available. But I do think it's something we need to figure out.
Speaker 1:Yeah, we were debating this because there was a post that was just sort of like a Wojak reaction that was just making fun of this. And it wasn't clear if they were saying that they were agreeing or disagreeing. But basically, my take was, well, it's possible that a lot of the full context needed to do the job of a lot of different white collar jobs is, in fact, logged. It's just logged in the final product, which is like a deck or spreadsheet or a decision, and then a whole bunch of emails, a whole bunch of Slacks, and then a whole bunch of Zoom calls that's recorded. And so yes, if you're running a business where a lot of work gets done in smoky bars late at night and back alley deal making, sure, that's going to be harder to automate.
Speaker 1:But in the world where it's someone sitting in front of a computer and there's a screen recorder running, like, you should be able to pull up most of the context. At the same time, you can't just snap your fingers and go back and get every decision that was made in the '80s that allowed Coca Cola to become a dominant soda maker.
Speaker 2:You literally can't replace people on calls. Just knowing the call is being recorded and used to train something to replace them, they're just like, I'll tell you offline. I'm not speaking this secret into Yeah, yeah.
Speaker 1:Golf this weekend?
Speaker 2:Debate around the posture
Speaker 1:Yes.
Speaker 2:Dwarkesh. Dwarkesh, his posture was absolutely excellent. He looks fantastic.
Speaker 1:I love this sweater. The crew neck works really well. The pushed up sleeves is a particular choice. Didn't translate into that Chad Wojak, but he looks fantastic here. Lot of fun on the timeline with the looksmogging or whatever, the looksmaxing.
Speaker 1:I don't even know. This was Framemogging. That's the
Speaker 2:Framemogging. Kept bringing up the example of a video editor
Speaker 1:Yeah.
Speaker 2:Saying, yeah, but when will the models be good enough to edit videos well?
Speaker 1:Yes.
Speaker 2:Pick out moments? Yes. Give me two years and another $500,000,000,000. We've tried every tool there is. They can't do it yet.
Speaker 1:Tricky. I don't know. I don't know what's
Speaker 2:And it's not even that we're not trying the tools to replace the people on our team. We're trying to make them have higher output.
Speaker 1:One interesting thing is that there aren't a lot of open source, like, Premiere profiles. Like, I've edited a ton of videos for YouTube. There's a whole bunch of cuts in there. What I cut out, what I didn't. You could have that record, but it's not stored in GitHub.
Speaker 1:It's just you can't necessarily train on it. You can train on the final product and understand, but you don't understand what actually got left on the cutting room floor. There's this whole concept of kill your darlings, like when you're in the edit, like you need to be cutting more. You're like, oh, I like that shot. It's so cinematic, so cool.
Speaker 1:But does it actually advance the story? No. So you cut it down. I was watching The Matrix this weekend, and there's this amazing shot when Neo and Morpheus are going to visit the Oracle. They reach for the doorknob, and the doorknob has this perfect reflection, and the reflection shows Neo and Morpheus. And they had to do this crazy VFX shot to hide the camera in what looks like Morpheus' coat, because if you point a camera at a mirror, you see the camera.
Speaker 1:And you don't wanna see the cameraman there. That ruins the shot. And so they did all this crazy stuff to, like, to, like, you know, cover up the camera. And I'd seen the behind the scenes and been like, wow. That's really impressive.
Speaker 1:And in my memory, I thought it was like, oh, it's such an important shot. They probably, like, lingered on that for, like, five seconds to really let it sink in. Like, they're they're pulling a trick on the audience. It's beautiful. Yep.
Speaker 1:It's like half a second. And they did all this work, and then they knew that, from a storytelling perspective, you don't wanna hang out and watch a picture of a doorknob for five seconds. And so all these decisions, they sort of get chronicled, but they don't get neatly organized the way a GitHub log does, with pull request discussions and what happens. So it'll be difficult. So maybe two years and another $500,000,000,000 does it, but it's coming.
Speaker 1:So we'll keep monitoring it.
Speaker 2:Andrew Reid says horses don't stop, they keep going.
Speaker 1:Wait, did he actually say that?
Speaker 2:Yes. No way. Yes. In response to 2026 being the year of the horse
Speaker 1:I love it.
Speaker 2:One of the greatest lyrics of all time.
Speaker 1:Originally, to explain the joke, it's a Young Thug song. And the actual lyric is, hustlers don't stop, they keep going. Oh, really? But it sounds like horses. And so people put, horses don't stop, they keep going.
Speaker 1:They show the AI generated image of the horse bench pressing. And it's incredibly inspiring.
Speaker 2:There's a lot of Young Thug songs that are hard to really decipher. A 100%. Let's hit the size gong Yes. For this Pennsylvania Girl Scout. Six years old, breaks records selling 87,000 boxes of cookies.
Speaker 2:She's unstoppable. Unstoppable? That is
Speaker 1:How much is that? What's the ARR?
Speaker 2:Estimating that it's somewhere around $600,000 of sales
Speaker 1:It's amazing.
Speaker 2:At only six years old. Really incredible stuff. Heartwarming.
Speaker 1:That's awesome.
Speaker 2:India's Adani Group to invest $100,000,000,000 in AI infrastructure.
Speaker 1:We gotta hit the gong again.
Speaker 3:Hit it again. The Indian conglomerate's investment may boost the country's ambitions to become
Speaker 2:an AI power. India's Adani Group, an energy and logistics giant, said it would invest $100,000,000,000 to develop large scale data centers by 2035, the largest such commitment in India so far. Tyler, what do you think about the timing here? Is this gonna be too late?
Speaker 2:Are the clankers gonna like, is 2035? How are we looking there? I mean, it's singularity. Yeah. I'm
Speaker 5:very bullish on on the clankers coming pretty early. Mhmm. So, you know, time will tell, I guess.
Speaker 1:I'll see.
Speaker 2:I cannot wait to pull up this clip.
Speaker 1:It is a big number. I feel like a lot of countries have been teasing big numbers. This is
Speaker 2:like they're kind of mogging Macron.
Speaker 1:Yeah, this is like a really big number. You see a bunch of like multibillion dollar deals, multibillion dollar releases. But this is like a serious, serious, serious investment. So good news.
Speaker 2:Micron is spending $200,000,000,000. Congratulations for saying the biggest number. Micron is spending $200,000,000,000 to break the AI memory bottleneck. For decades, memory chips were low margin commodity products. Now the industry can't make enough to satisfy data centers' hunger.
Speaker 1:Just like this one company is like, yeah, we're gonna spend twice as much as India.
Speaker 2:Micron Technology is the largest American maker of memory chips, the tiny slices of silicon that store and transfer data and help power everything from smartphones and car computers to laptops and data centers. Micron is rushing to add manufacturing capacity to avert the biggest supply crunch the memory industry has seen in more than forty years.
Speaker 1:Did you hear that the PS6, the PlayStation 6, is now delayed because of memory shortages? To 2029. Pretty, pretty big delay. They really don't refresh
Speaker 2:You just created a trillion gamers. No, seriously, I think adding insult to injury
Speaker 1:Yeah.
Speaker 2:To
Speaker 1:The gamers might actually rise up. The gamers might rise up. They might be an important voting bloc.
Speaker 3:A lot
Speaker 1:them are a lot of them are of age to vote and a lot of them would rather have new gaming hardware necessarily AI slop in the feed. And they're like, yeah, I can't afford the new PC that I wanted. What do think?
Speaker 5:I don't know. I mean, I feel like this says a lot about how good the PS5 is, right? Because they can like afford to just like postpone the PS6. Like what games Oh, now you don't want
Speaker 2:technological progress?
Speaker 1:Wow. I want it to
Speaker 5:go to the data centers. Do I care about the next game's graphics? Have they gotten that much better in the past five years? Yeah, maybe. But it's like Yeah.
Speaker 5:For the actual gameplay, is it that important if like, the actual pixels are
Speaker 1:Realistically, a lot of this stuff should be moving to the cloud soon if it's not already. And then if you're running in the cloud, you can upgrade the hardware. And in theory, you should be able to run, like, a gen AI upresing pass to make it more photoreal. And I feel like that's going to be where more of the juice is squeezed out of the graphics than just continuing on the traditional path of more pixels, more ray tracing. It'll be, make a really beautifully designed video game that works really well, really tight deterministic interactions, so it's satisfying.
Speaker 1:And then give it a layer
Speaker 2:of We got to have, like, a RAM trader on the show, really somebody that's in the thick of deal making in this space. Moving on, Lucas Shaw was on a tear over the weekend reporting on the Warner Brothers Paramount conversations. He says this morning, Warner Brothers is going to resume talks with Paramount after two months of rejecting them and playing mind games. The company still says it's committed to Netflix, but needs to find out just how much the Ellisons will offer. He originally reported on this Sunday, but it's being confirmed today.
Speaker 2:Again, we kind of knew this was gonna happen. The Ellisons had been saying, we're giving you a big number, but it's not our biggest number. It's not our best and final. So no surprise here.
Speaker 1:Let's flip over to Claude Bot. Kent Dodds says, names the thing Claude Bot. Anthropic asks for a rename. Renames to OpenClaw. OpenAI buys it. Legendary couple of weeks.
Speaker 1:No confirmation on buying. It's an open source project. They're keeping it open source. There's a whole bunch of different
Speaker 2:Yeah, names. Dave Moran, I remember reading, is going to step in to, I believe, run the foundation that will steward the open source project. And then Peter's obviously joining OpenAI. I'll take this day off to figure out this whole OpenClaw thing, said every entrepreneur on Presidents' Day weekend. We've talked about this on the show before. Long weekends are really good for AI progress and AI diffusion.
Speaker 1:Petition for three day weekends to speed up AGI timelines. Probably would work, for sure. Wes: Fumblegate. Fumblegate. Did Anthropic fumble OpenClaw?
Speaker 2:Will Brown says, honestly crazy that OpenClaw sold for 1,000,000,000. Like, he's really the first solo $5,000,000,000 founder. Time will tell if it's worth the 15,000,000,000 that OpenAI spent on the acquisition, but it's pretty wild that you can just vibe code an open source project to make 40,000,000,000 in a couple months now.
Speaker 1:It really, really nails it, because everyone jumped immediately to a billion. Immediately.
Speaker 2:Off of nothing.
Speaker 1:Off of nothing. Off of, like, one rumor. It's very, very funny. Who knows? Alex Cohen breaks it down for Gen Z.
Speaker 1:If you're wondering what happened today, Claude was mocking OpenAI for weeks. Then this solo dev ships Claude Bot, which was the fastest growing open source thing ever. Absolute luxmax for the whole ecosystem. Anthropic tries to dairy goon him with legal. Dev renames to OpenClaw.
Speaker 1:OpenAI slides in like a void pulling Chad with acquisition interest. OpenClaw gets acquired by OpenAI. Now Anthropic is getting jester gooned by the entire timeline, and OpenAI is gigamaxing off their fumble. Anthropic could have just let him cook. Instead, they went full moid and got outframed by the jester maxers at OpenAI.
Speaker 1:And so the luxmax, like, lingo is really it feels hilarious. I do wonder the half life. I feel like it's
Speaker 2:I think it's over.
Speaker 1:It's gotta be I think it's towards the end of this boom. But the rise of the Kick streamers is certainly the story of the year. Certainly the story of
Speaker 3:the year.
Speaker 2:What did Claude do? What did Claude do? Pentagon has said that Anthropic will pay a price. There was reporting last week that Claude was leveraged in some way during the Maduro planning, the planning of the Maduro raid. I was imagining in my head Dario as Walter White in the SUV just being like No.
Speaker 2:Watching the logs and seeing Pete Hegseth running a deep research report on Maduro.
Speaker 3:Who
Speaker 2:is Nicholas Maduro? He's just like, no.
Speaker 1:No. Don't do it.
Speaker 2:Yeah, very unclear how it was used. But a lot of pushback. SAG AFTRA
Speaker 1:What happened?
Speaker 2:Put out a statement
Speaker 1:Yeah.
Speaker 2:On Seedance two point zero.
Speaker 1:It's not a comment. It's a statement. The
Speaker 2:Chinese have been quivering in fear ever since
Speaker 1:Oh, no.
Speaker 2:SAG came after them.
Speaker 1:What did say?
Speaker 2:SAG stands with the studios in condemning the blatant infringement enabled by ByteDance's new AI video model, Seedance two point zero. The infringement includes the unauthorized use of our members' voices and likenesses. This is unacceptable and undercuts the ability of human talent to earn a livelihood. It is kind of interesting that just in this statement they're admitting, it's so good, you're going to make it impossible for our members to earn a living, which doesn't actually
Speaker 1:It says undercuts. Undercuts. Doesn't say eliminates.
Speaker 2:Seedance two point zero disregards law, ethics, industry standards, and basic principles of consent
Speaker 1:AI development demands responsibility. That is nonexistent here. Completely correct. Some of the Seedance videos are insanely infringing. It's just like, wow, it's Larry David.
Speaker 1:Beginning of the end, says Growing Daniel. Disney, also
Speaker 2:as expected, sent a cease and desist letter to ByteDance over Seedance two point zero. It's crazy. I wonder how ByteDance will actually react to this pushback. They expected it.
Speaker 1:Yeah.
Speaker 2:They know that they're not abiding by a number of different US laws. Whether or not they care is another thing. If you make an AI version of Andrew Huberman and you get in a fresh ad account, you can probably start spending money before the Huberman Lab team finds out what you're doing.
Speaker 1:I would disagree. I think Rob's on top of it. I think he's goated. Anyone else, any other team, would be cut.
Speaker 2:I don't know. I mean, he might respond faster than the others, but this has certainly happened. So this is interesting. If you're already an A list massive superstar, I think you see some stuff like this. And you're actually like, great, I'm going to be able to shoot a movie in a week from LA.
Speaker 2:I'm not going to have to travel to these insane, exotic locations and spend a week in the desert filming all these clips. So if you're like a Timothee Chalamet, this is maybe like, yes, you're worried for the overall industry. But at the same time, you're thinking, Okay, name and likeness is now infinitely scalable. I can still restrict the supply to some degree. I'm not going to tell any movie studio, hey, you can make a movie with me, whatever.
Speaker 2:You're still going to restrict it. The question becomes new talent that's emerging, trying to build their brand. At what point do studios say, we're just going to make a character we're going to make a new actor out of thin air, place him across different movies, build him up over time. You could imagine I don't think a company like CAA would do this because all their talent would be like, what are you doing? Taking our job.
Speaker 2:But I could imagine a group trying to make a Lil Miquela style actor that you build up over time. One thing that we'll find out is how much the actual actor's real life matters in the context of their career. Like, if Timothee Chalamet is dating Kylie Jenner, does that increase his appeal on the big screen? And I would say yes, probably. There's so much fixation on the lives of all of this talent.
Speaker 2:Can't wait to
Speaker 1:tomorrow at 11AM sharp Pacific.