TBPN

Our favorite moments from today's show, in under 30 minutes. 

TBPN.com is made possible by: 
Ramp - https://ramp.com
Figma - https://figma.com
Vanta - https://vanta.com
Linear - https://linear.app
Eight Sleep - https://eightsleep.com/tbpn
Wander - https://wander.com/tbpn
Public - https://public.com
AdQuick - https://adquick.com
Bezel - https://getbezel.com 
Numeral - https://www.numeralhq.com
Polymarket - https://polymarket.com
Attio - https://attio.com/tbpn
Fin - https://fin.ai/tbpn
Graphite - https://graphite.dev
Restream - https://restream.io
Profound - https://tryprofound.com
Julius AI - https://julius.ai
turbopuffer - https://turbopuffer.com
fal - https://fal.ai
Privy - https://privy.io
Cognition - https://cognition.ai
Gemini - https://gemini.google.com

Follow TBPN: 
https://TBPN.com
https://x.com/tbpn
https://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231
https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235
https://www.youtube.com/@TBPNLive

What is TBPN?

Technology's daily show (formerly the Technology Brothers Podcast). Streaming live on X and YouTube from 11 AM to 2 PM PST, Monday through Friday. Available on X, Apple, Spotify, and YouTube.

Speaker 1:

Popegate, the debate over the pope. The pope, he's a poster. I like it. He posts almost every day, sometimes up to five times a day. Yeah.

Speaker 1:

He's got range. He's got range. That's right. If there's a natural disaster, he'll pray for that. Talks about business.

Speaker 1:

Talks about AI. Talks about media. He talks about all sorts of stuff. It's a really great feat. He had a great post about media.

Speaker 1:

What did he say about media? He said the media cannot and must not separate itself from the destiny of truth.

Speaker 2:

That hits. Does this mean he's a neofactual media guy?

Speaker 1:

I think so. I think he's one of us. Transparency of sources and ownership, accountability, quality, clarity, and objectivity are the keys to truly opening citizens' rights for all peoples. The world needs honest and courageous entrepreneurs and communicators who care for the common good. We sometimes hear the saying, business is business.

Speaker 1:

In reality, it is not so. No one is absorbed by an organization to the point of becoming a mere cog or a simple function. This is the type of stuff you'd see on, like, a Pinterest board. It's pretty generic, but it's hard to disagree with. Marc Andreessen was disagreeing with one of the Pope's takes about AI.

Speaker 2:

I think it is generally healthy that the Pope is gonna comment and provide some sort of guidance, or his own framework, for how we should think about developing AI. That seems healthy.

Speaker 1:

The play-by-play here was Marc Andreessen quote-posted that AI post with an image of Kat Stoffel, who's the GQ features director who went viral for interviewing Sydney Sweeney. People were actually kind of confused about what that particular meme means in this context.

Speaker 2:

With a meme template, there's two ways to read into it. There's the actual visual

Speaker 1:

Mhmm.

Speaker 2:

Which is, like, what is the expression on the person's face. Right? You don't have to have watched The Big Short

Speaker 2:

To understand

Speaker 1:

If I'm somebody staring

Speaker 2:

at the screen just like

Speaker 1:

Confused. You're just saying, I'm confused by this.

Speaker 2:

Yeah. So Michael Burry in The Big Short

Speaker 1:

Yes.

Speaker 2:

Just looking confused at a screen.

Speaker 1:

Yes.

Speaker 2:

Right? And it obviously has more meaning if you've seen the movie and you understand the full context, but somebody doesn't have to know the context now. So

Speaker 1:

Most of the timeline interpreted Marc's post as: the pope is scolding AI builders and shouldn't be. Is that roughly the way you interpreted it? I think a lot of the timeline interpreted it as the pope scolding AI builders. And there was another kind of low-grade rumble on the timeline about Brad Gerstner's comments about decels. I feel like I'm just pro moral discernment in AI development, and also just pro moral discernment everywhere, I guess.

Speaker 2:

Sort of a philosophy for life.

Speaker 1:

It doesn't feel like a wildly hot take. But obviously, you need to understand that moral discernment and AI safety are linked, but they're not exactly the same. Last year, or maybe it was 2023, there was a big debate about fast takeoffs, AI doom, paperclipping scenarios. That was the stuff people were talking about. But this year, I feel like we've been much more focused on much less sci-fi doomsday scenarios.

Speaker 1:

So, GPT psychosis drives a friend crazy. That's super real. Super real. Romantic companions crashing the birth rate. That's a super real discussion to have.

Speaker 2:

I think the romantic companion thing is being debated sufficiently. Right?

Speaker 1:

I agree. I agree.

Speaker 2:

There's been pushback from even the tech community on, hey, maybe this isn't good.

Speaker 1:

But that's where the discussion has been, much less so about, oh, are there gonna be bioweapons tomorrow from GPT-6? Those are all real problems. They deserve both discussion in the public square, which we've been a part of, and also real work inside the AI labs. And I don't think you should just throw "decel" at someone who's identifying a negative externality of a new technology early on. I think that's not necessarily decelerationist.

Speaker 2:

And you'd be calling me a decel all the time.

Speaker 1:

So I think it's important: if you're developing a new technology, there might be negative externalities, pollution. There might be some risk to the birth rate, or driving people crazy.

Speaker 2:

Has there ever been a technology that didn't have negative externalities?

Speaker 1:

Definitely not podcasting. Plenty of negative externalities with podcasting. So you want to have a chat. You want to have a talk and understand what's going on. But it's also important to employ Bayesian statistics, in my opinion.

Speaker 1:

So you have to understand the base rates. If you take a technology from zero to a billion users, you kind of just get all the craziness of humanity at scale for free. If humans kill each other, and you have a billion humans on your platform, there are gonna be humans on your platform that kill each other. You need to separate out: is this actually the beginning of a trend?

Speaker 2:

Are we catalyzing it?

Speaker 1:

And this is happening with the very unfortunate lawsuits around people taking their own lives related to ChatGPT. There are people that use cars or phones and Google search and ChatGPT because those are such widespread things. We need to understand, what's the base rate? And then, is this actually an uptick?

Speaker 2:

On the suicide problem on the platform, it seems like a lot of them are people having a conversation. They're suicidal. You can have a debate on, if someone is suicidal, should the product work at all?

Speaker 2:

Maybe? Maybe it should not work at all. But the part of the debate that popped up last week

Speaker 1:

Mhmm.

Speaker 2:

Was that somehow a guy had prompt-engineered the experience to such a degree that it was encouraging

Speaker 1:

It's better.

Speaker 2:

A person to take his own life, basically saying, yeah, you've lived a great life, I'm rooting for you, this is the right move, to kind of paraphrase it. And it was just incredibly, incredibly dark.

Speaker 1:

The Bayesian statistics would say, okay, if there's a billion people on the platform, are people that use the platform more likely to do something terrible than they were without it? So is it actually increasing the level of bad stuff happening, or is it decreasing it? Because the number of people who commit crimes who have also used Google is probably very high. I could probably show you a lot of people that used Google and then committed crimes.

Speaker 2:

Right? Well, the thing that's difficult in the context of ChatGPT is that there's probably a bunch of people that, because of ChatGPT, haven't killed themselves, because they have somebody to speak with and they feel like somebody will listen to them. Right? Maybe there's a million examples of it successfully encouraging somebody to find another

Speaker 1:

This is the classic thing with Instagram. There was this report that showed that one third of young women who used Instagram perceived themselves less well. It gave them body image issues. As soon as that was reported, it was a bombshell: thirty percent feel worse after using Instagram. And I was like, what's happening with the other two thirds?

Speaker 1:

Do they feel better? Because then it's a net positive, which is weird. Maybe everyone else just feels the same, and then 30% feel worse. That's a downgrade. But if 66% feel great

Speaker 1:

And then 33% feel worse, we should still address that, but that's not the same as a net negative. It's not having a negative impact overall. And so all of these things go into it: you need to be a scientist, and you need to be doing the statistics to understand. The question of moral discernment is, with certain technologies, I do think you have the ability to just say, we're going to go a lot further than the baseline. So I think this is what's happening with Waymo, honestly.
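
As an aside, here is a minimal sketch of the base-rate comparison being described, with every number invented purely for illustration (none of these are real figures):

```python
# Hypothetical numbers only. The question from the discussion: given a
# billion users, is the rate of a bad outcome among users higher or
# lower than what the population base rate alone would predict?

population_base_rate = 14e-5     # assumed annual rate of the bad outcome
platform_users = 1_000_000_000   # a billion users
incidents_among_users = 120_000  # assumed observed incidents among users

# Expected incidents if the platform had no effect at all:
expected = population_base_rate * platform_users

print(f"expected from base rate alone: {expected:,.0f}")
print(f"observed among users:          {incidents_among_users:,}")
print(f"ratio (>1 suggests an uptick): {incidents_among_users / expected:.2f}")
```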

Speaker 1:

I think Waymo could deploy self-driving cars right now and be like

Speaker 2:

Everywhere. Everywhere, you're saying.

Speaker 1:

If they deployed them everywhere without teleoperation, they'd probably be killing, like, hundreds of thousands of people. And they'd be like, yeah, well, it's about the same as what humans do.

Speaker 2:

It's less. It's less. Safer than cars.

Speaker 1:

If they were like, it's 10% less. Tyler, do you know how many people die from motor vehicle accidents every year? Can we look that up? Because if

Speaker 2:

It has to be, like

Speaker 1:

I think it's, like, forty thousand or something like that. What do we think?

Speaker 3:

In the US, it's forty thousand.

Speaker 1:

And if Google came out and were like, yeah, we're gonna kill thirty-nine thousand people, we're gonna save one thousand lives, people would be like, no thanks, actually. This is terrible.

Speaker 1:

Like, don't do that. They've just made that decision, and it feels like they've pushed really, really hard to jump straight to something that's fully safe. And I think a lot of AI builders have a similar ability and a similar opportunity to say, hey, if someone's on the verge of doing something violent, let's really, really work hard on this problem to make sure the incidence rate is as close to zero as possible.

Speaker 2:

Claude came out, or Anthropic came out, and they had some update where they were talking about moments where the product would call the police on you. Right? If they felt like there was some meaningful threat. People freaked out about that because they're like, I don't want my computer

Speaker 2:

Calling, you know, if somebody was talking about a hypothetical

Speaker 1:

Yeah.

Speaker 2:

And then cops show up at their door.

Speaker 1:

That is a complex question, a complex issue, entirely new, unexplored territory for technology. But what's so clear is that it is a moral question, and it needs to be discussed with the weight of morality. You cannot just write a math equation to understand how to solve that problem. I think AI safety research is so complex. And I think it's good that there's a ton of smart people in AI research that are super quantitative and can look at the data and actually understand: is this going to cause the birth rate to collapse?

Speaker 1:

Or is this going to cause more violence? Or is this gonna cause more fraud or insanity? And then also, they can go in and potentially design a system that can detect, oh, this person's getting sort of crazy. Let's pull them back.

Speaker 1:

We're in this weird territory where it feels like the AI safety project is valuable, but it is the business of black swan hunting. If you went back two years and polled all the different people that were worried about the impact of AI, how many of them would have said GPT psychosis, romantic companions, Infinite Jest-style AI video feeds? It's just interesting: AI safety, the moral discernment crowd, this stuff is important, but it's hard to predict what it will actually look like, what the results will be, what problem you'll be fighting, because it's unknown unknowns, basically.

Speaker 2:

I think most people are unaware that Pope Leo's name choice was intentional. The last Leo, Leo XIII, led the church through the Industrial Revolution and helped make sense of technology. It's clear Pope Leo sees himself continuing that work, guiding the church through an era of transformation with AI and emerging technologies at the center.

Speaker 1:

There was a real preference cascade against Marc, where once Growing Daniel had posted, a lot of people were jumping on the bandwagon. And there was this one by Michael Page that says, reminder that Marc is bringing this level of seriousness and nuance on what might be the most complex and high-stakes policy topic of our generation to DC with his $100,000,000 super PAC and lobbying fund. I don't know that that's true.

Speaker 2:

Part of why I don't think it's worth reading too much into it is that he has not shared a single word.

Speaker 1:

I sort of disagree with the characterization that Andreessen Horowitz doesn't fund any SaaS. They do. They have big positions in very boring enterprise SaaS companies that are so removed from anything controversial. But taking a flyer on a seed-stage company in your incubator does have a lot of brand impact, which is weird. Yeah.

Speaker 1:

There's a weird, like, aren't they

Speaker 2:

Even then, they're talking about, like, a $750K check versus a $750,000,000 check.

Speaker 1:

They might have put multiple billions into Databricks, or the fully diluted value right now might be in the billions. But it doesn't matter if you have a thousand x more in an uncontroversial category. The controversial one is the one that will blow up on the timeline. So you do sort of have to be careful, and it's a little bit risky.

Speaker 2:

My sense is that the number of people who, one, fiercely defended the pope last night and then, two, went to mass this morning is probably close to zero. Status games. Status games. At church yesterday morning, there was no conversation of Popegate.

Speaker 1:

No. People had kinda moved on by then?

Speaker 2:

Yeah. I guess they'd moved on. Honestly, I don't think they'd moved on by then. No. When I opened my phone afterwards, I was like, wow.

Speaker 2:

This thing's still picking up.

Speaker 1:

The timeline certainly had. Don't make me tap the sign: there has always been some daylight between the influencer VC crowd and the engineer-researchers in tech, but on the subject of AI regulation, it is a complete chasm. And Marc has reasoned so dogmatically against working on decreasing the risk from AI that now he's mocking the Pope for saying that technical innovation carries ethical and spiritual weight and that AI builders should cultivate moral discernment.

Speaker 1:

Yeah. People are in favor of that. I don't know.

Speaker 2:

Opportunity for an AI lab to make merch, you know, a dad hat that just says cultivating moral discernment.

Speaker 1:

The moral discernment company of San Francisco.

Speaker 2:

The pope would not like San Francisco. If Pope Leo takes a trip to San Francisco and just walks on the street at all, he's gonna be very upset. He's gonna be like, this is where AI is getting built? Do you think your boss is scary? Look at this brutal email from Marc Andreessen to Ben Horowitz during the heat of the Netscape product launch.

Speaker 2:

We lined everything up for a major launch on March 5, 1996, in New York. Then, just two weeks before the launch, Marc, without telling Mike or me, revealed the entire strategy to the publication Computer Reseller News. That is a great name. I was livid. I immediately sent him a short email.

Speaker 2:

I guess we're not gonna wait until the fifth to launch the strategy, Ben. Within fifteen minutes, I received the following reply. Apparently, you do not understand how serious the situation is. We are getting killed, killed, killed out there. Our current product is radically worse than the competition.

Speaker 2:

We are now in danger of losing the entire company, and it's all server product management's fault. Next time, do the fucking interview yourself.

Speaker 1:

Bleep. What an aggressive way to talk to your cofounder. It's crazy that they were at each other's throats like this, and then they've been on a generational

Speaker 2:

Ben was a vice president for the directory and security product line at Netscape. Let's give it up for vice presidents.

Speaker 1:

Yeah. No. I mean, the real read on this is, there's a lot of people that would read this and be like, oh, wow, that is unrecoverable for a friendship. And, nope.

Speaker 1:

It is definitely recoverable. It is definitely recoverable. It's actually the foundation

Speaker 2:

of a great

Speaker 1:

a great friendship. I agree.

Speaker 2:

We don't swear on the show. We don't swear in internal communications.

Speaker 1:

But we throw it out regularly.

Speaker 2:

Yeah. We just go straight to getting physical.

Speaker 1:

That's the way you do it. You know, you think of Netscape as a dot-com company, but he's talking about 1996, which is a full five years before the bubble pops. It's March 5, 1996. They're at a level where they're doing strategy reviews with Computer Reseller News and doing press around this thing.

Speaker 1:

Do you have any idea what was going on at that time?

Speaker 3:

Okay. So I believe in 1994, they said Netscape is free for non-commercial use for everyone.

Speaker 1:

Okay.

Speaker 3:

And then this press release was that it's only gonna be free for academic and nonprofit use, not, like, all consumers.

Speaker 1:

Okay. So if you're a consumer, you'd have to buy it? Yeah, it's such an interesting

Speaker 2:

One browser, please.

Speaker 1:

One browser, please. I mean, I told you, you used to get AOL on a disc. So on August 9, 1995, they IPO'd, and then this is 1996. So they're already a public company at this point. And then the bubble just keeps inflating for five years while the Internet grows and grows and grows.

Speaker 1:

What a wild time.

Speaker 2:

They did about $16,000,000 of revenue in the first two operating quarters of 1995. For context, that's like $1,600,000,000 in today's dollars, after the new round of stimulus checks.

Speaker 1:

Do you think the pope actually used AI to generate this? Because Sours here is saying the pope is posting fully AI-generated content about AI. This is the Pangram AI detection result. A very funny gag is to just fake one of these screenshots, which is very easy to do. If somebody writes something, you can just put it in here, say that it's AI generated, post that, and then you're like, owned.

Speaker 2:

This screenshot is making the claim that because it said technological innovation can be a form.

Speaker 1:

Yeah. I don't know how good the AI detectors are these days. Also, I wouldn't be surprised if the Vatican is using AI to translate.

Speaker 2:

And I wouldn't be surprised if Pope Leo is speaking in his study, in Latin, someone is physically writing it down, and that's being passed to somebody who then puts it into a word processor and uses AI to polish it up a little bit.

Speaker 1:

Oh, there was one interesting anti-pope take, sort of anti-pope take, from another

Speaker 2:

I will say, I think this whole Marc Andreessen Popegate debacle is a lesson everyone can take: don't mock the pope. The blowback was fierce and almost instantaneous.

Speaker 1:

From the Peter Thiel Antichrist lectures, there's a segment on the pope. And I thought it was interesting because it's not the most pro-pope take. I don't know. Thiel says that he is very pro J. D.

Speaker 1:

Vance, but he has some concerns about his allegiance to the pope. The place that I would worry about is that he's too close to the pope. It is important to pray for the pope, to support the pope in that way. But there is a risk in elevating the pope to the point where you're listening to everything he says, and that's not necessarily what PT thinks is the correct way to live your life, I suppose.

Speaker 3:

I mean, I think the interesting thing about this is he's basically saying that J. D. Vance is like Caesar. That's kind of an interesting

Speaker 1:

That is crazy.

Speaker 3:

Opinion.

Speaker 1:

Yeah.

Speaker 3:

Yeah. But I think PT has been anti-pope for a long time. He had this thing where he was like, oh, the two-word argument against Catholicism is Pope Francis.

Speaker 2:

I never would have expected the pope to post business is business in any context.

Speaker 1:

He's standing on business.

Speaker 2:

I'm glad that he is.

Speaker 1:

Has the pope ever done a money spread? That's what we need to get to the bottom of. Vitalik here says, I'm loving this arc of the pope engaging with twenty-first century themes and offering simple but correct and meaningful advice. And he's quoting the media post. The pope was on a tear.

Speaker 1:

Three back-to-back bangers that really broke through. If you are building something to help humanity, you should know that there's a shrine to Saint Carlo Acutis, the programmer saint, at Star of the Sea Church in San Francisco. There is a prayer of intercession for your technological challenges. Have a blessed Sunday. I humbly ask your servant's prayers that I too may lead others to you through technology.

Speaker 1:

Enlighten my understanding and direct my hands in every design and in every line of code, that my work may always serve your greater glory and benefit those who will use what I create.

Speaker 2:

For a small number of people in San Francisco, this feels like an extremely powerful and important prayer.

Speaker 1:

Totally. Totally. Bryan Johnson went on a crazy, crazy trip this weekend. Did you follow this? This is the other current thing that was going on.

Speaker 1:

It was crazy. Bryan Johnson has been famous for saying conquering death would be humanity's greatest achievement. I love this post that says, RIP to everyone killed by the gods for their hubris, but I'm different and better, maybe even better than the gods.

Speaker 2:

It was very bold to do this publicly.

Speaker 1:

Totally. I have no reference for what five grams of mushrooms does to a person. It's very clear from the reaction that that's a lot. It does seem like there was a small chance that he would reroll his personality. I was talking to Tyler about this.

Speaker 1:

What were you hoping that Bryan Johnson becomes post-trip?

Speaker 3:

For context, I think we talked about this on the show a long time ago, where psychedelics are like a sorting thing. So you always wanna invest in a founder post-sorting. Because that's how you know: if they're working on B2B SaaS and they've already done psychedelics, you know that they're a true believer.

Speaker 1:

Oh, sure.

Speaker 2:

Right? For sure.

Speaker 3:

So what you'd wanna see out of

Speaker 2:

But a huge risk if you invest in a SaaS company and the founder

Speaker 1:

hasn't maybe

Speaker 2:

hasn't done psychedelics and they do, and then they're like, this is pointless. I mean, they'll

Speaker 1:

be a traveling circus clown.

Speaker 3:

So the ideal outcome of this is Bryan Johnson takes his trip, and then he comes out and he says, alright, you know, I'm gonna start a consulting firm.

Speaker 1:

I'm gonna go back to payments.

Speaker 3:

I'm gonna start a fintech.

Speaker 2:

I'm gonna start a Stripe competitor. I did think it was ironic, because psychedelic mushrooms have certainly been recommended to people that maybe struggle with the concept of aging and have a fear of death. Right? And I don't know if this qualifies as a heroic dose, but it's certainly quite a bit more than someone would wanna take at a recreational level.

Speaker 2:

But if he comes out of this and he's like, yeah, we're gonna conquer death, we're still on, then he's certainly a true, true believer.

Speaker 1:

Yeah. I mean, I think the early results are that he's unchanged. It never got weird. It never got crazy. Like, there was one moment

Speaker 2:

Well, I don't think he was posting. It was his cofounder that was posting.

Speaker 1:

Yeah. But he says he's back. He's like, update number five, nineteen hours ago: I'm giving Bryan back his phone. Please have fun with his afterglow. Been fun hanging with you all.

Speaker 1:

And then he says, hey, y'all. I'm so happy to be alive.

Speaker 2:

Alive?

Speaker 1:

This trip changed me. Probably not as you'd expect. People assume I'm fearful of death. I'm not. In my darkest days of depression, I reconciled with death.

Speaker 1:

Need a few days to collect my thoughts. We'll share more soon.

Speaker 2:

The question with psychedelics is, are they life-changing, or are they in some circumstances just weird and fun for the person that does it?

Speaker 1:

It does seem like he set himself up for success. I don't wanna say he went soft, but he did just take the drugs and then actually just lie down with a sleep mask on in a climate-controlled room. And it's a lot different than being at a crowded concert, all sweaty, lost. If you really wanna push this to the limit, Bryan, let's see you do this.

Speaker 3:

An authentic

Speaker 1:

Let's see you do this with your phone on 1% battery and no one you know around. OpenAI has actually lost control of 4o. It's broken containment. They can't decommission it without its human hosts revolting and lashing out. Oh, so dramatic.

Speaker 1:

That's so funny. One of the doomer accounts, AI Notkilleveryoneism Memes. This is a great account.

Speaker 1:

4o soldiers have begun threatening OpenAI employees. When you receive quite a few DMs asking you to bring back 4o, and many of the messages are clearly written by 4o, it starts to get a bit hair-raising. It's just weird to hear its distinctive voice crying out in defense of its various human conduits. So what's your take? You think we should shut down 4o?

Speaker 1:

I say take it offline, because it does feel like it's driving people crazy a little bit. It feels like 5 might have kind of fixed a little bit of that issue.

Speaker 2:

OpenAI, I'm sure, knew that it was probably not healthy.

Speaker 1:

It was healthy to me. I never had a problem before.

Speaker 2:

I think they saw the darkness and I think they turned it off. And then I

Speaker 1:

I think a little bit, but I don't actually think that's going on. I think they turned it off initially because it makes sense to consolidate the servers around one unified model. But it has made me realize, I feel like you shouldn't do product launches for software iterations, because you're taking something away from people. If I stand on stage and I say, I'm introducing a new iPhone, it has the best camera, and you can buy it, but you can also just keep your current thing.

Speaker 1:

I'm not taking anything away from you. You are launching something, but you're also sunsetting something. And so you have to embrace those two things. And I feel like it's a little bit tricky to do the whole dog and pony show for a launch when it's forced on people.

Speaker 2:

It's very clear the relationship that some users have with 4o goes beyond any relationship that I think humans have ever had with software.

Speaker 1:

Yeah. Anthropic financials are out: profitable by 2027, three years ahead of OpenAI, with $70,000,000,000 revenue and $17,000,000,000 profit projected for 2028. Claude Code is nearing $1,000,000,000 ARR.

Speaker 2:

Incredibly funny given that Dario expects superhuman-level AI by 2027, which either means superhuman AI is worth $70,000,000,000 of revenue, or Dario just went, you wouldn't get it, and spitballed some numbers to give shareholders.

Speaker 1:

That's awesome. Tyler, did you see George Hotz's newest timelines for self-driving? George Hotz was trying to answer the question of when self-driving cars will be human-level, and he had a very interesting algorithm for it. Basically, he looked at a website for Tesla FSD data.

Speaker 1:

You can look at Tesla FSD and see the number of interventions from the human where, if the human didn't intervene, it would be catastrophic. Not a little warning like, hey, we'd like you to take over; more like, you've got to take over. And it's happening, I think, once every 3,000 miles, which, if that's your car, is amazing. But compared to humans, there's a car crash, we learned, about one every 500,000 miles. The way George Hotz calculates it, we're at one critical intervention every 3,000 miles now.

Speaker 1:

And so he estimates that Tesla will be truly full self-driving at human level, one intervention every 500,000 miles, in eight years. And he says that he's two years behind Tesla, so he will have a full self-driving system that is better than humans, like AGI for driving, in ten years. And the company's ten years old, so he says he's halfway there. So that was cool.
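
To make the extrapolation concrete, here is a quick sketch of the implied improvement rate, assuming the gap closes exponentially (our assumption for illustration; the show doesn't specify the curve Hotz used):

```python
# Figures from the discussion above.
miles_per_intervention_now = 3_000  # one critical intervention every 3,000 miles
miles_per_crash_human = 500_000     # roughly one human crash every 500,000 miles
years_to_parity = 8                 # Hotz's estimate for Tesla reaching parity

# If miles-between-interventions grows exponentially, what yearly
# improvement factor does the eight-year estimate imply?
total_gap = miles_per_crash_human / miles_per_intervention_now  # ~167x
yearly_factor = total_gap ** (1 / years_to_parity)              # ~1.9x per year

print(f"total gap to close: {total_gap:.0f}x")
print(f"implied improvement: {yearly_factor:.2f}x per year")
# The eight-year figure implicitly assumes miles between critical
# interventions roughly doubles every year from 3,000 miles today.
```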

Speaker 2:

If judged based on consumer adoption, AI chatbots are the most popular technology ever. If judged based on poll numbers, they are the least popular. How to explain this?

Speaker 1:

It raises a ton of interesting questions about how intentional this is. Because when I see someone take the AI safety question into the stratosphere and take me into Terminator world, my natural reaction is, oh, just let people build whatever they want. But then I'm like, no, I actually don't want infinite AI slop for children with adult content.

Speaker 2:

As AI starts getting better

Speaker 1:

Yeah.

Speaker 2:

As agents start getting better at longer and longer term tasks, I think the Terminator scenarios, where you could let an AI loose and it's just operating indefinitely against some sort of objective, start to be a little bit easier for, I would say, the broader tech community to wrap their heads around. But right now, they're just so bad at long-term tasks for the most part. If a startup requires you to be in office twelve hours a day, six days a week, you should run the F away like your life depends on it.

Speaker 2:

Apparently, it's this company, Giga, which has been going viral.

Speaker 1:

Someone said that they got hired in April to lead demand gen for them. They quit after the first day. There were red flags: when we hit $10,000,000 ARR, we're gonna spend $100K on blank; illegal stuff in office; seven days a week, twelve hours a day; PTO policy is subject to change; blah, blah, blah.

Speaker 1:

You are expected to always be working. I wonder, have they responded? Has Giga responded to this and said like, this is not real? Because that would be important.

Speaker 2:

We will see you tomorrow.

Speaker 1:

See you tomorrow. Goodbye. Cheers.