TBPN is a live tech talk show hosted by John Coogan and Jordi Hays, streaming weekdays from 11–2 PT on X and YouTube, with full episodes posted to Spotify immediately after airing.
Described by The New York Times as “Silicon Valley’s newest obsession,” TBPN has interviewed Mark Zuckerberg, Sam Altman, Mark Cuban, and Satya Nadella. Diet TBPN delivers the best moments from each episode in under 30 minutes.
So I was nerding out about this Fed paper because when you told John Collison 80% of businesses are getting no value from AI, I'm glad he wasn't here in person because he was about to throw down.
Speaker 2:He was about to open up a can of
Speaker 1:It was about to be a bar fight in the Cheeky Pint pub. No, seriously, it was a great question, because I think we all agree that AI adoption is real, it's valuable, it's happening. But it is a very interesting statistic. And I think it's a mistake for tech people to dismiss this stat because of where it's coming from. Like, it's not coming from some doomer, anti-AI blogger who's going for clicks.
Speaker 1:Like, this is the National Bureau of Economic Research. This is a research paper that could be circulated, probably will be circulated, within the Fed. And it's already getting quoted by the New York Times in that dot-com bubble, AI bubble piece. And I'm thinking it through: this could be something where you see Fed policy or government legislation that's sort of mismatched with what is actually happening in reality. And so we should go through some of the stats to actually break this down, because the headline is 80% of firms reported that AI was having no impact on their productivity or employment.
Speaker 1:And that's actually like a misquote. What they mean by that is that it's not shaping their hiring plan yet. They actually are using AI. And so, basically, this stat comes from this survey from the National Bureau of Economic Research. And it's pretty interesting because a lot of the polls that you see online are online surveys.
Speaker 1:They run some digital ads and they say, are you a CFO of a company? We don't really care what company. We'll pay you $10 to take this quick survey.
Speaker 2:And what kind of people want to make $10 a week?
Speaker 1:A lot of liars. There's a lot of liars out there who say, I am absolutely a CFO and please send that Amazon gift card right my way. And so for this one, they actually did the work. They called up and ID verified and then also reality checked the position. So if you say, yeah, I'm the chief pirate officer.
Speaker 1:I'm the ninja hero, whatever. You got some fake title there, you're out of the survey. So they did some reality checking, and they pulled together 6,000 of these business leaders across firms that are domiciled in the US, UK, Germany, and Australia. The line from John Collison that has been sort of going viral, that was one he dropped on sources.
Speaker 1:I think he said it to us, too. It's a good line. No one wants a refund on their tokens. Everyone is using AI. Their spend is increasing.
Speaker 2:Although I'm sure some CEOs heard that and thought, I kind of do want
Speaker 1:I would love a refund. I'd love a refund.
Speaker 2:I had one team member go absolutely haywire and spend $50.
Speaker 1:He one-shotted it. He claims that he rebuilt our entire ERP, but I fired it up and it didn't even have HTTPS. What's going on?
Speaker 2:The Mac Mini wasn't even plugged in.
Speaker 1:Yeah. The Mac Mini wasn't even plugged in. He was just chatting. But, clearly, there is a disconnect. Like, the Stripe data is very real. The value creation is very real.
Speaker 1:The revenue is very real at the labs. But when just random Joe Schmo CFOs and CEOs get a call from the Feds, they say, like, yeah, we're not really getting that much value out of AI. And so you need to dig into the questions. There are actually four key findings. The one headline that the New York Times is pushing is this 80% number: 80% report little or no impact on employment or productivity.
Speaker 1:But there's actually a bunch of positive signals. There's a bunch of mixed signals in here. So 70% of firms actively use AI, particularly younger, more productive firms. Second, while over two-thirds of top executives regularly use AI, their average use is only 1.5 hours per week, and one-fourth of executives report no AI use at all.
Speaker 2:Why would I need that? I have a telephone.
Speaker 1:The last major finding that we should touch on is firms predict sizable impacts over the next three years, forecasting AI will boost productivity. Sizable impacts. A productivity increase of 1.4%, which is very sizable if you're an economic researcher, but it's not particularly sizable if you're in, like, the fast-takeoff scenario. Measuring AI adoption is a mess.
Speaker 1:Many people use AI without even knowing that they're using AI, because it's buried deep in SaaS products that they already daily-drive. Like, say I run a coffee shop, and I'm using Toast for payment processing. There are probably some AI features in there already. And when you go to type in, okay, we're adding a new cinnamon roll to the menu, there's probably a button now that just says, do you want to just generate an image of a cinnamon roll? You could still upload one.
Speaker 1:That's probably a feature that already exists. But, like, we can also just generate one for you, and you can probably click that. But you're not, like, oh yeah, I'm an AI power user, just because you happen to use Toast and Toast happened to have implemented some gen AI feature that you haven't really dug into yet. So some AI isn't even detectable. You could be talking to a customer support agent on the phone that is AI-generated and not be able to tell.
Speaker 1:We talked about that airline interaction that got something like 100,000 likes.
Speaker 2:Grace, the woman that had the interaction, came into the chat yesterday
Speaker 1:Oh, really?
Speaker 2:And said it was real.
Speaker 1:Yeah, it was real. Yes. She had to go. Still, think about it: she's clearly on X, in tech, like, very AI-aware. There are probably tons of people out there that are saying, oh, yeah. My job, you know, every once in a while I have to call this service. And now the person that picks up is, like, responding pretty quickly. But they haven't noticed. They haven't noticed that they're actually interacting with AI or using AI in some capacity.
Speaker 2:Yeah. I still think there's room for a research firm focused entirely on diffusion. So if you had a group of 10 to 20 people that were spending all their time talking to business owners and executives, operators, and getting a sense of how they're actually using this stuff, I think you could put together some really compelling reports around it that would be pretty useful to everyone from AI companies to Wall Street.
Speaker 1:Yeah. AdoptionMax, after ClusterMax and InferenceMax. They had to rename it. Apparently, SemiAnalysis can't use Max for some reason. So InferenceMax is now InferenceX.
Speaker 1:And everyone was saying, you need to just change it to InferenceMock, which would have been amazing. But InferenceX obviously has a much more professional tone to it. What does it mean to actually adopt AI? That's very vague. This paper defines it pretty broadly.
Speaker 1:So machine learning for data processing. That doesn't even necessarily mean LLMs. That just means ML, which has been around for a very long time. Text generation using LLMs, that's what we think of as ChatGPT. Visual content creation, so diffusion models. But also robotics and autonomous vehicles.
Speaker 1:And there's a category just for other. And firms can select multiple. And so if you selected yes on any of those, you go in the bucket of AI adopter. And 78% of firms in the United States said yes, they are using AI by this definition. And you can also dig in further.
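The bucketing rule described here, where any selected category puts a firm in the adopter column, can be sketched in a few lines; the category names below paraphrase the segment and are not the paper's exact labels:

```python
# Illustrative sketch of the survey's bucketing rule: a firm counts as an
# "AI adopter" if it reports using any category. Category names paraphrase
# the segment; the paper's exact labels may differ.
ADOPTION_CATEGORIES = {
    "ml_data_processing",       # machine learning for data processing
    "llm_text_generation",      # text generation using LLMs (ChatGPT-style)
    "visual_content_creation",  # diffusion models
    "robotics_autonomous",      # robotics and autonomous vehicles
    "other",
}

def is_ai_adopter(selected: set) -> bool:
    """True if the firm selected at least one AI category."""
    return bool(selected & ADOPTION_CATEGORIES)

print(is_ai_adopter({"llm_text_generation"}))  # True
print(is_ai_adopter(set()))                    # False
```

The any-category rule explains how the adopter share can run well ahead of any single use case, like the 41% LLM text-generation figure discussed next.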
Speaker 1:So text generation using LLMs is the single most common use case, at about 41% of firms. So flip that around: 59% of firms aren't even using LLMs for text generation or proofreading. But again, there are a lot of companies where it's like, yeah, we don't generate a lot of text. Across the four countries that were surveyed, 69% of firms in total said they currently use AI.
Speaker 1:I think Australia was behind a little bit, dragging that down. Only 75% of firms expect to be using AI technology sometime over the next three years.
Speaker 2:Tyler's gonna have a heart attack.
Speaker 1:We're gonna bump that up to 75%. And this is like this is weird data. And I you can jump in with your pushback whatever you want, but my point is not that they're right. Like I think that they're wrong to predict this. I think that the AI adoption will be very steep and very dramatic.
Speaker 1:But I just think it's important to recognize that like this is a paper that people will be citing. This is a paper that will shape policy. This is a paper that reveals some misconception about the impact AI is having in firms.
Speaker 3:Yeah. I still think it's just so hard to, like, actually quantify this.
Speaker 1:The perception I think still does matter, because I think that there's a little bit of potential self-referentiality here, where firms see, oh, AI adoption's low, I don't need to go and figure out how to adopt it. And so that's something that I'm also keeping an eye on. The biggest thing was there was a massive divergence in the expected employment impact. So basically, 3% of firms still expect no impact from AI. And that just completely goes against everything everyone's saying in Silicon Valley.
Speaker 1:So there's still a lot of optimism among managers that AI will create more opportunities and new jobs even as some jobs become obsolete. My read on this data is that the tech talking point about 50% of white-collar work going away is not a broadly held belief among average business leaders. Now, they might be wrong. I do think AI progress is pacing way ahead of public expectations, and most managers are months behind when it comes to understanding frontier capabilities. The bigger takeaway for me is just that this survey may be somewhat self-reinforcing.
Speaker 1:I'll close by thinking about, like, the nature of polling and how you actually get stronger data on AI adoption. And I was thinking back to the presidential cycle. So during the presidential election, pollsters would call people sort of at random, and they would ask them, who are you voting for? And a lot of people would lie, or they wouldn't say, or they wouldn't pick up the phone if they were voting for a particular candidate. And so the polling numbers did not wind up matching the final election results very closely.
Speaker 1:And so there was the story about neighbor polling, which was more effective, where instead of calling someone and asking them, who are you voting for, the pollster calls and asks, who do you think your neighbors are voting for? Who's more popular in your community? Who's more popular on your city block, on your street? And that wound up sort of closing the gap between stated preference and revealed preference, and it wound up increasing accuracy.
Speaker 1:And so I'd like to see a survey of AI adoption using this technique. Anyway, we should watch a little bit of a clip from the State of the Union, because Donald Trump addressed some of the energy production question with regard to, like, how hyperscalers will be offsetting the impacts.
Speaker 4:Many Americans are also concerned that energy demand from AI data centers could unfairly drive up their electric utility bills. Tonight, I'm pleased to announce that I have negotiated the new ratepayer protection pledge. You know what that is? We're telling the major tech companies that they have the obligation to provide for their own power needs. They can build their own power plants as part of their factory so that no one's prices will go up.
Speaker 4:And in many cases, prices of electricity will go down for the community and very substantially down. This is a unique strategy never used in this country before. We have an old grid. It could never handle the kind of numbers, the amount of electricity that's needed. So I'm telling them they can build their own plant.
Speaker 4:They're gonna produce their own electricity. It will ensure the company's ability to get electricity while at the same time lowering prices of electricity substantially for you and could be very substantial for all of your cities and towns. You're gonna see some good things happen over the next number of years.
Speaker 1:What's your reaction to that?
Speaker 2:I think it's a good start. I don't know that it will quell any of the fears around data centers, just given that people kind of see the potential for this massive structure going up. They have so much fear about it. And, again, I think it's clearly gonna be necessary to continue to build data centers in heavily populated areas.
Speaker 1:How would you rank the fears currently? Because I've put my energy bill goes up, and that puts pressure on my income and my ability to live my life, at pretty much the top. And then the water thing felt secondary, but also important. And then there's the existential fear of, like, doom and apocalypse. There's also job displacement.
Speaker 1:And then there's also just like I don't like the slop.
Speaker 2:I would rank the electricity bill going up first. It's pain today, there's fear, and it's easy to imagine. And then there's fear around the job loss narrative, and that is sort of secondary.
Speaker 2:And opposing a data center in your local area feels like a way to have some agency around that overall job loss concern.
Speaker 1:Yeah. AI is going to get blamed even if tariffs drive high unemployment. Like, if people lose their jobs, AI is going to be a scapegoat, and it's going to be used by executives to say
Speaker 2:It's the perfect scapegoat for executives and for people frustrated with the job market.
Speaker 1:Yeah, yeah. It's like, oh, my business isn't doing poorly right now. I'm laying off people because I'm getting so much benefit from AI. The stock should actually go up. We're more efficient.
Speaker 1:There's going to be a lot of that. But it does feel like it's a little bit early. Whereas, like, there are a lot of people that can just hold up their power bill and show you year-over-year increases. And if that goes away and people don't feel that anymore and they don't have that evidence to share, I think that take gets debunked pretty quickly.
Speaker 3:I would say I mostly disagree with the idea that, like, rising energy prices are the main reason to be against AI. Mhmm. The rational thing to do then is to say, okay, before you build a data center, you have to build a power plant. Yeah.
Speaker 3:So then my energy prices go down. Yeah. No one's doing that. If you look at the protests and stuff
Speaker 1:Yeah.
Speaker 3:Yeah. They're not saying, please build a power plant first. They're saying, oh, it's going to destroy the environment, or the water stuff, or you're going to take all the jobs.
Speaker 1:We need to send you to that New Brunswick, New Jersey protest. Build the nuclear power plant first.
Speaker 3:So I think it's much more on basically job loss, or, like, oh, the AI is stealing the IP of Disney or whatever.
Speaker 1:Anyway, happy NVIDIA Day to all who celebrate, except the bears. Forget them, says Tae Kim. He's getting fired up for NVIDIA earnings. It's gonna be a fun one today.
Speaker 2:Now, news. A record has been destroyed. This is tearing up the timeline.
Speaker 1:Tearing up the timeline.
Speaker 2:A new Guinness World Record, and I wanna ask John if you think this should actually count.
Speaker 1:What is this?
Speaker 2:This is a Chinese hypercar going for the fastest drift ever. I've never heard of this company. That is crazy. But here's the thing. He doesn't actually pull out of it.
Speaker 2:Does he just crash? He kind of just U-turns. It's like a really fast U-turn.
Speaker 1:I think this counts as a drift. That's definitely drifting.
Speaker 2:U-turning counts
Speaker 1:as a drift. If you saw that car going by, you'd be like, wow, that's drifting. Hyptec SSR, formerly Hyper SSR, a high-performance all-electric two-door supercar. I mean, this is crazy.
Speaker 1:This is out before the Tesla Roadster. We've never seen a two-door electric supercar like this. 1,225 horsepower, goes from zero to 60 in 1.9 seconds, and it set the Guinness World Record for the fastest electric car drift at 213 kilometers per hour, which is what?
Speaker 2:Really, really insane. I feel like you have to actually stay in the turn and not do a U-turn.
Speaker 1:What do you mean stay in the turn?
Speaker 3:Yeah. I don't think it counts.
Speaker 1:You don't think it counts?
Speaker 3:For what it's worth, I don't think that counts.
Speaker 2:When I think of drifting, it's you're drifting around a corner, around a turn. And if you were to drift and spin out during the drift, then that doesn't count. If somebody was doing that on a track, you'd be like, you didn't drift around the corner, you spun out.
Speaker 1:Yeah. Okay. Okay. Yeah. The top comment is fastest spin-out.
Speaker 1:That's a power slide at best. Gabe says fire whoever called this drifting. That's not drifting. That's losing control. Yes.
Speaker 1:The chat does not like the drift, the fake drift. Call the Guinness Book of World Records again. Reset completely.
Speaker 2:Stolen Drift Valor.
Speaker 1:Stolen Drift Valor.
Speaker 2:Palmer is sharing something from Compound, the research firm, from their annual meeting. They're showing dollars invested in the top 10 companies versus all the others as a percent of overall funding. So you can see there's just heavy, heavy, heavy concentration in a few names.
Speaker 2:I'd say overall this is
Speaker 1:Or is this Coatue?
Speaker 2:No. This is, oh, the source is Coatue. Coatue is part of it.
Speaker 1:Okay.
Speaker 2:They are part of, I would say, driving this data.
Speaker 1:Part of the problem? Part of the opportunity?
Speaker 2:I mean, so much of this is about the AI labs just raising more money than any private companies ever have.
Speaker 1:$200 billion. Venture as a class in a good year will do like $400 billion. And across OpenAI at $100 billion, $30 billion for Anthropic, $20 billion for xAI, then you have a bunch of neolabs all picking up $1 billion each. Like, you very quickly get to a few companies raising half of all the money, and that's shown here. It's an incredible amount of concentration. I think a lot of it is due to companies staying private this long.
Speaker 1:What was Bill Gurley saying? He was saying Amazon went public sub $1 billion. When Facebook went public at, like, $60 billion, it was like, wow, crazy. They waited way too long. And now multiple trillion-dollar companies are still private, which is just an incredible capital sink. So I don't know.
Speaker 1:Should you even put those in the same bucket? Are these even venture bets at this point? If any venture capital fund is putting that in their venture bucket at this point, it feels ridiculous compared to growth stage. I mean, you're bigger than probably 90% of the S&P.
Speaker 1:It's a completely different business.
Speaker 2:Some kind of relevant data. We're about to witness three of the largest IPOs in history. SpaceX is targeting $1.5 trillion. OpenAI aims for $1 trillion. Anthropic is valued at $380 billion.
Speaker 2:Combined, they're at $2.9 trillion in potential market cap. The scale is unprecedented, but the real problem isn't the market cap. It's the float. Typical IPOs offer 15 to 25% of their shares to the public markets. This creates enough liquidity for price discovery while allowing founders and early investors to maintain control.
Speaker 2:Facebook floated 15% at the $60 billion that you mentioned and actually traded down pretty much immediately. Right? Google floated 19%. Alibaba floated 15%. At a 15% float, here's what these three IPOs would require.
Speaker 2:SpaceX would be $300 billion, or $225 billion. OpenAI would be $150 billion. Anthropic would be $57 billion.
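The float arithmetic in this passage can be checked with a quick sketch, assuming a 15% float (the low end of the typical range) against the valuations quoted in the segment:

```python
# Implied IPO float sizes at a 15% float, the low end of the typical
# 15-25% range. Valuations are the figures quoted in the segment.
valuations_usd = {
    "SpaceX": 1.5e12,    # targeting ~$1.5 trillion
    "OpenAI": 1.0e12,    # aiming for ~$1 trillion
    "Anthropic": 380e9,  # valued at ~$380 billion
}

FLOAT_FRACTION = 0.15

for company, valuation in valuations_usd.items():
    print(f"{company}: ${valuation * FLOAT_FRACTION / 1e9:.0f}B float")

combined = sum(valuations_usd.values())
print(f"Combined potential market cap: ${combined / 1e12:.1f}T")
```

At 15% this reproduces the $225 billion, $150 billion, and $57 billion figures; the $300 billion SpaceX number corresponds to a 20% float.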
Speaker 1:It's a lot of smackaroos.
Speaker 2:He was, yeah. A lot of dollars. He was comparing that to Saudi Aramco, Alibaba, and SoftBank at IPO. I believe Saudi Aramco raised $29 billion at a $1.7 trillion market cap. So he's making the case.
Speaker 2:You can't really model how the public markets will absorb these companies off of Saudi Aramco. Yeah. Even though from a top-line market cap standpoint, it is a good proxy. We'll see what the labs end up doing. They are obviously wildly capital-intensive businesses. And you can imagine they raise quite a bit more than the Aramcos or the Alibabas.
Speaker 1:Saudi Aramco was such a wild ride. I feel like they were trying to IPO for, like, a
Speaker 2:The San Francisco company?
Speaker 1:It is, yeah. Founded in California. Like, I remember hearing Saudi Aramco IPO rumors in, like, 2015. I think it actually kicked off in 2016.
Speaker 1:They finally got out in 2019. It was, I mean, the largest IPO ever. There were, like, a million investment banks attached, going all over the world marshaling capital.
Speaker 2:Anthropic dials back AI safety commitments. Competitive pressure prompts it to pivot away from a more cautious stance. Anthropic, the AI company known for its devotion to safety, is scaling back that commitment. The company said Tuesday it is softening its core safety policy to stay competitive with other AI labs. Anthropic previously paused development work on its models if they could be classified as dangerous.
Speaker 2:But it said it would end that practice if a comparable or superior model was released by a competitor. Given that they are at the frontier, that kind of opens them up to, I would say, perpetually kind of avoiding some of their prior policies.
Speaker 1:Sure, sure, sure.
Speaker 2:The changes are a dramatic shift from two and a half years ago, when the guardrails Anthropic published guiding the development and testing of its new models established the company as one of the most safety-conscious players in the space. Anthropic faces intense competition from rivals, which regularly release cutting-edge models. It's also locked in a battle with the Defense Department over how its Claude suite is used, after it told the Pentagon it couldn't be used for domestic surveillance or autonomous lethal activities. Mhmm. Anthropic said the safety policy changes are an update based on the speed of AI's development and a lack of federal AI regulations.
Speaker 2:Anthropic, which started as an AI safety research lab, has battled the Trump admin by advocating for state and federal rules on model transparency and guardrails. The admin has, of course, sought to curb states' ability to regulate AI. The obvious sort of criticism here would be that you were heavily focused on safety when you were far away from, I would say, leading in AI. And so switching up now that there's actually
Speaker 1:Switching up on their day ones.
Speaker 2:Switching up on their day ones.
Speaker 1:The safiest.
Speaker 2:Now that there's real competition.
Speaker 1:Are they forgetting where they're from?
Speaker 2:Feels a little self-serving.
Speaker 1:It's possible the money changed them.
Speaker 2:It's possible the money changed them. It's possible they always planned to switch up on their day ones.
Speaker 1:Maybe. Maybe.
Speaker 2:Once they got to the level they're at now.
Speaker 3:It could just be that they realized, like, alignment's pretty easy and we don't need to worry.
Speaker 2:What about this new study that's showing they were doing some war game simulation, and almost every model was choosing to drop nukes?
Speaker 1:Really? That's crazy. That's not good. I don't like that at all. The interesting impetus is, like, this line around the policy environment has shifted towards prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level.
Speaker 1:I still feel like there's a lack of communication around what safety orientation at the federal level means. Like, yes, okay, we'll pass the bill that says the AIs can't kill everyone. Like
Speaker 2:Well, yeah.
Speaker 1:Obviously, everyone supports that. But, like, what does it actually mean in practice?
Speaker 4:Because
Speaker 2:I think part of why it's Oh, that's
Speaker 1:dangerous means million things to different people.
Speaker 2:Like Yeah. Part part of why I think it's fascinating is they've been pushing for regulation as much regulation as possible Yeah. Seemingly.
Speaker 1:Yeah.
Speaker 2:Yeah. And they're kind of saying, hey, we're not getting what we want. So now we're not even gonna play by our own set of rules that we created for ourselves, because we just want to compete and win.
Speaker 1:Yeah. I mean, going back to the protesters, there are protesters that would say, like, training on intellectual property is dangerous. It's dangerous to my career as a writer. It's dangerous to my career as an illustrator. And so, like, this question of danger is just too vague, and no one has really been able to concretize it in a meaningful way.
Speaker 1:And I think that's why it's not getting traction on Capitol Hill.
Speaker 3:Yeah. I think there are just so many ways that you can define safety. Mhmm. Like, if you read Dario's essays, the thing he brings up over and over is, like, okay, we can't let AI get in the hands of an authoritarian government.
Speaker 3:Sure. So there's a real safety narrative that you could run, which is that, regardless of whether our models are pretty safe, they still need to be better than, like, China's, for example. Yeah. Because if China gets ahead of us Mhmm.
Speaker 3:Authoritarian government. Right? It's, like, very bad. For sure. So even if, you know, we're releasing models that are less safe than we would like, as long as they're better than China's, that's still, like, a pro-safety position, right?
Speaker 2:Except they'll just be distilled within six weeks.
Speaker 3:Yeah. But obviously, I would be very surprised if Anthropic keeps the same guardrails around, like, API access.
Speaker 1:Well, BucoCapital bloke has a solution. He says it's simple. We kill Claude.
Speaker 4:It's simple. We kill the Batman.
Speaker 2:Well, that was in regards to the SaaSpocalypse.
Speaker 1:Okay. Okay. Who knows? There's so many headlines and the timeline moves so quickly.
Speaker 2:Anthropic antagonizing the Department of War, the open source community, the entire media industry, the general population, other developers, other labs, foreign governments, and nearly every single person on Earth. What is the plan here? Sell Claude subscriptions to aliens?
Speaker 1:Edward says it ain't easy having principles.
Speaker 2:Hackers used Claude to steal 150 gigabytes of Mexican government data.
Speaker 1:It's crazy.
Speaker 2:They told Claude they're doing a bug bounty. Claude initially refused.
Speaker 3:Mhmm.
Speaker 2:A hacker just kept asking and managed to successfully steal some documents. Apparently, it's four state governments, 195 million taxpayer records, voter records, government credentials.
Speaker 1:Anthropic investigated the claims, disrupted the activity, and banned the accounts involved. The company feeds examples of malicious activity back into Claude to learn from. In this instance, the hacker was able to continuously probe Claude until they were able to jailbreak it. I was listening to someone talk about, like, the ability to jailbreak has generated me tens of thousands of dollars in profit. It was kind of like a hustle-mindset guy.
Speaker 1:And I was just laughing, because whatever you're doing after you jailbreak it is probably not good, so you should probably stop. But he was talking about, like, I can sell so many more courses now that I've jailbroken ChatGPT or whatever.
Speaker 2:Duran says not to worry. They'll hit usage limits before anything bad can happen. This was interesting. Yes. Rob Wiblin had a guest on his podcast.
Speaker 2:The guest was saying, every AI lab is working to make their AI helpful, harmless, and honest. The guest thinks this is a complete wrong turn, and that aligning AI to human values is actively dangerous. Today in nominative determinism: the guest's name is Max Harms.
Speaker 1:Max Harms. I feel like with that name, maybe you gotta go with Maxwell or something. I don't know. Perplexity Computer.
Speaker 2:Launch Perplexity Computer.
Speaker 1:What is Perplexity Computer? Let's pull up this video. Perplexity, the official account, says: Computer unifies every current AI capability into one system. It can research, design, code, deploy, and manage any project end to end.
Speaker 1:Okay. So it should be able to get a soundboard app in the App Store. Right? Manage any project, code, deploy, design, research. It should be able to do that from start to finish.
Speaker 1:One prompt: a soundboard in the App Store using the TBPN sound effects, which are available online, which we have up there. This is a good benchmark. Let's give it a try. And you can give it a try at Perplexity. Go check it out.
Speaker 2:I'm just very curious to see how this does. It feels like, again, going from consumer LLMs to a net new product that is objectively just as competitive. We'll see. Best sellers on Substack for finance are all doomers.
Speaker 1:We gotta do TBPN.
Speaker 2:Of course, Citrini is not in this. He's not a doomer.
Speaker 1:This is so obvious. He's bullish. Very AI bull. But he definitely shot to the top of virality and the top of the charts on the back of doom. I lived this on YouTube.
Speaker 1:Like, you put a negative title up and you just get 10 times more views. But they're lower quality, and so you gotta balance all that out. It's really hard to go viral with something like, everything's fine. Everything's going well. Don't worry.
Speaker 1:Don't click this because you're scared. Click this because everything's kind of the same as it always has been, and you're going to be fine. And say, hey, this stuff's cool, but it's not really going to change that much. It's going to be pretty incremental. Like, that is not getting clicks.
Speaker 1:You need to be telling this whole tale. You need to be spinning a yarn. It's a bull market in yarn spinning, folks. Get ready. Get out the yarn and start spinning.
Speaker 2:So, good news.
Speaker 1:Good news from?
Speaker 2:Get the gong.
Speaker 1:Get the gong. Okay. Hit me. Tell me.
Speaker 2:What am I ringing up? Steven over at Whop says, we're excited to announce that Tether, the largest stablecoin company in the world, is making a strategic investment of $200 million into Whop, valuing us at $1.6 billion. Our partnership with Tether marks a major step in building the world's largest Internet market. Tether is committed to enabling everyone in the world to participate in the new Internet economy. The way humans work and create value is changing fast.
Speaker 2:The world needs both an open Internet market, giving people a platform to conduct business, as well as a transparent payments network. Fast, cheap, global.
Speaker 1:Exactly. And so, yeah, fast, cheap, global.
Speaker 2:Mike Isaac is saying what we're all thinking. Yeah. Ready for this to be over. He's talking about the Warner Brothers Discovery and Netflix saga.
Speaker 1:It's in the paper every single day. Every single day. Paramount increases Warner bid. We get it. You guys wanna acquire this company.
Speaker 1:Mark Zuckerberg is planning a stablecoin comeback. They also have a banger deal with AMD. And if you head to the bar this weekend and you drink too much, you should just say that you were the victim of a distillation attack. That's the correct turn of phrase. Anyway, thank you for watching.
Speaker 1:Leave us five stars on Apple Podcasts and Spotify. Have a wonderful day.