TBPN

Sign up for TBPN's daily newsletter at TBPN.com

  • (01:52) - Citrini Memo Reactions
  • (09:35) - Is DoorDash Cooked?
  • (12:51) - 𝕏 Timeline Reactions
  • (20:59) - Kim K Enters Energy Drinks
  • (22:23) - 𝕏 Timeline Reactions
  • (29:01) - Jane Street Sued
  • (40:17) - Patrick Collison is the co-founder and CEO of Stripe, the global payments and financial infrastructure company powering millions of businesses. A longtime advocate for scientific and technological progress, he also co-founded the Arc Institute and writes frequently about economic growth, research, and innovation. John Collison is the co-founder and President of Stripe, where he leads strategy and product across payments, banking-as-a-service, and financial tooling. He previously co-founded Auctomatic, which was acquired in 2008, and focuses on expanding Stripe's global reach and financial infrastructure capabilities.
  • (57:46) - Bill Gurley, a general partner at Benchmark and author of "Runnin' Down a Dream," discusses his transition from a conventional tech job to venture capital, emphasizing the importance of pursuing one's passion to avoid career regret. He highlights six principles for a fulfilling career: chasing curiosity, honing one's craft, developing mentors, embracing peers, going where the action is, and giving back. Gurley also addresses the rapid public consciousness of AI advancements, noting its unprecedented speed compared to previous tech waves, and underscores the necessity for individuals to be hyper-curious and continuously learning to thrive in evolving industries.
  • (01:23:18) - 𝕏 Timeline Reactions
  • (01:29:16) - Ivan Zhao, co-founder and CEO of Notion, is a Chinese-Canadian entrepreneur with a background in cognitive science from the University of British Columbia. In the conversation, he discusses the launch of Notion's Custom Agents, autonomous AI teammates designed to handle repetitive tasks across various platforms, enhancing productivity and collaboration.
  • (01:45:09) - 𝕏 Timeline Reactions
  • (01:51:15) - Stefano Ermon, co-founder and CEO of Inception Labs, discusses his background in generative AI research at Stanford, including co-inventing diffusion models. He explains how Inception's diffusion-based language models generate text by refining entire sequences simultaneously, resulting in speeds over 1,000 tokens per second on NVIDIA GPUs. Ermon highlights the models' efficiency and scalability, making them ideal for latency-sensitive applications like coding autocomplete and voice agents.
  • (02:01:07) - James Cadwallader, co-founder and CEO of Profound, discusses the company's recent $96 million Series C funding round, which values the company at $1 billion. He highlights the launch of Profound Agents, customizable autonomous tools that enhance marketing efficiency by enabling brands to monitor and influence their representation across AI platforms. Cadwallader emphasizes the transformative impact of AI on brand discovery and the necessity for marketers to adapt to this evolving landscape.
  • (02:13:59) - Scott Wu, co-founder and CEO of Cognition AI, discusses the company's significant growth, noting that enterprise usage has more than doubled in the past six weeks, driven by the adoption of AI agents capable of handling end-to-end tasks. He highlights the latest launch's focus on enhancing user experience by addressing known frictions, introducing features like automated testing and improved integrations. Wu also emphasizes the importance of optimizing various aspects of the software engineering workflow, including testing and review processes, to further improve efficiency.
  • (02:30:39) - Rune Kvist, co-founder and CEO of the Artificial Intelligence Underwriting Company (AIUC), discusses the company's mission to underwrite superintelligence by developing standards and insurance products for AI agents. He highlights the challenges in insuring AI systems due to unpredictable risks and emphasizes the importance of creating a standardized framework to manage these uncertainties. Kvist also mentions AIUC's recent launch of the world's first insurance policy for AI agents, in collaboration with Eleven Labs, aiming to address concerns like hallucinations leading to financial losses and data leakage.
  • (02:39:02) - Reiner Pope, CEO and co-founder of MatX, discusses his company's development of high-throughput chips tailored for large language models, emphasizing the need for a from-scratch design to achieve optimal performance. He highlights the constraints in the current market, particularly the limited silicon wafer supply, and how MatX's approach aims to maximize throughput per dollar and per watt. Pope also addresses the challenges posed by existing technologies like Nvidia's CUDA, noting that while CUDA offers backward compatibility, it restricts hardware innovation, whereas MatX's specialized design offers greater efficiency for frontier labs willing to adapt their software.
  • (02:53:49) - Devansh Pandey, co-founder of Standard Intelligence, discusses the company's approach to pre-training computer use models by capturing 30fps video of user interactions, including screen recordings, key presses, and mouse movements, to create a comprehensive dataset for training general models capable of performing diverse computer tasks. He highlights the potential applications of these models, such as automating repetitive tasks like form filling and enhancing CAD design processes, and emphasizes the advantages of video-based training over text-based methods, noting that graphical user interfaces are designed for human interaction and that video data inherently captures temporal aspects of user behavior. Additionally, Pandey shares insights into the company's data collection methods, including the use of an application that records user screens and inputs, and discusses the potential for their models to generalize to various computer-based tasks, including applications in robotics and self-driving technology.

TBPN.com is made possible by:

Ramp - https://Ramp.com

AppLovin - https://axon.ai

Cisco - https://www.cisco.com

Cognition - https://cognition.ai

Console - https://console.com

CrowdStrike - https://crowdstrike.com

ElevenLabs - https://elevenlabs.io

Figma - https://figma.com

Fin - https://fin.ai

Gemini - https://gemini.google.com

Graphite - https://graphite.com

Gusto - https://gusto.com/tbpn

Kalshi - https://kalshi.com

Labelbox - https://labelbox.com

Lambda - https://lambda.ai

Linear - https://linear.app

MongoDB - https://mongodb.com

NYSE - https://nyse.com

Okta - https://www.okta.com

Phantom - https://phantom.com/cash

Plaid - https://plaid.com

Public - https://public.com

Railway - https://railway.com

Restream - https://restream.io

Sentry - https://sentry.io

Shopify - https://shopify.com/tbpn

Turbopuffer - https://turbopuffer.com

Vanta - https://vanta.com

Vibe - https://vibe.co


Follow TBPN: 
https://TBPN.com
https://x.com/tbpn
https://open.spotify.com/show/2L6WMqY3GUPCGBD0dX6p00?si=674252d53acf4231
https://podcasts.apple.com/us/podcast/technology-brothers/id1772360235
https://www.youtube.com/@TBPNLive

What is TBPN?

TBPN is a live tech talk show hosted by John Coogan and Jordi Hays, streaming weekdays from 11–2 PT on X and YouTube, with full episodes posted to Spotify immediately after airing.

Described by The New York Times as "Silicon Valley's newest obsession," TBPN has interviewed Mark Zuckerberg, Sam Altman, Mark Cuban, and Satya Nadella. Diet TBPN delivers the best moments from each episode in under 30 minutes.

Speaker 1:

You're watching TBPN. Today is Tuesday, 02/24/2026. We are live from the TBPN UltraDome, the temple of technology, the fortress of finance,

Speaker 2:

the capital of capital.

Speaker 1:

We're running down a dream today. We are surviving the Citrini apocalypse. Live to fight another day. A lot of chaos in the markets. A lot of reflection about the story behind the story, what happened.

Speaker 1:

We had a lot of fun debating the Citrini report. A lot of good stuff in there, some other kind of crazy stuff that sort of got everyone twisted in a knot. But

Speaker 2:

It didn't stop the markets

Speaker 1:

It did become the current thing, and I think a lot of people were talking about it. I mean, my feed was covered in Citrini stuff, but today is a new day, and there's a ton of new tech news. First, let me tell you about ramp.com. Time is money. Save both.

Speaker 1:

Easy-to-use corporate cards, bill pay, accounting, and a whole lot more. The GOATs. And then second, I wanna pull up the Linear lineup because, boy, do we have a show for you today, folks. We got the Collison brothers joining together at 11:40. Then we're going over to the man himself.

Speaker 1:

Bill Gurley, the height monger himself. He's six foot nine. He mogs me.

Speaker 3:

Oh, he mogged me?

Speaker 1:

It's over for me.

Speaker 3:

He mogged me.

Speaker 1:

That's why we said you can't come to the studio. You can't be seen next to John Coogan in person. You're staying remote.

Speaker 2:

The right pair of cowboy boots. Yeah. You might have it.

Speaker 1:

The right pair of lifts as well inside those cowboy boots. Then we got Ivan from Notion and a whole bunch more funding announcements during the lightning round. Rune, Reiner, Devansh, and a ton of other folks are joining. It's a crazy show.

Speaker 2:

James from Profound.

Speaker 1:

James is yeah. James.

Speaker 2:

New unicorn. Right.

Speaker 1:

Very fun. Well, Linear, of course, is the system for modern software development. 70% of enterprise workspaces on Linear are using agents. So, the story behind the Citrini story, I had some takeaways. My big update mentally was just that, you know, we are the sell side research now.

Speaker 1:

Basically, like,

Speaker 2:

We as in X?

Speaker 1:

Us. X and Substack. Like, independent researchers and analysts are really moving the markets. I feel like Ben Thompson has been a source of alpha for the market for a long time. He's been a source of investment theses, but he doesn't put a buy or sell rating on things.

Speaker 2:

And often it's long term.

Speaker 1:

Long term. Exactly.

Speaker 2:

Like, here's how strategies are converging.

Speaker 1:

Exactly. Here's how

Speaker 2:

the market is evolving. Exactly. Make your own decision.

Speaker 1:

Yes. And then I see, like, Semi Analysis is thinking more a couple years out, and they get held accountable: oh, you said Microsoft is gonna do this, they did that, blah blah blah. And they have a different model that they actually sell to hedge funds. And so they're very much in the research business. But what's interesting about Semi Analysis and a lot of these other independent analysis firms is that they're not sitting inside banks.

Speaker 1:

Like, we are very much used to sell side research being done by Morgan Stanley or Bank of America or Goldman Sachs. You get these equity research reports that your friends send you the PDFs for because you can't afford them. No, seriously, if you're working in the industry, get the sell side research report on your industry as fast as possible. It's very, very informative.

Speaker 1:

There's always good data in there. But, yeah, my big update was like, wow, okay, this is a viral post that completely broke containment. There's people making TikToks about it now.

Speaker 1:

And also, it's on the cover of the Wall

Speaker 2:

Street Journal. Yeah. Doom sells.

Speaker 1:

Doom sells. Yeah. Yeah.

Speaker 2:

He oversell.

Speaker 1:

That's the yeah. That's one of the narratives. And then there was this funny thing about, like, well, it's just one scenario. It's just one scenario. It's low probability.

Speaker 1:

And then

Speaker 2:

Let's pull up Eric's

Speaker 1:

Eric Seufert was like, yeah, it's just one scenario, but you only gave us one scenario, and you spent a hundred hours on that scenario. And so, like, what do you expect people to take away from it except, like, this is the one scenario that you think is most worth considering? But, of course, you know, it is possible that software is cooked, everything's cooked. And if there's a 5% chance that everything's cooked, yeah, the market should probably sell off by a couple percent.

Speaker 1:

And, you know, the market didn't even really sell off a couple percent. Like, a couple names went down a few percent. Some of them already popped back up. Markets, I think, doing pretty well today. Yeah.

Speaker 1:

Green on the Dow, green on the Nasdaq, and a lot of green on that ticker down there, which is, of course, provided by public.com. Investing for those who take it seriously. They got stocks, options, bonds, crypto, treasuries, and more with great customer service. And I'm also going to tell you about Okta. Okta helps you assign every AI agent a trusted identity so you get the power of AI without the risk. Secure every agent.

Speaker 1:

Secure any

Speaker 2:

Let's head over to Derek Thompson.

Speaker 1:

The Thompsonator.

Speaker 2:

He says, I really want people to see the story above the story here, which is that whether you're reading Citrini or listening to Jamie Dimon at a cocktail party, the conversation about AI is a marketplace of competing science fiction narratives. That's not to say I think the technology is a parlor trick. We covered this a couple weeks ago. He's feeling the AGI. That might be putting it a little too aggressively.

Speaker 2:

But certainly, he sees the potential impact. But Derek says, but rather that the level of uncertainty is so high and the quality and supply of real world, real time information about AI's macroeconomic effects so paltry that very serious conversations about AI are often more literary than genuinely analytical. And I think that observation sets up another important point. I feel lucky to be able to have conversations about the frontier of AI with executives and builders at Frontier Labs, economists, investors, and other AI folks at off the record dinners where important truths can theoretically be shared without risk, I can't emphasize enough that nobody knows anything

Speaker 1:

Except for us.

Speaker 2:

Is about as close to the reality here as three words are going to get you.

Speaker 4:

It's skill issue.

Speaker 2:

Nobody knows what's going to happen this year or next year or the year after that. There is no secret cigar filled room of people

Speaker 1:

Except for us.

Speaker 2:

Except the backroom. I think we do. Knowledge mogged. Cigars back there. No one has unique access to some authentic postcard from the future. When you drill down underneath the bluster, the boosterism, the fear, the anxiety, what's there at the bottom is genuine uncertainty, a vacuum into which storytelling is flooding.

Speaker 2:

The frontier labs don't really know what they're building exactly. But we do. And economists don't really know how to model the thing they claim they're building, but we do. I wish more people talked about and thought about this subject through that sort of lens. We're trying to model the economy wide effects of a technology whose properties the Frontier Labs can't even really describe yet.

Speaker 2:

Whatever you think of AI today, be prepared to change your mind soon. Yeah, this was something from yesterday. When I asked him why he thinks so many of the Internet predictions were deeply wrong, his answer was it's just a continuum. AI is just a continuum. And so, like, give it more time, basically.

Speaker 1:

Yeah. The rebuttal that I heard from him when you said that was, well, no. Like, look, all those predictions did come true. And it was like, yeah, but over twenty years, which is wildly different than two years, because the fear is on

Speaker 2:

He called it the twenty twenty

Speaker 1:

eight. Exactly.

Speaker 2:

That's crazy.

Speaker 1:

Exactly. And so if you tell me

Speaker 2:

Also, so many institutions just adapted.

Speaker 1:

Yeah. Like, you go to somebody and you say, hey, in twenty years, your job is gonna be radically different. They're like, I hope so. Like, I'm gonna be super bored doing the same thing for twenty years.

Speaker 2:

Don't worry. I'll be on the next

Speaker 1:

years, there's gonna be no industry that you're currently in. Everyone's gonna be like, oh, okay. Like, that's crazy. It's wildly different to be like, you have twenty years to adjust what you do. Like, if you're in Hollywood and you're like, okay.

Speaker 1:

I gotta learn digital filmmaking. I gotta learn how to integrate CGI. I gotta learn AI as a tool. That's way different than just, like, next year we will be one-shotting Hollywood films, and you will have no employment prospects whatsoever, not even as a prompter, because the labs will be prompting them themselves for AI videos. And maybe that's possible, but I have a feeling that it's just, like, it's not a year away.

Speaker 1:

It's not two years away. It's a little bit farther. Still in the ten year camp, still on the Kurzweil timelines. But interestingly, I'm impatient about it. Like, I want it to go faster. I want the acceleration.

Speaker 1:

I want the progress. I think the progress is good. So I'm not a doomer or pessimist. I'm just trying to grapple with the fact that I had to wait four years in between GPT-3 and models being good enough to not hallucinate. I had to wait another four years between, like, the early DALL-E experiments and, like, Nano Banana.

Speaker 1:

Like, it has felt like something happens, and I'm like, oh, wow. Okay. AI can generate images, but it's sort of sloppy. And then I wait, like, four years, and it's like, okay, it's a lot less sloppy, but it's, like, still not, like, dialed.

Speaker 1:

Like, it went from 90% to 99%. And then he said, okay, fine, while I'm here, the DoorDash example is just unbearably dumb. He is a believer in the power of DoorDash to weather the AI storm. I saw that the DoorDash CEO put out, like, an SEC letter to the investors.

Speaker 1:

Did you see that? Like, a full PDF filed with the SEC. Like, this is sort of guidance, but telling the investor base, like, here's what's not gonna change. Basically, disregard the Citrini report.

Speaker 2:

Disregard the sci-fi doomer slop.

Speaker 1:

He says, set aside for now the question of agents and aggregation. That's a post that is definitely in my mental queue. What is notable about the assertion is the total denial of any positive reason for DoorDash to exist and to be so successful. There's no awareness that DoorDash provided a massive consumer benefit, restaurant food at home, from scratch.

Speaker 1:

I like Keith Rabois's take that DoorDash is the I'm-hungry button on your phone. And then there's a whole bunch of crazy things that you have to do to make that button work. I ordered DoorDash last night. I felt like I did it in protest of the doom. I was like, I'm still supporting.

Speaker 1:

I'm riding with DoorDash. There's no awareness that DoorDash provided a massive consumer benefit from scratch, that DoorDash massively increased the addressable market for restaurants, or that DoorDash provided brand new jobs for millions of drivers. Instead, the article sort of takes it as given that DoorDash exists and that it is a rent extractor preying on weak willed humans and their habits. This is the exact sort of view taken by some of the most frustrating anti monopoly activists. All large successful tech companies exist not because they created a market with virtuous cycles solving all kinds of thorny problems along the way, but rather because the government didn't regulate hard enough.

Speaker 1:

I was thinking about how, in antitrust regulation, you know how they'll stop two firms from merging because that will create a monopoly, but they don't really have a tool in the tool chest for stopping a company from just shutting down and stopping competing. Like, if Xbox goes away, as people are predicting, doom around Xbox, PlayStation gets a lot more powerful, obviously. It's like the only game on the block. And so should the FTC have a hammer to be like, no, you got to lock in, Satya. You got to make more Xbox games.

Speaker 1:

You got to compete harder. We want you to get GTA six out exclusively on Xbox faster, yep, to put the screws to Sony.

Speaker 2:

You don't have time to spend three months

Speaker 5:

gaming? No.

Speaker 1:

No. Lock in. Give us a new Halo. Give us a new Modern Warfare. Give us a new Fable.

Speaker 1:

I don't know. What are the other great Xbox games throughout the years? I was never that big of an Xbox gamer.

Speaker 2:

Never owned an Xbox.

Speaker 1:

Never owned an Xbox. Wow. Oh, I'm a gamer. No. You never say that.

Speaker 1:

I was working.

Speaker 2:

I wasn't... I didn't have one. What was an Xbox back then?

Speaker 1:

It was like $300, $400 bucks. Yeah. Expensive. Anyway, let me tell you about Gemini 3.1 Pro. Gemini 3.1 Pro is here with a more capable baseline.

Speaker 1:

It's great for super complex tasks like visualizing difficult concepts, synthesizing data into a single view, or bringing creative projects to life. I was thinking about, like, the SaaSpocalypse in the context of the fact that

Speaker 2:

Today's launches?

Speaker 1:

No. There are a lot of launches. But

Speaker 2:

no, the disconnect in the SaaS apocalypse is that AI native SaaS is getting funded Yes. At an insane rate Yes. While you have these massive sell offs in the public markets.

Speaker 1:

Yeah, there is a little bit of a disconnect.

Speaker 2:

And there's private companies, yeah, that are getting lots and lots and lots of funding that, if they were public, would have traded down 20% over the last week or so. Yeah. You guys continue.

Speaker 1:

The thing I was thinking about was there's this whole idea that, like, you'll be able to build your own, like, CRM or your own ERP and vibe code it. And open source CRMs exist. Like, there's one called SuiteCRM. There's Odoo. There's ERPNext.

Speaker 1:

There's Plane for task management, OpenProject, Redmine. Like, there are open source alternatives to almost every piece of software. There's an open source Photoshop that people use on Linux, and they've never really gotten adoption. I used an open source forum software for a while, and very quickly, I called the person that was maintaining it and was like, I'll pay a thousand bucks to just, like, do this for me.

Speaker 1:

And then it became a managed service very quickly.

Speaker 2:

There's an open source Capybara simulator.

Speaker 1:

Is that open source? No? No. But it's interesting because, like, open source has always been this, like, pressure on SaaS, and it's always withstood that. And, like, yeah, maybe, like, if you can just prompt it and it feels like emailing your SaaS provider to reconfigure things, like, that is a real pressure.

Speaker 1:

But I think it's underrated that open source CRMs have existed for decades and never really taken off because there's something else that's valuable there. But Tyler has a

Speaker 4:

Yeah.

Speaker 1:

You think that's gonna, like, get slopped?

Speaker 6:

I think that's like mostly cope.

Speaker 1:

Like, the

Speaker 6:

the comp to open source stuff.

Speaker 1:

Why?

Speaker 6:

Because it's just annoying to maintain. So no one ever does it. And you're kind of just paying whoever to maintain it. Right?

Speaker 1:

Well, no. No. No. In open source, like, you don't need to pay someone to maintain it.

Speaker 1:

You can just use the open source

Speaker 6:

The open source version of the open source thing? Yeah. It's like you're basically paying someone to maintain it.

Speaker 1:

Yeah. Yeah. Host it. Yeah. Do all these things.

Speaker 1:

Make sure uptimes

Speaker 6:

But, like, models keep getting better, and they'll just, like, do

Speaker 1:

They'll do all that for you.

Speaker 6:

And it's like yeah. I think that's, like, very obvious.

Speaker 7:

Yeah. It it it

Speaker 1:

it is possible that you would just say, like, hey, run in a loop and just go around and fix everything. And if there's uptime or security patches, like, patch them immediately. Well, there's two

Speaker 2:

there's two narratives. Yeah. Right? There's the okay. Everyone will just vibe code everything in any department.

Speaker 2:

Yeah. You can just have an employee just make the software, tell the agent not to make mistakes. Yeah. And or tell the agent, hey, fix this thing. So that's that's one thing.

Speaker 2:

Yeah. And I feel like that is a maybe a part of the sell off. But the bigger reason for the sell off is maybe what Derek Thompson is talking about Mhmm. Which is that like the world is getting weirder.

Speaker 1:

Mhmm.

Speaker 2:

A lot of people are feeling the acceleration.

Speaker 1:

Mhmm.

Speaker 2:

And if you just don't know what the world looks like or what work looks like in five years, you wanna take some risk off. You're not willing to pay the same revenue multiple that you were Yeah. Three years ago.

Speaker 1:

Yeah. I do want to dig into that point that you mentioned earlier a little bit more, which is, like, you have the Tyler philosophy of, like, you could vibe code everything and the agents will be able to go around and maintain it, and everyone will have personalized software, and value accrues to the person using the software, but then also the lab providing the software, the inference. And then there's, like, the private markets boom right now in AI enabled software, where companies are saying, well, we were able to pull our road map way forward. We got to an MVP in a weekend, and we're able to ship features way faster. So when we onboard new clients and they ask for something, it's like, boom, and we get it done in a few days as opposed to a few weeks of engineering sprint.

Speaker 1:

And so the narrative is like, we're moving faster and we're creating, like, AI enabled products that couldn't exist otherwise. And it feels like both of those can't be true. So I don't know which way it will

Speaker 2:

Ben Thompson called out the real estate example. He took a segment out. This is from Citrini. Even places we thought insulated by the value of human relationships proved fragile. Real estate where buyers had tolerated I'm saying this in an extra dramatic voice.

Speaker 1:

Love it.

Speaker 2:

Where buyers had tolerated five to six percent commissions for decades because of information asymmetry between agent and consumer, crumbled once AI agents equipped with MLS access and decades of transaction data could replicate the knowledge base instantly. And then Ben Thompson says, the real estate example makes the exact opposite point the author thinks it does. The truth is that the Internet already obsoleted real estate agents in terms of information flow. You can go online right now and get a listing of every house for sale with pictures, its full history, etcetera.

Speaker 2:

There is no information asymmetry, but rather information abundance. The fact that real estate agents still exist despite that shift is actually one of the more compelling arguments that humans will be remarkably resourceful in terms of giving themselves jobs to do even in areas where they ought to be pointless.

Speaker 1:

Yeah. How hard is it to disintermediate a real estate agent? Does it happen on the buy side or the sell side? Like, if I find a place on Zillow and I go knock on the door, or I write them a letter and I say, hey, I wanna buy this, but I don't have a real estate license and I'm not using a realtor and I don't wanna pay a fee, will they be like, cool?

Speaker 2:

I think I mean, you can do it from either side.

Speaker 8:

Mhmm.

Speaker 2:

But I will just say the reason that you don't Yeah. Is that I'm in process.

Speaker 1:

Yeah.

Speaker 2:

I'm currently in escrow, yeah, on a property.

Speaker 1:

Yeah. Yeah.

Speaker 2:

And the guy representing me is gonna make a lot of money, but he's extremely helpful and he does a lot of real estate transactions. I don't do any. I mean, I've done one in my life prior to this. So, like, yeah, technically entrepreneurs could negotiate their own legal docs with Claude.

Speaker 1:

We have a buddy who got a real estate license, right? Didn't Spencer from Day Job?

Speaker 2:

Oh, they did. Yeah. So it's possible. I don't know if

Speaker 1:

he's doxing his real estate license, but

Speaker 2:

Well, yeah. And Spencer is probably a lot better now that he can use ChatGPT or Gemini or any of these models to do stuff like this. But that being said, you're paying for effectively therapy throughout the deal Mhmm. And like general guidance.

Speaker 1:

Yeah. I can't be your therapist.

Speaker 4:

I don't know.

Speaker 1:

Doable. I don't know. Tyler, will you ever use a real estate agent or will you ban them on principle? Go direct.

Speaker 6:

I mean, seems like... I think models can do this.

Speaker 1:

Okay. We'll see. Over what timeline? Like Total real estate commission

Speaker 6:

buying a house in the next two years.

Speaker 1:

Yeah. So houses will be bought over the next two years. So what will the fall in real estate commissions be over the next two years?

Speaker 6:

I think, like, okay. You're seeing a lot of these, like, big rounds at the big labs. All the researchers are buying houses. I think a lot of them are gonna try to do it without real estate agents.

Speaker 1:

You think so?

Speaker 6:

Yeah. I would. Interesting. I'm sure that someone's gonna write, like, a cool blog post about this.

Speaker 1:

Okay.

Speaker 2:

That's the thing though.

Speaker 1:

Cool blog post. That's the benchmark. A viral article. Okay. Not actual impact.

Speaker 6:

I'm sure it's, like, gonna work.

Speaker 1:

Tie it over the top.

Speaker 2:

We should have you buy a property

Speaker 6:

Yes.

Speaker 2:

Yourself.

Speaker 1:

Buy that town in Maine. That village. Yes. Get the village. Anyway, speaking of day job, they just did a fantastic ad campaign with none

Speaker 2:

Not just an ad campaign but an entire brand.

Speaker 1:

Let me tell you about AppLovin, profitable advertising made easy with axon.ai. Get access to over 1,000,000,000 daily active users and grow your business today. So Kim Kardashian, she has a product called Drink Update. And here is the photoshoot from none other than Day Job, some of our closest friends and folks who we worked with on the TBPN brand.

Speaker 2:

Looks very cool. It's crazy because I know another founder who has an energy drink company called Update.

Speaker 1:

Wait. Really?

Speaker 6:

Are they cooked now?

Speaker 2:

Well, I don't know who's cooked.

Speaker 1:

If Kim Kardashian is coming for your consumer product brand, I feel like you're in trouble. She's almost a lawyer. What if she almost sues you? What if she

Speaker 2:

Oh, yeah. She's close to being

Speaker 1:

What if she uses Claude to pass the bar, and then she sues you?

Speaker 2:

Oh, maybe. Maybe. I actually think this company is effectively just relaunching with Kim

Speaker 1:

There we go. Yes. Yes. Yes. Okay. This is a

Speaker 1:

Common

Speaker 2:

Okay. I got it. I met this founder a while ago. Okay. They have a special ingredient in here that's sort of a caffeine alternative.

Speaker 2:

It's called paraxanthine.

Speaker 3:

It's called

Speaker 2:

they've been building this for a while. I know it's available in

Speaker 1:

It has promethazine in it?

Speaker 2:

No. No lean.

Speaker 1:

No lean?

Speaker 2:

But paraxanthine is jitter-free and crash-free. They're saying you can have a free lunch, John.

Speaker 1:

I like it. I'm here.

Speaker 2:

You like the sound of that?

Speaker 1:

They should have called it Faust. I like a Faustian bargain. Anyway, Doug over at Fabricated Knowledge, who's coming on the show tomorrow, that's from SemiAnalysis. He said, okay. Finally read the Citrini piece.

Speaker 1:

No one knows the future, and I think that there's a lot of disclaimers, like, yeah, this is pretty speculative. But the core thrust of it is that information work itself has a real premium and pricing power embedded into it, and that one-way trade can go backwards really hard all at once. I seriously think there's a huge risk. And while prices go down, we just consume more. Prices going down one time 50%? We net consume less for a bit.

Speaker 1:

I have been and continue to be worried about deflation. Something I think is that selling tokens raw is probably bad, but selling solutions is probably really, really good. I think the problem is a good-enough model can kind of eat a solution no matter what. And so let's say Claude made Cowork go giga expensive, and it's $10K a seat a year. Great.

Speaker 1:

Less deflation. But a China low-end model massively eats that price. It's a race to the bottom. Anyways, great piece. Appreciated as always. This is the first post I've read where I've said, like, maybe it should be passed through an LLM.

Speaker 1:

Maybe that needed an em dash to make it more readable. I love you, Doug. I was stumbling over that. And we will close the Citrini mega cycle with the close of the software mega cycle, as has been predicted by Will Manidis. He says, the software mega cycle started with PayPal going public, and it will end with PayPal going private.

Speaker 1:

We will see how long that takes. PayPal could be public for another decade. Who knows? But it's certainly getting beat up in the public markets right now. Anyway, Byrne Hobart says, hearing that the latest Anthropic job offer is a negative $10,000,000 salary. You got to pay to work there.

Speaker 1:

But you get access to their upcoming blog posts and tweets twenty four hours in advance and permission to trade in your personal account with no restrictions. I don't see how any other labs have any talent left. Of course, he's joking, but very funny to think about the insider trading that could be happening based

Speaker 2:

on... If it came out that Anthropic was effectively day trading against the companies that they want to sell their models to Yeah. It would be basically over.

Speaker 1:

You'd create, like, a very anti-Anthropic alliance, for sure, from the business community and potentially the government as well. Anyway, you don't wanna be in hot water like that. You wanna be using Turbopuffer: serverless vector and full-text search built from first principles on object storage. Fast, 10x cheaper, extremely scalable.

Speaker 2:

Stacy, who's been on the show before says, telling my kids that if they don't clean their rooms, the trainee will come for them.

Speaker 1:

Dangerous stuff. Dangerous stuff. Anyway, this is like a lunar landing, but for a business and technology podcast. Oh, Matt Slotnick is sharing the news: Salesforce chair and CEO Marc Benioff to discuss Q4 and full-year results on TBPN; company to debut evolved earnings show format. We're doing it with Salesforce.

Speaker 2:

He's putting on a show.

Speaker 1:

We're so excited for this. Anyway, another big company announcement news. There's a lot of announcements. You may have missed this one. You may have been, oh, Bill Gurley's book launched.

Speaker 1:

Oh, Stripe announced a massive fundraising round. Oh, Profound is announcing a funding round. Well, there's bigger news, and that's that McDonald's just launched the biggest burger ever, the Big Arch.

Speaker 2:

Big Arch.

Speaker 1:

It finally arrives in The United States.

Speaker 2:

To me, I'm thinking... How did they not have this burger the whole time?

Speaker 1:

How have they not done this before? I feel like that's been more of Jack in the Box's wheelhouse. It's like the quarter pounder, the $6 burger. That was a thing. That was a campaign for a while, before Tyler's time, I'm sure.

Speaker 1:

But the $6 burger was something that you'd

Speaker 2:

has not been a thing the entire time Tyler's been alive.

Speaker 1:

No. He doesn't... Inflation has come for the burger. The $6 burger was an ad that you would see right in between ads for different Xbox games, on the original Xbox for sure.

Speaker 2:

Congrats, Tyler. There was a time before you were born, when I was just a boy Mhmm. My parents would give me, like, $10, and they'd be like, that should be worth two meals. So make it last.

Speaker 1:

Make it last. Yeah. Happy meals.

Speaker 2:

We lost that. We lost that.

Speaker 1:

Good times. Let me tell you about Cisco: critical infrastructure for the AI era. Unlock seamless real-time experiences and new value with Cisco. This is big. This is big for podcasters.

Speaker 2:

This is arguably a bigger announcement than

Speaker 1:

the big arch

Speaker 7:

Yeah.

Speaker 2:

Which is that Supreme has come out and launched an official Shure MV7 microphone.

Speaker 1:

You've been waiting for it.

Speaker 6:

You've been waiting for

Speaker 1:

The hypebeast microphone. Now can we get a Chrome Hearts RE20 from Electro-Voice? Isn't that what this is? This is the RE20.

Speaker 1:

Chrome Hearts RE20. Because, you know, the MV7, if you don't know your podcast mics, is a more consumer-focused, more prosumer-focused microphone. Shure has been riding this aura from the SM7B. That's the one that Joe Rogan uses. It was also, I believe, the microphone that was used to record Thriller.

Speaker 1:

So Michael Jackson used it in the studio. So it has a lot of aura, a lot of lore. And so the most successful podcasters adopted it, and everyone was like, oh, we gotta go with the Shure SM7B. But the SM7B needs a lot of power. It needs a lot of gain.

Speaker 1:

And so, yeah, if you wanted to just, like, plug it into your computer, you needed this thing called the Cloudlifter. It required a whole bunch of configuration. It wasn't just plugging into the USB-C port. So Shure responded to the overwhelming demand for that iconic Shure look, that, you know, long cylinder, basically. And they came out with the MV7, which is a USB-C mic.

Speaker 1:

You can also plug it into an XLR cable, but you can plug it straight into your laptop, which is great for Zoom meetings and just easy, simple podcasting. But we've used these before, and people have complained about the audio quality. Jordan Schneider over at ChinaTalk actually told me directly. He was like, upgrade to the RE20s. They're better.

Speaker 1:

They sound better. You guys should do it. We did, and I think it's been good. But it completely opens the door for other brands to get in here. We need a Bottega.

Speaker 1:

We need Chrome Hearts.

Speaker 2:

A Richard Mille.

Speaker 1:

We need a Rick Owens RE20, for sure.

Speaker 2:

Jane Street accused of insider trading that helped collapse Terraform, or Terra Luna. The court-appointed administrator of Do Kwon's Terraform Labs alleged that Jane Street used nonpublic information about Terraform insiders to trade. We don't have to read this entire article. There were some snippets actually pulled out.

Speaker 1:

Zero Hedge has a little bit of the details here. The play behind the 2022 crypto winter: destroying Terraform by first depegging the token and destroying the ecosystem, then pretending it would rescue Terra while effectively soaking up what little value remained. And mixed response to this. Some people are calling it based. Some people say it rocks.

Speaker 1:

I guess they don't like crypto, but they love Jane Street. It's an odd take, but people are having fun with the timeline, like

Speaker 2:

Here's the thing. So the insider trading allegation: apparently they had a group chat. There was somebody at Jane Street who had previously worked at Terraform. Oh, wow. And so that was Okay.

Speaker 2:

Yeah. That individual at Jane Street was, like, talking with Do Kwon and the team. Mhmm. The only issue is it's a public blockchain. And so the allegation is that five minutes after the Terraform team pulled money out of one of the liquidity pools, Jane Street also pulled money out.

Speaker 2:

But theoretically, they could have had software that said, if any amount of liquidity is pulled out Yep. Like, you know, basically get out before there's kind of, like, a run on

Speaker 1:

Yeah.

Speaker 2:

The bank.

Speaker 1:

Yeah. It'll be interesting to see where this goes. But Jane Street, it's, like, endlessly fascinating because it's such a quiet organization. I mean, they do some tech talks and stuff, but many people don't fully understand all the strategies that are going on over there. So it's been a fun

Speaker 2:

They have a good podcast strategy, though.

Speaker 1:

They have a great podcast strategy. They advertise on Dwarkesh. But they also put out tech talks, and they bring guest lecturers to talk for, like, an hour. They did a great one about the custom hardware that they used to run some of their systems. That's very cool.

Speaker 1:

Highly recommended. You know what they should do? They should start streaming these on Restream. One livestream, 30 plus destinations. If Jane Street wants to multistream, they should go to restream.com.

Speaker 1:

Anthropic announces a new feature on Claude Max, which allows its users to get fit without going to the gym or taking GLP-1 shots, just prompting on their keyboards, and Planet Fitness is down 5% on the news.

Speaker 2:

And that is a joke. They're down due to their Q4 earnings. Yes. Conor McGregor is pretty excited about a new game. They've just got a game for everything now.

Speaker 1:

It's like a bull market in games right now. There's so

Speaker 2:

many This new game is called Capybara Simulator.

Speaker 1:

Okay.

Speaker 2:

A relaxing game where you become a capybara, explore the forest, and do nothing. It looks quite enjoyable. I think TBPN needs a game.

Speaker 1:

Yeah. We definitely need to build some sort of game. This is a

Speaker 2:

true. This is a lower lift than, like, a real-time strategy game

Speaker 1:

I think so.

Speaker 2:

That Schulte is trying to ship.

Speaker 1:

Yeah. We definitely should

Speaker 2:

But Conor McGregor says, take my money. I mean, clearly, there's demand for Capybara simulators.

Speaker 1:

38,000 likes. Yeah. We should move the goalposts. We need to be able to vibe code a game that's fun pretty quickly. I don't know what that means.

Speaker 1:

One hour.

Speaker 2:

You think you

Speaker 1:

could do it in an hour?

Speaker 6:

An hour is pretty

Speaker 1:

fast. Exactly.

Speaker 6:

If I have the Cerebras chip. Yeah. If I use the Webex on Spark. Yeah.

Speaker 1:

Yeah. I think that might be the solution. What what were the other simulators that we looked at? Data center simulator, and then there was another one we looked at that was funny. There were a few.

Speaker 1:

There's been so many of these games that have popped up

Speaker 2:

in the previous... What was the one we were talking about yesterday?

Speaker 1:

That was data center simulator.

Speaker 6:

Insider trading simulator.

Speaker 1:

Oh, insider trading simulator. That one's good. Yeah. There's definitely there's definitely a variety of these. Let me tell you about Gusto.

Speaker 1:

The unified platform for payroll, benefits, and HR, built to evolve with modern small and medium-sized businesses.

Speaker 2:

There is some controversy on the timeline.

Speaker 1:

Tell me about this.

Speaker 2:

Leading Report is censoring words that don't need to be censored. In this case, the word war. And I was thinking, why would they do this? Yes. But as I was reading over it the first time, I noticed that it makes you kind of pause and kind of think about, okay, what are they actually saying?

Speaker 2:

And then you're thinking, why would they censor that? And I think what they're doing is they're sort of hacking your attention to drive their posts up the algo because people are pausing

Speaker 1:

Clicking

Speaker 2:

on. Reading it instead of reading, like, quickly.

Speaker 1:

What does this mean?

Speaker 2:

That Yeah. Yeah. Yeah. That kind of thing.

Speaker 1:

So the original headline is: breaking, Representative AOC calls for no war with Iran. The first time I read this... They put a little dash where the a should be in war, w-a-r. It's w-dash-r. It sort of, like, rewired my brain, and I didn't see the no.

Speaker 1:

So it looked like AOC calls for war with Iran, because I kinda jumped ahead. It was hard to read, and that actually does, I think, increase the virality. I think you're onto something here. Nine Millimeter SNG agrees with you: news account that censors the word war.

Speaker 1:

You guys gotta stop. It is very, very odd. Especially because on X, that's certainly not a word that's censored at all, or downrated.

Speaker 2:

Yeah. If anything, they're gonna be like, let's reach as many people as we possibly can.

Speaker 1:

Yeah. But if you look at the comments on this post, people are not talking about a potential conflict with Iran. They're talking about not typing out war. So the top comment: why are you not typing out war? Censoring the word war?

Speaker 1:

What are we, in elementary school? Why are they subtracting r from w? What am I missing? You know? And people are, like, very confused about why they would do this, but that drives a bunch of engagement and virality.

Speaker 1:

So very, very odd scenario here.

Speaker 2:

Speaking of war, Musk Yes. xAI and the Pentagon reach a deal to deploy Grok in classified systems. If you loved Grok on the timeline

Speaker 1:

Mhmm.

Speaker 2:

You're gonna love him in our classified

Speaker 1:

system. I guess they did a deal with the government broadly, but that was probably for the unclassified systems. But now it's getting access to the classified systems. Any Terminator fans out there are gonna be having a great time with this news.

Speaker 2:

Yep. It's gonna be wild.

Speaker 9:

Let me

Speaker 1:

tell you about Lambda. Lambda is the superintelligence cloud, building AI supercomputers for training and inference that scale from one GPU to hundreds of thousands.

Speaker 2:

DeepSeek is responding to Distillgate Mhmm. And they are looking for a public relations harmony manager. Let's read.

Speaker 1:

One of the best job postings I've ever seen says Chris Paxton.

Speaker 2:

It's pretty interesting. They say: Hangzhou, ancient capital of the Wuyue Kingdom, where King Qian Liu bequeathed to his descendants the instruction, serve the Central Plains with grace. Do not cling to territory.

Speaker 1:

And every job posting should start like

Speaker 10:

this.

Speaker 2:

From this ground rose the... This sounds like a... Yeah,

Speaker 1:

it does.

Speaker 2:

From this ground rose the seeds of Song Dynasty civilization, the morning bells of Lingyin Temple, the rain falling on West Lake. And this is a job posting. In recent days, certain misunderstandings and noise have appeared in the external public sphere. We have noticed that large numbers of kind-hearted observers have spontaneously spoken on our behalf, for which we are genuinely grateful, while simultaneously feeling a degree of unease. We do not wish for anyone to suffer on our account, including those peers who currently find themselves navigating difficult public waters. In order to honor the legacy of Wuyue and the spirit of the Mahayana Bodhisattva Path, we are now recruiting a public relations harmony manager.

Speaker 2:

So clearly this has been translated from Mandarin into English, but it sounds pretty cool if you ask me.

Speaker 1:

Yeah. The Distillgate is going back and forth. Everyone's distilling everyone else. We distill you, they distill us. There was something about, I don't know how real this is, but when you ask Claude Sonnet 4.6 in Chinese, what model are you?

Speaker 1:

It responds in Chinese, I am deep seek. Is that real?

Speaker 6:

So I tried it in the chat model. It didn't work. Okay. Said it was, like, Sonnet 4.6. Okay.

Speaker 6:

Apparently, it might just be in the API.

Speaker 2:

Okay.

Speaker 6:

Maybe I should test it.

Speaker 1:

Yeah. It may also

Speaker 6:

it's unclear if it's, like just OpenRouter, but that Yeah. Makes

Speaker 1:

sense. I mean, Will Brown was making a great point about this, that there is distillation where you're aggressively trying to farm responses from the API for training data. But then there's also just crawling the web. Because if you just download every X article, you're probably gonna get a lot of Grok and GPT and Claude responses in there. And then that will just update your training corpus.

Speaker 1:

And so there's a whole bunch of different ways that you could just wind up with a bunch of training data that leads to this type of response. But I'm sure there'll be more back and forth, more legal debates over what's going on. There was some dust-up about... someone was able to extract 95.8% of Harry Potter and the Sorcerer's Stone from Claude Sonnet. At the same time, there's a question about, like, does this actually reduce sales of Harry Potter? Like, are there damages associated with this?

Speaker 1:

That would be sort of harder to prove. There's so many people have talked about so many different pieces of Harry Potter. It's not crazy to me that an LLM could just reconstitute that.

Speaker 2:

From the Internet?

Speaker 1:

Yeah. From the Internet. Now there should probably be, like, a harness in place that says, oh, this person's trying to just get me to give them a free book. Like, no. Send them a link to Amazon so they can buy it and maybe give me an affiliate fee.

Speaker 1:

Don't just give them the thing for free, because that's a violation of IP. But if you're being really tricky and you're trying to sneak out a whole bunch of different pieces one at a time and then reconstitute it, like, yeah, I'm not surprised this is possible. It's not, like, the worst thing ever. Anyway, we have the Collison brothers joining in just a few minutes. Are there any other timeline posts you wanna go through?

Speaker 1:

And while you look at that, let me tell you about fin.ai, the number one AI agent for customer service. If you want AI to handle your customer support, go to fin.ai. Where are we in

Speaker 2:

the Yeah.

Speaker 1:

Data acknowledgments.

Speaker 2:

Says, my son is asking me a lot of questions. It's a distillation attack, obviously. So do not let your children

Speaker 8:

I love it.

Speaker 3:

Do a

Speaker 2:

distillation attack. I love it. They could become like a mini version of you.

Speaker 1:

They could. They

Speaker 2:

could. Anyways, I believe we have our first guest.

Speaker 1:

We do.

Speaker 2:

So let's bring them on in.

Speaker 1:

We have John and Patrick Collison, the OGs, from Stripe. How are you guys doing?

Speaker 2:

What's going on? Greetings.

Speaker 1:

Welcome to the show. Thank you so much. This is huge. I went through YC. You guys were massively influential in my career, and it's a joy to speak to you today on such a big day.

Speaker 1:

But I'd love for you to kick it off with the actual news. What happened? Why are we talking to you today?

Speaker 11:

We had two announcements today. One is we're launching a tender offer for employees, and that, and kind of the valuation, everything tended to get a bunch of the headlines. The thing that was, honestly, more work was we released our annual letter, where every year we sum up all the trends that we're seeing on Stripe. And Stripe is growing a lot. We grew at 34% last year because the businesses on Stripe are growing a lot.

Speaker 11:

And there's just, as you guys know, there's a lot happening in tech right now. This is why we need TBPN.

Speaker 1:

This is

Speaker 11:

why we need a nonstop stream of everything going on because there is so much happening.

Speaker 2:

So Yeah. We'll move we'll move to twenty four hours eventually.

Speaker 1:

Eventually. Eventually. Exactly. I mean but I feel like there is a ton of AI noise and stories and drama, and we are, you know, never running out of stuff to talk about. But what are you actually seeing in the data?

Speaker 1:

Because there's always this disconnect between the market and the real economy. Like, people are still shopping in retail stores occasionally. What do you where is AI actually moving the needle?

Speaker 10:

Well, generally speaking, I would say from the Stripe data, it looks like the economy is in pretty good shape. And there's been, to say the least, some degree of volatility in markets over the last two years, and, you know, all sorts of different events and DeepSeek moments and what have you. But if you look at the actual real economy time series, if you look at what's actually happening substantively over the last two years... I mean, it's always hard to prognosticate the future, but over the last two years, things really seem to be in good shape. The thing that's really catching our attention

Speaker 2:

One second, because I'm just curious. Have you guys tried to think about... maybe the businesses are doing well on Stripe because they're kind of, like, forward-looking, extremely tapped in, working on the right things. And if you look at a bunch of legacy providers, you would see that actually there are a bunch of businesses out there that are slowing down, that maybe are feeling the effects of just overall consumer spending. Like, have you tried to kind of, like, break that out or understand that dynamic?

Speaker 10:

It's obviously hard to measure because we don't have that data. We only have our data. But I think there is some of that composition effect. And we see it, I guess, both in Stripe's data compared to, say, public earnings from others, where clearly the respective populations are performing somewhat differently. But I guess we also see it qualitatively in the conversations we're having with customers, where what tends to happen, say, for some incumbent is they built some business, they installed some system long before Stripe even existed.

Speaker 10:

Maybe there's some sense that, well, it's not broken, don't fix it. But then they decide, hey, we're gonna do something new. And when they're doing something new, they wanna use the best infrastructure that'll enable them to move the fastest and launch the most countries and support stablecoins and do things with AI and whatever, and then they tend to launch that on Stripe.

Speaker 10:

And so there is this qualitative sense that once a company decides to do something innovative, new, retool, what have you, they're more likely to come to Stripe.

Speaker 1:

Are you seeing overlap between stablecoin activity and AI activity? There's been sort of a new narrative that agents will use stablecoins, but I feel like agents can use legacy payment rails just fine. And then also, you can do really cool things with stablecoins that are not really AI-native necessarily. And so I'm wondering how much overlap there is there.

Speaker 11:

I would distinguish between how things work today Mhmm. And, how things will work in the future. In terms of how things work today, agents absolutely can. You know, a lot of people build with Stripe. You know, you can have a onetime use credit card that your agent can go out and spend.

Speaker 11:

Mhmm. But if you look at what's happening, there's lots of, you know, agents having to solve CAPTCHAs to, you know, be able to kinda do stuff on the wider web. Clearly, the web is not built for agents.

Speaker 1:

Mhmm.

Speaker 11:

And as a result, they have to get creative to actually do any real-world tasks, and that's true in economic activity as well. Where we think things will go is there will be a huge amount of agentic commerce. And, again, we're seeing a little bit of it today. We think there'll be a torrent of it, and that is what unites stablecoins and AI, because we think you're going to need blockchains, and better blockchains, honestly. I mean, this was our thinking behind incubating Tempo: you're going to need really high-throughput blockchains for the agents.

Speaker 1:

Can you take us through some of the historical technologies that led to growth in just Internet payments? I'm thinking about, like, mobile, social commerce, one-click checkout, Apple Pay. There's so many things when I think about the agentic commerce boom that's coming. It could be hooking a better version of Siri up, and, you know, ChatGPT rolling this out very aggressively, but also, you know, smart speakers, smart lamps, your watch. There's so many different pieces to unblock and unhobble the actual agents as they go about their day.

Speaker 10:

Well, can I answer a slightly different question, but then we can come back to that?

Speaker 1:

Yeah. Go ahead.

Speaker 10:

A point I just sorry. This is brother

Speaker 2:

We'll tell you the questions. You tell us your answers.

Speaker 10:

So you know how brothers are. And so I just don't wanna lose one point per the prior question about, you know, what we're seeing in the economy. Because, I mean, this is very arbitrary, obviously, but I feel like there's at least a reasonable chance that 2026 Q1 will be looked back upon as the first quarter of the singularity. Maybe in three years, in hindsight, that'll look completely delusional. I don't know. Mhmm. But what we're seeing... I mean, the kind of macroscopic picture of the Stripe user base and things overall looking pretty good and so forth, and the tumult not quite showing up.

Speaker 10:

But when we look at the cohorts, when we look at the businesses that signed up in 2023 and their progression and trajectory over the subsequent months, the businesses that signed up in 2024, and then the businesses that signed up in 2025, there's been a phase transition in 2025, where there are both more of them and, on a per-business basis, they are on average doing better. Which is really striking, because you might think, okay, well, there's this cavalcade of new lightweight, vibe-coded applications or something, but, you know, there's not really a lot of substance there. We're actually seeing both numbers move together. There are many more businesses getting started, and the average, the median business, is in fact performing better.

Speaker 10:

Mhmm. We're only

Speaker 1:

a couple of

Speaker 10:

weeks into 2026, but it looks tentatively like 2026 may plausibly be an acceleration even over that significant leap of 2025. So I don't know. I mean, we've had all sorts of dramatic AI inventions and innovations over the last couple of years. There's a bit of a question of, well, how and when should we think about how it'll translate to the economy. I would say, looking at real purchasing behavior on Stripe, 2025 and 2026 is when I feel like we're really starting to see it.

Speaker 2:

That's super interesting data. One, because there was some survey that came out yesterday, or maybe it was late last week, that asked a bunch of executives, are you getting any value out of AI? And 80% of them said no. But clearly, when you look at

Speaker 11:

Oh, come on. That's hogwash. Like, find me one executive who wants a refund on their tokens. Find me one executive who said, oh, yeah, we started, you know, augmenting our customer service with AI so people are more productive, but we're just gonna go back to doing it the old-fashioned way. Or, like, we're spinning our code by hand and, you know, we don't need any of this automated loom, you know, technology.

Speaker 11:

Yeah. It just like, we're here

Speaker 2:

for... Yeah. No. I'm not saying... I could pick out a bunch... No. I'm not saying I agree

Speaker 1:

with it. You're a pessimist.

Speaker 2:

No. I could pick out a bunch of reasons why it would be wrong. One reason it might be wrong is they're not in the weeds actually using the tools, and so they just think, like

Speaker 1:

They might not even be aware that they're using the tools because it's buried under

Speaker 2:

And they're not feeling the acceleration because they're... I wanted to ask how you guys think about incubations like Tempo when you look at Atlas and what Jeff and the team have done there. Do you think, even in your, I don't know, kind of most wild projection that you had early with Atlas, hey, maybe someday a quarter of the C corps in The United States could be built on this platform? Anybody would have said that was insane, and yet here we are.

Speaker 10:

Gosh, I'm not sure what to say, really, except we just try to pay a lot of attention to the... I mean, as you guys know, there's a lot of pain points that go into starting a company, and we just try to take them seriously. And then, you know, so much of these things is just a long obedience in the same direction. Like, Atlas is now this great overnight success, but we launched Atlas, I think, in

Speaker 1:

Overnight success.

Speaker 10:

2014, maybe 2015. And so, you know, ten years of compounding, and, yeah, now it's at some pretty meaningful scale. And, you know, look, I think Tempo will probably have the same shape. I mean, again, to this AI discussion and us sounding a bit untethered, I think the world is gonna need platforms that support millions of transactions per second, billions of transactions per second, which no payment rail or platform does today. But even in the success case, it's not gonna be an overnight thing. It's gonna be, you know, five, six, seven years, and then maybe we'll have conversations about how, you know, Tempo suddenly became an overnight success or something.

Speaker 10:

But done.

Speaker 11:

I think Patrick's a bit the fish in water who doesn't know things are wet. My framework would be: you can't get too MBA-brained about new products. You can't have your spreadsheet that's like, oh, the TAM is this, and just, like, reason about things in Yeah.

Speaker 2:

You should never say we want 1% of global GDP. Not even

Speaker 8:

not even No.

Speaker 11:

We never said that. Exactly.

Speaker 2:

You guys never... Wait. You guys never pitched that?

Speaker 10:

Look. We actually never thought about Stripe in GDP terms until one day we realized, oh, hang on.

Speaker 2:

Oh, I can see. You

Speaker 10:

you can

Speaker 2:

That's such an important lesson, because so many, like Yeah. How many pitch decks have you seen, everyone?

Speaker 1:

We're like Stripe.

Speaker 2:

Yeah. They're like, yeah. We just need 1% and it's kinda it's a meme.

Speaker 11:

You can go back in the Wayback Machine and find the early Stripe websites; we were very focused on payments for developers and making that experience good. But where I'm going is I think you have to reason in product specifics. And so, again, I think any MBA would have told you that the adjacency of, you know, incorporation makes no sense, it's not related to what's our right to win, all these things people say. Whereas you actually go talk to founders.

Speaker 11:

They're like, guys, this is the single biggest issue I run into starting my company. And similarly with Tempo: as we think about incubations, we're trying to solve a real problem here. We talked in the letter about Bridge having operational issues, not because of Bridge, but because of blockchain congestion

Speaker 1:

Mhmm.

Speaker 11:

Where, you know, you have blockchains used both for kind of meme coin trading and for serious real-world payments. And so we just want low-latency, high-throughput payments, and we're gonna need much higher throughput for the agents. But, anyway, I think you have to reason in very specific product terms.

Speaker 1:

Mhmm. What specific products are you excited about in the unhobbling of agentic commerce?

Speaker 11:

We laid out in the letter basically these levels of agentic commerce. Because I think, like everything in AI, people wanna sell a hype story, and so they, you know, talk about how the machines will buy everything without even consulting you. And that seems far off; people aren't actually that excited about that. Yeah.

Speaker 11:

You can start from just the basics of why are we filling out forms like that? You know, you were talking about the progression of commerce. Why can't I just send a link to ChatGPT and have it buy something for us? Or why can't I search beyond just a basic keyword search? And so a lot of the work Stripe is doing is building the infrastructure, working with all the big retailers that you would expect, the Etsys and Shopifys and Best Buys and Walmarts and folks like this, to make product catalogs viable within the AI apps.

Speaker 11:

And there's basically a ton of boring API and protocol and infrastructure work, which, you know, we love. That's our business. But people just want to be able to do shopping, do discovery, do purchases within the AI apps.

Speaker 10:

And maybe just more abstractly, you know, we've been in this kind of specific agentic commerce thing, and then there's the general question of how software will change because of agents. I've been thinking about it a bit: maybe software becomes a bit like pizza. That is to say, software historically has been created

Speaker 11:

Not like pizza, some would

Speaker 10:

say. Months, years beforehand

Speaker 1:

Okay.

Speaker 10:

And then, you know, freeze-dried, or whatever, rather than prepared at the moment of consumption. But actually, software should be like pizza, cooked right then and there at the moment of use. And so it's this quite fundamental shift where you don't want mass-produced, industrial-scale software. You want bespoke custom software made for you in that moment. And that's very fundamentally different.

Speaker 10:

Up until now, the economics of software have been conceived of as fixed cost, and then monetized as much as possible, which has these kind of winner-take-all dynamics. But once there are inference costs and custom creation involved, it really shifts. It's kind of the non-Walrasian software regime. I don't quite know where it goes, but I think it's gonna look very different.
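The cost shift being described here can be sketched as a toy model: classic software amortizes a one-time build cost across every use (marginal cost near zero), while software generated at the moment of use pays an inference cost every single time. All numbers and function names below are illustrative assumptions, not figures from the conversation.

```python
# Toy model of the shift described above: fixed-cost software vs.
# software "cooked at the moment of use" with a per-use inference cost.
# Numbers are purely illustrative assumptions.

def classic_cost_per_use(build_cost: float, total_uses: int) -> float:
    """Average cost per use when one fixed build cost is amortized
    over every use (the marginal cost of an extra copy is ~0)."""
    return build_cost / total_uses

def generated_cost_per_use(inference_cost_per_use: float) -> float:
    """Cost per use when software is generated on demand:
    every use pays the inference bill, so cost never amortizes away."""
    return inference_cost_per_use

# Example: a $2M build amortized over 100M uses is $0.02 per use;
# on-demand generation at $0.01 per use undercuts it.
classic = classic_cost_per_use(2_000_000, 100_000_000)   # 0.02
generated = generated_cost_per_use(0.01)                 # 0.01
print(f"classic={classic:.2f}, generated={generated:.2f}")
```

Under these toy numbers, on-demand generation wins whenever the per-use inference cost falls below the amortized build cost, which is why the zero-marginal-cost, winner-take-all logic stops applying once inference is in the loop.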

Speaker 1:

Last question. Pineapple on pizza, yes or no?

Speaker 10:

Ireland was big into pineapple on pizza. Ireland, not a big pineapple-growing country, I will concede, but a lot of pineapple on the pizza.

Speaker 11:

Good memories. A very large fraction of the banana market, don't forget. So Really? We punch above our weight in fruits that don't grow there.

Speaker 1:

There we go.

Speaker 8:

There we go.

Speaker 2:

I love the pizza. The round is exciting. Yeah. The the overall growth of volume is exciting, but we wanted to hit the gong for how many books you guys are selling. Oh, yeah.

Speaker 2:

Can you give us can you give us the the numbers there?

Speaker 1:

Scale of that operation.

Speaker 10:

Stripe Press just well, actually, we announced in the letter that we sold our millionth book. But in fact, since

Speaker 2:

Incredible.

Speaker 10:

No, look, books, we've actually now sold our one point one millionth book. Good for the next gong or two.

Speaker 10:

Great. We love books, and they're very AGI-proof.

Speaker 1:

Oh, yeah. No. Lindy. We've been a huge fan of so much of the Stripe Press catalog. I haven't read them all, but I'm collecting them one at a time, and I'm working through them.

Speaker 1:

Every time one drops, it's always a moment, and we love them. So thank you for everything you

Speaker 2:

Great to have you guys on, and congratulations to the whole team. No, congratulations to you guys.

Speaker 10:

Incredible. TBPN is an amazing startup, and it's super cool to see you guys grow. Built on Stripe. Built on Stripe. Serving what the streaming world and our sector needs.

Speaker 2:

Incorporated on Stripe. Built on Stripe. Our first our

Speaker 1:

first ad deal ever was a live read at a live conference. I think we charged $50, and I sent someone a Stripe link.

Speaker 11:

Oh, we're talking about the twenty twenty five cohort being the fastest ever.

Speaker 10:

Yes. Well, we'll have to have you on our internal Stripe show, so we'll follow up on that.

Speaker 1:

Great. Yeah. We'll talk to you soon. Have a great rest of your day. See you. Congratulations.

Speaker 2:

Thanks, Cheers.

Speaker 1:

Goodbye. Let me tell you about Graphite, code review for the age of AI. Graphite helps teams on GitHub ship higher-quality software faster. And I'm also gonna tell you about Shopify.

Speaker 1:

Shopify is the commerce platform that grows with your business and lets you sell in seconds online, in store, on mobile, on social, on marketplaces, and now with AI agents. And without further ado, we have Bill Gurley. He is the author of Runnin' Down a Dream. Bill, welcome to the show. Thank you so much for taking the time on a busy launch day.

Speaker 1:

Congratulations on the launch. You're doing

Speaker 8:

great. Busy launch day.

Speaker 2:

I yes. Yes. How many podcasts are you doing this week?

Speaker 8:

I can't imagine. It's some number beyond my comprehension.

Speaker 1:

Well, we appreciate you taking the time to come chat with us. Why Well,

Speaker 2:

Before we jump into everything, I gotta say somebody, I think it was a week or so ago, made a fake TBPN graphic that was pretty silly, and I just wanted you to know we didn't make that. It was somebody else. Okay. Okay.

Speaker 2:

I almost I almost emailed you about it.

Speaker 1:

But

Speaker 8:

No. I tried to jump in on the parody myself.

Speaker 2:

Yeah. You did, but then I was like, wait, does he I don't know.

Speaker 1:

Well, we did, early on when we were a little smaller and a little more loose with the jokes. We posted a picture of Bill at a basketball game as a spotted, and you replied and said, like, now the person next to me is the owner of the team. And the whole joke is, we know you. We don't know basketball like that. But we're doing the paparazzi thing.

Speaker 2:

Never heard of him.

Speaker 1:

But we're very excited to have you. Why the book now? What was the impetus for actually writing the book?

Speaker 8:

Yeah. Look. I think, you know, especially for a show like your own, I'm known as someone who spent twenty-five years in venture capital, and the book's not really about that, you know? So I developed a side passion project that started about eight years ago on this topic. It was at a time when I was reading a ton of biographies, and I noticed a through line between three different subjects, things they were doing that I felt most people weren't doing but could do. And I put it together.

Speaker 8:

I gave it as a presentation at my alma mater, where I got my MBA, and they put it online. A few people noticed; James Clear noticed. That was one of the things that kinda woke me up to the possibility. And as I began to hang up my boots in venture, which takes a while, I turned my attention to this, and it was something that meant a lot to me. I could have written a book on VC.

Speaker 8:

I don't know how many humans that could have possibly helped, a small fraction compared to what I hope this can do.

Speaker 1:

I mean, I think the projections are by 2030, there'll be more venture capitalists than people if the trend continues. But it is an interesting point.

Speaker 8:

Maybe I made a mistake.

Speaker 1:

I do feel like this is a book that you can read if you're a venture capitalist, insider, startup founder, and be like, okay, I'm seeing the world from Bill's perspective, that's helpful. But I could also give this to someone who's never heard of you or venture capital, or doesn't know what a SAFE note is, and they could get value out of it. And I'm interested to hear your thoughts on the translation that's happening right now around AI narratives as they break into the public consciousness.

Speaker 1:

We saw this with that viral X article, "Something Big Is Happening." I had that forwarded to me by family friends. I overheard someone in a restaurant talking about it who clearly is not, you know, an investor in an AI lab. They're just some random person, and they realize that there's something happening.

Speaker 1:

And I'm wondering about these transitions of communication, how what's happening in Silicon Valley is gonna have an impact, and how what you've seen in the past translates to average Americans.

Speaker 8:

So first of all, the venture capital community appropriately gets excited about these big tech waves, because they lead to disruption and they lead to kind of accelerated new wealth creation around the companies that break out.

Speaker 1:

Yeah.

Speaker 8:

And that's happened over and over and over in my career. And I don't remember one, I mean, if you take the mobile wave or the PC wave or client-server or SaaS, I don't remember any of those being thrown at the public consciousness this fast. And so I do think it's different this time on that front alone. That said, you know, we've had pretty high market caps for tech companies for a long time now, starting with the ZIRP period.

Speaker 8:

And you're getting to a place where, you know, anytime the market switches from half full to half empty, to a skeptic's mindset, we have had those moments. So maybe not driven by the wave, but we certainly have those moments. And it's all okay. It will always be okay. I think people freak out. Buffett says he's a net buyer of stocks. If people are intellectual and curious and hungry, they should be sharpening their pencils right now, trying to figure out where they wanna find entry prices on some of these companies.

Speaker 1:

Yeah. That makes sense. I mean, it feels like a lot of the book is about finding a career, and I feel like that will resonate specifically with people who are

Speaker 2:

Well, yeah. It resonates with me, because when John and I first met Yeah. We had both built some companies, we'd both invested in some companies, but we were trying to find our life's work.

Speaker 1:

Yeah.

Speaker 2:

And it's such a pain. That period where you're searching Mhmm. If you're a high-agency person, you like doing a lot of things Mhmm. It can be deeply painful, because you're like, I wanna be productive. I wanna be making the number go up, but I don't have a number right now.

Speaker 2:

And if you had asked either of us when we first met, hey, would you ever think about broadcast media? Would you ever think about being in front of a camera? John had made some YouTube videos, but it was just for fun. And if a network like CNBC had said, hey, would you guys consider hosting a show? We would have been like, yeah, honored, but no way.

Speaker 2:

You just never imagined. And so as somebody who, like, wants a lot of control over their life and their destiny, and feels like they have historically had control, that period of just searching is painful. And I feel like a lot of the book is helping people through that moment. So in some ways, when I got our copy, I was like, wow, I really wish I had this, you know, two

Speaker 8:

years ago. I'd like to go back to the phrase you used, high agency. I think one of the problems that has kind of evolved is that our common college pathway has actually become more restrictive, and I think there's less agency. Kids are being encouraged, they have to sign up for a major before they ever go to the college. They get stuck on these pathways, and there's not a lot of exploration. There's not a lot of search for creativity or obsession or the kind of thing that really gets you going.

Speaker 8:

And I think the journey you went on is perfectly fine. I think that's another thing: letting it be okay for people to bounce around and see what they can find, because once they latch on, and we have examples in the book where that doesn't happen till 40. Sometimes it's 30. I didn't become a venture capitalist until I was 30, and that was clearly my dream job. I mean, the first two stops were fine and interesting and building blocks towards that.

Speaker 8:

So I think that is part of the message, to get comfortable with that and give people permission to do that type of exploration.

Speaker 2:

Yeah. Enzo Ferrari, Estee Lauder, I think the Red Bull founder too, all were, I think, in their forties when they started their companies. And so there's this intense pressure in our industry and everywhere to figure out a job and then make your entire identity that job, and it's so constrictive.

Speaker 1:

Yeah. Yes. What are some

Speaker 8:

And I'll circle back to the first question, just about this AI stuff that's out there. I think there's this massive paradox where if you are not engaged at work, if you don't love what you do, you go home and you don't try and improve on your own time, AI feels very threatening. For high-agency people who are on their own custom career path, which I hope this book encourages more and more people to be on, AI is like a superpower. You can learn constantly.

Speaker 8:

Like, you can find people who you should be connecting with. You can have it do things for you so that you're operating with the power of more than one person as you move forward. And I just think that's quite an ironic paradox: for certain people, this is the best of times. There's never, ever in the history of the world been a better time to self-learn. Mhmm. Like, it is all out there at your fingertips.

Speaker 8:

It's like magic. But you

Speaker 2:

Anyone can ask a dumb question at any point, all day long, and you don't have to be embarrassed about it. And I think that is underrated today. Generally, you know, there are no dumb questions, and yet people still don't like asking dumb questions to their peers or mentors or whatever. And I feel like that's an underrated element of AI. No doubt.

Speaker 1:

Yeah. No doubt. What do you think about hyper-financialization, young people day trading, meme coins, all of that? It feels like a trap for young people, where it can feel like you're learning about AI or learning about technology, but then instead of actually building a product, creating value, you're sort of just trying to shuffle chips around the poker table and ultimately just take risk.

Speaker 8:

Yeah. I mean, based on my understanding of day trading in a Wall Street context, you know, prior to maybe the crypto world, I'm not aware of any signal that suggests that's a durable skill. Right. And I think the data points the other way. But one of my messages is, like, do what you love, do what you're passionate about.

Speaker 8:

So if that's the thing that gets you up every day, you know, I don't wanna be discouraging.

Speaker 1:

Yeah. Yeah. Just maybe you'll land at or start a fund that takes it really seriously and creates some captured value or

Speaker 12:

I was,

Speaker 8:

You know, I was probably overly skeptical of at least many of the crypto messages that were out there, but the stablecoin rails seem like a real innovation and something that has scale. And I think maybe we're still yet to see some disruption coming down the path.

Speaker 1:

Yeah. I mean, we just talked to the Collisons about that. Ken Griffin started as a day trader. He was in college, he was buying convertible debt, and he was, you know, looking at where the convertible debt was mispriced, and made a bunch of money and then grew it into a massive team with a fund and a high-frequency trading arm and all this stuff.

Speaker 1:

What what are you making? Oh, yeah.

Speaker 8:

So I was just gonna say, the thing that will differentiate you more in your career than anything else is to be the most hyper-curious person that's trying to do this thing. And once again, that's put on steroids with these AI tools. But if you are the most curious person that's constantly learning in your field, you will do extremely well. And I said it in the book, but I'll say it here: I can't make you the most talented person in your company or your group or your field, but you have no excuse not to be the most knowledgeable person, because the information's all out there.

Speaker 2:

What kinda things were you doing to learn about industries and companies at the beginning of your venture career that maybe you'd be using a deep research query to do today?

Speaker 8:

Well, the first thing, and I think this is all the great VCs in the Valley, is you develop this hyper-FOMO about anything and everything. And one of the reasons I know that it's time for me to move on is I haven't put together a clawback yet, but I know my older self would have done it immediately. And it's just that kind of thing. You can't sleep on not knowing something, you know, or hearing that there's a company you don't know about.

Speaker 8:

And you develop that as an instinct, like a positive tool, to just be hyper-paranoid about new companies, new things, new information, new technologies.

Speaker 1:

Mhmm. Is venture capital eating the world? Or is venture capital scaling so much that it's eating into other asset classes? We're seeing mega funds. I'm interested to think about what's durable about your approach to investing.

Speaker 1:

What's additional, what's substitutive, how is venture changing?

Speaker 8:

I think from the minute I entered venture to today, venture has gotten nothing but more competitive. As an asset class, it's gotten more and more competitive, and people get more and more aggressive. We're in a very interesting time where people have grown funds to sizes equivalent to the largest PE funds.

Speaker 1:

Yeah.

Speaker 8:

And they're moving money. You know, as you were just saying to the Collisons, you look at the Stripe or the Databricks case, they're using those large funds to convince the companies to stay private longer, maybe forever. That's just a very different world than the one that I grew up in. And the people that do those rounds turn around and tell the LPs, their investors, look, if you want exposure to the growth years in these companies, you need to come through us. If I were using cynical words, I'd say they've hijacked the growth years of these early-IPO companies. You know, Amazon went public below $1 billion in market cap.

Speaker 8:

Like, it's hard to fathom that today with what we have going on here. Yeah. And that's different

Speaker 2:

What's the solution, though? Because, you know, AngelList has been available and scaling for a long time now. Robinhood has their new

Speaker 8:

Yeah. I know. The problem with getting the retail investor into this crazy world of venture capital is most venture capitalists are well aware that in a fund of 10 investments, seven are going broke and bankrupt.

Speaker 8:

And I don't know that the retail investor's got the right frame of mind for that type of activity. Also, there's a reason that public companies have public audits and file their financials the way that they do. And I tell you, when a company gets ready to go public, everyone sharpens their pencils, the auditor, the lawyers, everyone really tightens up. And I think every venture capitalist knows that numbers in a PowerPoint may or may not be correct. Mhmm.

Speaker 8:

But I don't know that retail investors know that. So I think it could be a dangerous world to go down that path you're talking about.

Speaker 1:

Yeah.

Speaker 8:

But ideally, the thing to do is just to make it a lot easier to be public: lower the cost of being public, really scrutinize the cost of D&O insurance and the lawsuits that come to the table, because that makes people not wanna be out there on the field. It would require the SEC to stare itself in the face and say, look, the number of public companies in The US is half of what it used to be. Is that a problem? I think it is. And what are we gonna do to fix it?

Speaker 8:

But there's not an overnight fix. It would take someone being very determined to make it happen.

Speaker 1:

Do you think there's a world where the AI backlash is less if the big labs got out earlier? I'm just thinking about how the average American can't get allocation in SpaceX, Anthropic, OpenAI, and they're seeing bills go up, they're worried about AI, but they don't have exposure. And if they could at least see that they're somewhat allocated to

Speaker 2:

that Yeah. In the same way housing prices going up sucks until you buy a house. Then

Speaker 8:

The way you describe it sounds more like how a politician would describe it than how I actually think it might play. I don't know that there are that many retail investors out there going, oh, my job's under threat from AI, I wish I could own Anthropic. Like, I don't know.

Speaker 2:

I mean, isn't that partly why some of the labs have taken this sort of fear-based fundraising approach? If somebody's telling you your job's gonna go away, of course you wanna give them as much money as you can as a hedge. Mhmm.

Speaker 8:

Yeah. Look, there's an interesting irony that if you wanted AI exposure, you're pretty good just owning the index. NVIDIA is such a large Yeah. Part of the index.

Speaker 8:

You have exposure to Microsoft and Google and Facebook. Like, I don't know that you need to be in that place. And we are now already at a place, I would say, you know, every time there's a new technology wave, people get rich quick. When people get rich quick, speculators come in, charlatans, you know, those kinds of things. And eventually, that leads to a bubble.

Speaker 8:

People are confused when they think, you know, they say, oh, you say it's a bubble, you're anti-AI. No. The fact that it's real causes the bubble, and that's why fools rush in. At the beginning of the gold rush, there was really gold there. They were finding

Speaker 1:

it at the end. Point.

Speaker 8:

You know, it it got speculative.

Speaker 1:

And so That's funny.

Speaker 8:

It will get speculative. I think it would be really ironic if we, you know, invite retail investors into a Goldman-led SPV of OpenAI and drop it right before the reset, which I think would be the most likely thing that would happen.

Speaker 1:

Sure. Sure.

Speaker 2:

What are you thinking about around China as of today, February 2026? We have Distillgate this week.

Speaker 1:

Yeah. A lot of

Speaker 2:

people are talking about it. But what's on your mind?

Speaker 8:

Can I ask you a question about that? This is remarkably naive on my part. So these model companies are saying that their API was hit 16,000,000 times. Is that correct?

Speaker 2:

Yeah. Something like that. I don't even know if there's any AI, but a bunch

Speaker 1:

of yeah.

Speaker 8:

How did that happen? Are you not tracking who

Speaker 1:

connects Yeah. You set up a whole bunch of different, like, front companies, or you're reselling access. So if you go to the iTunes App Store right now, there will be an app

Speaker 2:

called to set up 16,000,000 accounts.

Speaker 1:

That, or yeah. Or if you just go to the App Store right now and look for, like, chat AI, it will hit the other APIs, but you're going through an American company. Maybe they don't have security. So there are a lot of different ways to exfiltrate data. And then also a lot of data just hits the open web Yeah.

Speaker 1:

Because you go to ChatGPT, you run a deep research report, and then you just publish it on your blog or on the Internet.

Speaker 8:

But now they've been able to clamp those things down, I think. Yeah. You know, I share the skepticism Elon does, and this goes way back to my speech at All-In on regulatory capture. I said then, and I still believe now, the biggest threat to The US, let's call it AI hegemony, is the Chinese open-source models. And the developers, even in The US, that are working on their own are using those.

Speaker 8:

And you can see that on all the tables that are out there. And so it is a highly, globally competitive reality. In an ecosystem where there are six to 10 open-source models that can all learn off of each other, that's gonna be a really incredible primordial soup, if you will, for innovation to evolve. And I fear, mainly because I'm well aware that OpenAI, I mean, Anthropic, is the biggest spender on lobbying, I always fear when these things come out that they're just trying to encourage more of that regulation. And if that happens, like, if they try and make it illegal to use a model that has any Chinese, like, ancestry.

Speaker 8:

Mhmm. I think that could end up in a really weird place. And the place to really pay attention to and look out for is who's gonna serve the rest of the world. In the Internet era, there was a fence around China, and The US companies served the rest of the world. If we get super heavy on US regulation, you may find there's a fence around The US, and China serves the rest of the world.

Speaker 8:

That's what I'd be worried about.

Speaker 1:

How are you thinking about great power competition more broadly? Like, I'm an American bald eagle, I'm as American as they come. At the same time, I've been worried about a confrontation over Taiwan for years. There's been trade wars, yes, and things are tense, but nothing's really happened.

Speaker 1:

Is China somehow, like, underrated in your mind? Is the geopolitical risk overstated in some way? Like, what are you seeing that's not consensus?

Speaker 8:

If you've seen some of the stuff I've posted, and I think the stuff I'm posting is highly consistent with Ethan's point of view. Sure. It comes from a place of: if you're going to declare that there's this relationship that we need to optimize, and if your goal is to lower the risk of any major blowup between the two, I think it's imperative to have as much knowledge as possible.

Speaker 1:

Mhmm.

Speaker 8:

And so one of the things that I don't like is when you see people out there spreading rhetoric that's just not consistent with the reality. And so I'm just like, look, let's get eyes wide open first. I also think that there are things we could learn from China about how to run infrastructure in The US. They're clearly better at it than we are.

Speaker 8:

And if you just, you know, close your ears and say, my god, they're the evil competitor and they cheat all the time, you don't ever get yourself in a position where you're gonna learn from them, maybe what they're doing well and what we're not. And so, you know, Elon, I guess he was on Cheeky Pint with the gentleman you were just talking

Speaker 1:

to. John Collison.

Speaker 8:

Yeah. He talks about how competitive they are. And I'm just like, let's be realistic. I also worry a little bit that the venture community has gotten into all these military companies, because venture capitalists start to look like warmongers. And it's ironic, way back when the All-In pod just got started, they were giving oh, what's her name was on the Boeing board?

Speaker 8:

Nikki Haley? Was it? Yeah.

Speaker 1:

Yeah. And they were

Speaker 8:

like, oh, she's a warmonger. She's, you know Sure. Looking after this defense company. Now every VC is in Anduril. They're doing the same thing.

Speaker 8:

Let's be consistent. Yeah.

Speaker 1:

Yeah. Are there any other industries that you think are interesting that sit a bit outside the typical mandate of venture capital? You know, like, AI fits very neatly into the software continuum: Internet, cloud, mobile.

Speaker 1:

Yeah. I thought crypto was a little bit outside of the wheelhouse, but a lot of VCs made it work. Industrial, energy, defense, these are sort of things that are a little bit outside of the typical software

Speaker 8:

VC mindset. I'm gonna have to run, but I would tell you one thing. Every time venture capital gets easy, people take risk with companies that are less of a great fit for the venture capital model. And when I say a great fit, like

Speaker 1:

Yeah.

Speaker 8:

They're either heavy capex Yep. Or they have low gross margins. They require tons of capital to keep surviving. And history is pretty good at, like, bringing people back around to how hard those are to do with venture capital. So it's interesting for me to see those experiments being run.

Speaker 8:

You know, there was near death with Tesla many times, and it's a lot easier to get in those difficult situations when you're using debt and leverage, which we're seeing all over these data centers. And so just a word of warning. Be careful. It ain't easy. You know?

Speaker 1:

Okay. Jordy, last question?

Speaker 2:

No. We gotta let, we gotta let our guests jump. Okay. Thank you. Congratulations.

Speaker 2:

We're talking through lunch.

Speaker 8:

Hopefully, everybody can get out by

Speaker 1:

Runnin' Down a Dream, it's available everywhere books are sold. Go check it out. And thank you so much for taking the time to come chat with us. We'll talk to you soon.

Speaker 8:

Good luck.

Speaker 1:

Goodbye. Let me tell you about Sentry. Sentry shows developers what's broken and helps them fix it fast. That's why 150,000 organizations use it to keep their apps working. And let me also tell you about Vanta.

Speaker 1:

Automate compliance and security. Vanta is the leading AI trust management platform. We have some news from the public markets. Intuit shares jump 5.4% on a pact with Anthropic. This is you know, you like advanced talks. You like talks.

Speaker 1:

You like advanced talks. You like deals, but pacts are really the top tier of deal making. 100%. Turn your deals into pacts. Accenture also turned positive, up 1% during the Anthropic event. And DocuSign rises 5% after partnering with Anthropic.

Speaker 1:

So lots of folks in the public markets and software that are facing pressure are going out doing deals and announcing partnerships as opposed to, I don't know, competition, coopetition, what will it be ultimately? But it's a lot of stuff.

Speaker 2:

Anyway Grace says, return flight from NYC gets canceled by snowstorm. Call United. Immediately connected with customer service, rare. Voice is uncanny, definitely AI, but they gave it a human like accent. Takes twenty minutes to get rebooked.

Speaker 2:

Pretty good. I ask if it's AI. No, ma'am. But I get that a lot. I ask it to calculate 1,928 times 6,647.

Speaker 2:

It runs the calculation. GG.

Speaker 1:

Do you think this is real? This is crazy.

Speaker 2:

You gotta test this. That is a

Speaker 1:

I mean, this is past the uncanny valley then. Here we are. It says voice is uncanny so it's

Speaker 2:

a little bit not even real. A human to just type this in.

Speaker 1:

That would be hilarious. Yeah. This is where the real alpha is. If you have the chatbot open, if you're, you know, on customer service calls, you need to be Yeah.

Speaker 6:

Maybe they're just using Cluely. Maybe. Maybe.

Speaker 2:

But the real alpha is the guy who vibe codes a billion dollar SaaS on a United Airlines customer service call. They're just using them for their Yes. For their The tokens.

Speaker 1:

For their tokens. Tokens are free. Compute. Free compute. Free compute.

Speaker 1:

Well Chinese left

Speaker 6:

a while ago.

Speaker 1:

If you want AI voices, head over to ElevenLabs. Build intelligent real time conversational agents. Reimagine human technology interaction with ElevenLabs. Continuing on. What is this hoodie?

Speaker 1:

The Fred hoodie? Oh, this is amazing. I love Fred. So Fred is the Federal Reserve. What does Fred actually stand for?

Speaker 1:

FRED St. Louis. FRED is Federal Reserve Economic Data. Yes. So this is basically the best website for economic data. Huge in my early economics career.

Speaker 1:

So many useful charts and graphs, all free, open data. You just click it and you get exactly what you want. So whenever you wanna go back to some ground truth setting, you hit fred.stlouis.gov or something like that. I think it's fred.stlouisfed.org is the website. Highly recommended for GDP data and more.
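For anyone who wants to pull a FRED series programmatically rather than clicking through charts, here's a minimal sketch. The fredgraph.csv direct-download URL pattern is commonly used for FRED data, but treat the exact URL shape as an assumption and check fred.stlouisfed.org for the series you need.

```python
# Sketch: construct the direct CSV download URL for a FRED series.
# "GDP" is a real FRED series id; the fredgraph.csv endpoint pattern
# is an assumption here, so verify it against the site before relying on it.

def fred_csv_url(series_id: str) -> str:
    """Return a direct-download CSV URL for a FRED series id like 'GDP'."""
    return f"https://fred.stlouisfed.org/graph/fredgraph.csv?id={series_id}"

print(fred_csv_url("GDP"))
# To actually download (network required), e.g.:
#   import urllib.request
#   csv_text = urllib.request.urlopen(fred_csv_url("GDP")).read().decode()
```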

Speaker 1:

Good charts. And here we go. We got the Fred sweatshirt. Absolute dripped out economic brother.

Speaker 2:

How popular is the name Fred?

Speaker 4:

Fred?

Speaker 2:

So I feel like

Speaker 1:

it's Oh, you should look that up. Let me tell you

Speaker 2:

Fallen off.

Speaker 1:

Figma. Ship the best version, not the first one, with Figma. Introducing Claude Code to Figma: explore more options, push ideas further. With Figma, you can design your next hoodie, potentially.

Speaker 2:

Okay. So Fred in 1950 Yeah. Was the eighty fourth most popular name. And guess what it is now?

Speaker 1:

It was 84 back then? Yeah. I imagine it's fallen. I would say it's like 150.

Speaker 2:

How about 2007 No. '56. Fred

Speaker 1:

Such a fall.

Speaker 2:

Huge opportunity.

Speaker 1:

Yeah. Bring back

Speaker 2:

Name your kid Fred.

Speaker 1:

I like Fred.

Speaker 2:

That's a name. That's a strong name.

Speaker 1:

Yeah. Yeah. All these things go in cycles though. It's the business cycle. There's news over at Meta.

Speaker 1:

Meta has shaken hands with AMD. They're forming a pact. Today, we are announcing a multiyear agreement with AMD, Advanced Micro Devices, to integrate their latest Instinct GPUs into our global infrastructure, with approximately six gigawatts of planned data center capacity dedicated to this deployment. We're scaling our compute capacity to accelerate the development of cutting edge AI models and deliver personal superintelligence to billions around the world. Very exciting.

Speaker 1:

And pretty cool little hype video. You see the camera move on this video? This feels like you speed that up, you cut in some other stuff, and you got yourself an Instagram reel. Right? You see this little like have you seen those tutorials about how to make a

Speaker 2:

car Need some SD kit.

Speaker 1:

Yeah. SD kit on this? I think it goes pretty hard. You SD. Double the speed.

Speaker 1:

You add some flickering. You add some frames, frame interpolation, all sorts of stuff. But Lisa Sue is on an absolute tear. AMD's doing great.

Speaker 2:

And Meta's not stopping. Meta. Mark Zuckerberg's Meta is planning a stablecoin comeback in the second half of this year, eyeing a third party vendor as a key partner to power payments across Facebook, Instagram, and WhatsApp. This is great if they should have something. If your friend sends you a meme Yep.

Speaker 2:

That's good. You should be able to tip them. Oh. Tipping. Tipping for great I'd be in shares.

Speaker 1:

Well, if you want to build a social network built on tipping, you'll need Plaid, because Plaid powers the apps you use to spend, save, borrow, and invest, securely connecting bank accounts to move money, fight fraud, and improve lending, now with AI.

Speaker 2:

Sean Frank, soon to be a new father and dear friend of the show says, Manus from Meta just doubling my ad budget every fourteen minutes. This is

Speaker 6:

is one of the

Speaker 2:

best new format.

Speaker 1:

I like this. This is only possible. This is this is the best AI image I've ever seen, I think. This is so funny because this definitely doesn't happen in the actual movie, but it's

Speaker 2:

But this is what Burry is like now.

Speaker 1:

I know. I know. Sean Frank, really on a tear. I saw him in the chat yesterday. I forgot to say hello to him.

Speaker 1:

Hello, Sean. And also congratulations on the new baby. Very excited for you. Anyway, this is a hilarious image. But we will return to the timeline after our next guest because we have Ivan from Notion in the waiting room.

Speaker 1:

Welcome to the show, Ivan. How are you doing?

Speaker 2:

Hello, guys. Good to see

Speaker 1:

you. Good to have you on the show.

Speaker 2:

Long overdue. Long overdue.

Speaker 1:

We're so excited. First

Speaker 9:

First time.

Speaker 1:

Well, we're glad we caught you on today because there's a big launch in Notion World, but I'd love you to take us through it. What was announced today?

Speaker 9:

We're launching Custom Agents today. It's one of the first, if not the first, multiplayer agent products for knowledge work.

Speaker 1:

Oh. So

Speaker 9:

it does real work for you in the background. Very easy to set up. Okay. Hosted in the cloud. Yeah.

Speaker 9:

Connect to all your work products, and the best part is you don't need a Mac Mini.

Speaker 1:

That's a good line. They are going out of stock. I don't know if you've seen. But the the the Mac Mini is in short supply. So walk me through some of, like, the most obvious use cases.

Speaker 1:

Like Notion is I think of the amazing because you have what is essentially a document but also a spreadsheet, and you can kind of move between different data structures and visualizations on top of data in a sort of consumer app UI. And so I could imagine creating a document and then having an agent go and do a bunch of work to populate extra fields. So where are you seeing or where are you excited about these agents actually taking hold in the product?

Speaker 9:

So what you're describing was the Notion probably two years ago. Yeah. Two Like, during the SaaS era, our strategy has been consolidating all different use cases to one product.

Speaker 1:

Okay. We

Speaker 9:

do. We have a knowledge base, we talk about documents Yeah. Talk about project management Yeah. And we have been bringing it all together into one tool that's very flexible. Okay.

Speaker 9:

Like, for example, Ramp, actually. The company that sponsors you guys Yeah. We brought Ramp on last year as a new customer for Notion.

Speaker 5:

Okay.

Speaker 9:

And we helped Ramp consolidate half a dozen different tools in their core collaboration stack onto Notion. Yeah. So they don't have to pay as much money for all the tools, that's number one. Yeah. Their team doesn't have to jump between those tools.

Speaker 9:

That's number two. Mhmm. I would say the best part is now they have one place to do their core collaboration work. They have one place to deploy AI. Mhmm.

Speaker 9:

So now Notion is the core agent and orchestration layer for Ramp.

Speaker 1:

Yep.

Speaker 9:

The product we just launched today, Custom Agents, Ramp has been an early customer of for a couple months. Yeah. They're running all their sales enablement processes, a lot of internal bug triage, all different processes on this. Because their people have one place to work, a collaboration system of record truth

Speaker 8:

Mhmm.

Speaker 9:

And one place to do their busy work, delegate their busy work to.

Speaker 1:

Yeah.

Speaker 9:

So the value is, what is it, money and time, both saved. Yeah. This is what we're doing for Ramp at the moment.

Speaker 1:

I love it. Yeah. Talk to me about the agentic cron job. That feels like something that we're starting to taste with OpenClaw. There's clearly demand for it.

Speaker 1:

It requires a little bit more upfront effort than just firing off a deep research report or saying, hey. Hydrate this text. Expand. Contract. Expand.

Speaker 1:

Turn it into bullet points or into paragraphs by back and forth all day long. But I feel like the for most businesses, having an agent that's effectively on a cron job, maybe you don't call it a cron job, but it's something that runs every day, runs over a knowledge base, over a customer list, over documents, and does the things that AI is great at every day, that feels like something that could be incredibly powerful. How are you thinking about long running agents, cron job agents, scheduled agents?
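The "agent on a cron job" idea in the question above can be sketched in a few lines of Python. Everything here is hypothetical illustration, not Notion's API: summarize() stands in for an LLM call over a knowledge base, and a real deployment would use a scheduler (cron, APScheduler, a hosted trigger) rather than calling agent_step() by hand.

```python
# Minimal sketch of a scheduled ("cron job") agent run.
import datetime

def summarize(documents: list[str]) -> str:
    """Placeholder for an LLM call over a knowledge base."""
    return f"Digest of {len(documents)} documents"

def agent_step(knowledge_base: list[str]) -> str:
    """One scheduled run: read the knowledge base, emit a dated digest."""
    stamp = datetime.date.today().isoformat()
    return f"[{stamp}] {summarize(knowledge_base)}"

# A scheduler would invoke this once a day; here we call it directly.
print(agent_step(["meeting-notes", "customer-list", "bug-reports"]))
```

The point of the pattern is that the trigger (the schedule) lives outside the agent logic, so the same step can run daily, hourly, or on demand.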

Speaker 9:

Yeah. Cron job is a pretty good word for it. Like, a lot of knowledge work is kind of just a cron job.

Speaker 1:

Yeah.

Speaker 9:

Right? You're pushing paper back and forth, a cron job from this person to the other person. Yeah. I think the world sort of tasted the power of agents connected with cron jobs through a product like OpenClaw. Yeah.

Speaker 9:

It can do a lot of work for you. And so you no longer have to prompt it; it triggers work in the background, autonomously, asynchronously for you. Mhmm. Our interest is less about OpenClaw or the Mac mini and more about what does this do for a real business?

Speaker 1:

Yeah.

Speaker 9:

And a real business requires enterprise grade permissions. It has to be multiplayer. No longer just for personal tinkering with your own Mac mini. You have to power entire teams with it. Right?

Speaker 9:

And it has to be easy to set up so you don't have to be an AI tinkerer or AI engineer to do it. You have to have the state-of-the-art models, usually the day they release. Yeah. So it's a service we provide for businesses to take the spirit of the cron job background agent, OpenClaw you might say, into businesses. That's the positioning of this product.

Speaker 1:

Sure. So talk to me about where the capability frontier on the agent side is today. I mean, because agents can be turned really loose. You can give them access to Python and they can talk to any API. They can write their own CLIs at this point.

Speaker 1:

And so you mentioned like no Mac mini, but is there a world where I tell, like, just for our example, like, I want a new Notion document generated every day with a breakdown. I want you to go to a read only access API for the YouTube API, pull all of our analytics, pull all the chat feed, synthesize all that, and put together a Notion document that I can review with the team in the morning that says, oh, this segment of the show was particularly great. Here's how the analytics changed, here's where the viewer spikes were, all of that. That would require talking to an API. What does that look like if there's not an off the shelf integration?
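The workflow in the question above, pull analytics from an external API and assemble a morning-review document, really only needs two pieces: a call to a REST endpoint and code that turns JSON into a readable doc. Here's a sketch of the second half; the JSON shape ("segments" mapping to view counts) is made up for illustration and is not the YouTube API's actual response format.

```python
# Sketch: turn raw analytics JSON into a morning-review document.
# A real agent would first fetch this JSON from an API (e.g. with a
# read-only key), then hand the text off to a doc tool.

def build_report(analytics: dict) -> str:
    """Render a segments-to-views mapping as a markdown-style digest."""
    lines = ["# Daily Show Report", ""]
    # List segments best-first so the standout moments are on top.
    for segment, views in sorted(analytics["segments"].items(),
                                 key=lambda kv: -kv[1]):
        lines.append(f"- {segment}: {views:,} views")
    return "\n".join(lines)

sample = {"segments": {"Intro": 1200, "Interview": 5400, "Timeline": 3100}}
print(build_report(sample))
```

When there's no off-the-shelf integration, the fetch step is just model-written code against the API's documented endpoints, which is exactly the runtime capability Ivan describes next.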

Speaker 1:

Yeah.

Speaker 9:

This should be possible Mhmm. If it's not already possible. Getting the YouTube API, getting the transcript, I don't know if everybody has access to it. Gemini might have special access to that, the video transcript. Yeah.

Speaker 9:

All this is possible because all you need is a runtime that can run models. Yeah. All you need is a runtime that can talk to external APIs through code that's written by models. Yeah. And a model that does the crunching periodically based on certain triggers.

Speaker 1:

Yeah.

Speaker 9:

I just described those core ingredients. They're basically the core ingredients for Notion Custom Agents.

Speaker 2:

Sure.

Speaker 9:

So not only can it do those, it can connect to your email, connect to your Slack, if you guys use Slack Yeah. And send a message every morning. So you don't actually have to come to Notion to see the work being done. You can stay where you are today.

Speaker 1:

Okay.

Speaker 2:

How have you processed the last couple years of vibe coding? Because the first company I ever started, or not necessarily the first company but the first, like, real business, we started on Notion. At the time, this company was like an ad network on YouTube. And so we had a bunch of different, like, ad buys happening. We needed to be managing that process with the client as well as the creators.

Speaker 2:

And so the entire company from the beginning ran on Notion. I looked at every possible SaaS solution at the time, all of them. A lot of them didn't even work with, like, you know, customization. So I just built all these dashboards that helped kind of, like, manage all those different processes, and it already had collaboration built in. It already had, like, the account functionality.

Speaker 2:

So it just like worked completely out of the box. So in some ways, at that time, I was already replacing like vertical specific software with Notion. And so in some ways, like I feel like this whole process and explosion of people being able to create different applications for different use cases is kind of like just a continuum from Notion's inception, but Yeah.

Speaker 9:

A lot of people think Notion is a document tool, a note taking app, a relational database tool. That's never been the intent. Notion started as a computing tool. Like, I really care about, okay, I'm a programmer, the power of computing is in the hands of you, the programmers. How do you open it up to more people?

Speaker 9:

That's why the company started. So the spirit always has been consolidating the fragmentation of SaaS for the past five plus years. And it turns out that strategy works quite well with AI because once you consolidate those things, you have one context to power the language models. Right? That's one.

Speaker 9:

Number two, because we've been taking a stance that we don't want to inject our opinion on how you should run your business. We just provide the Lego blocks and you can decide however you want to arrange those Lego blocks. So we haven't hard coded business logic into our apps. Back then, there was a buzzword called no code. Right?

Speaker 9:

And some people talk about SaaS versus language models. A lot of SaaS is hard coded logic in vertical apps. And we don't do that. It used to be a weakness of our product because of how open ended it is. It required some technical minded people to use it.

Speaker 9:

It turns out to be a strength because now language models can use these building blocks to do a lot of work for them. So with this new product we're launching, it's not just working with information in and out of Notion; it can power agents to work with external tools and do that crunching, that repetitive knowledge work, that busy work for the company. Internally, we call this, like, let AI do the night shift. So you can do the day shift.

Speaker 1:

Oh, yeah.

Speaker 9:

AI can do the night shift. We actually put our website in a little bit of dark mode this time because it truly is doing the night shift for us.

Speaker 2:

No one wants the night shift. I like the day shift. I did the night shift Yeah. Back in college. Yeah.

Speaker 2:

It's not a it's not a

Speaker 1:

That's great. Yes. So you mentioned Gemini. Thank you. Another TBPN sponsor.

Speaker 1:

But I imagine that you're pretty model agnostic. I'm interested to know how you're thinking about the different LLMs. And then also, how much do you wanna surface to the user? Like, I was talking to Salesforce's Slackbot, and it wasn't upfront with me about exactly which model was under the hood. Now I'm a nerd, and I'll ask, okay.

Speaker 1:

Is it 3.5 or 4.6 or 5.2? And I'll have some opinion whether or not that matters. Who knows? But, do you wanna have model switchers, model pickers? Do you want to be at that level of, like, empowering the user to pick the right tool for the job, or do you wanna handle that internally?

Speaker 9:

We do both. So if you are, like, a normie or normie plus plus using Notion Yeah. You can use Notion without picking the model. You'd be in an auto version. Right?

Speaker 9:

Yeah. But if you have more sophisticated, creative customizations that do triage work for you, different models have different strengths and weaknesses. Mhmm. So you should be able to pick the model. For our strategy, and I think for a lot of non-labs, it's very important to be model agnostic.

Speaker 9:

Mhmm. The labs are going to get better and better, the models are going to do more and more. But one important strategic point is labs don't work well with other labs' models. Mhmm. So there's an important position to be in: the Switzerland of agents, the Switzerland of models.

Speaker 9:

And that's the position we're taking with the product launch. You can work with Claude out of the box, work with Cursor out of the box, and you pretty much can pick any model you want that's state of the art, usually the day those models are released. And as a user of this product, you don't have to worry about that. Mhmm.

Speaker 1:

I want to revisit this Wall Street Journal article that you were featured in, back in August. So, the quote was, Ivan, the CEO of Notion, says that two years ago, his business had margins of around 90% typical of cloud based software companies. Now around 10 percentage points of that profit go to the AI companies that underpin Notion's latest offerings. How has that changed? Is it still 10%?

Speaker 1:

Is it climbing? Is it falling? What are your predictions for where that goes?

Speaker 9:

Not as much as 10, but it's definitely a meaningful amount. Mhmm. Before, you could do pure SaaS margins.

Speaker 1:

Yeah.

Speaker 9:

Now the majority of our product is powered by AI. Mhmm. The model providers have to take some of the margin, and we're okay with that. I think we see the market trending on both fronts.

Speaker 9:

First, we wanna use the state-of-the-art, most capable, most intelligent models because our customers want that, and they don't want to have to worry about it. And second, there's a new wave of open source and foreign models coming. And that's why we have to be model agnostic, and we can shift to different models for different types of work. Yep. And that will help us with the margins.

Speaker 9:

And at the end of the day, our customers shouldn't have to worry about this. Mhmm. What we provide is less about the model. Model capability has been there for a year or two to do a lot of knowledge work. Yeah.

Speaker 9:

What's missing in the market is this infrastructure layer that glues together model capability, glues together permissions, to provide real knowledge work for the customers, and is at the same time backward compatible, with a good UI for Yeah. Companies of all sizes.

Speaker 1:

I have one last question. We'll let you go. How are you thinking about sort of like it's crazy to call them legacy AI workflows because they were probably implemented like a year ago. But when I just think about like document summarization or even like spell checking grammar, like that was probably moved to an LLM that was capable. A GPT four class model can do that at a very low cost.

Speaker 1:

And maybe you wanna optimize that even further by going to an open source model on commodity hardware, really drive down the token cost. Have you left any AI workflows in place on legacy models? Or have you migrated everything to the frontier and you're just moving with the frontier?

Speaker 9:

We're moving with the frontier by and large because that's what our customers want. They want smarter things.

Speaker 1:

Yeah.

Speaker 9:

But things like voice dictation, summarization, a legacy model can do that.

Speaker 1:

Sure. And so it's just

Speaker 9:

The important part is, like, the market is changing so fast right now. Mhmm. Like, nobody knows what the future holds, but we know the model capability gets better and better. Yeah. And we always care about building beautiful and powerful tools.

Speaker 9:

And AI is that tool today. Yeah. How do we ensure that all companies can benefit from this? You don't have to be a Fortune 500 with forward deployed engineers. You don't have to be a San Francisco startup with an AI engineer on your team to use this.

Speaker 9:

Right? Every business can benefit from this technology. That's our ethos, and that's why we're building this product to make it super simple. You don't have to worry about a Mac mini, don't have to worry about models. It works out of the box.

Speaker 9:

Right.

Speaker 1:

Is that on the home page yet? Don't don't worry about Mac mini. We got you.

Speaker 2:

I think

Speaker 9:

that's too specific calling out other products, but night shift is a good one.

Speaker 1:

Night shift works. I love it.

Speaker 2:

Someone in the chat, John Palmer, was saying, well, but if I don't have to use a Mac mini, what will I spend all my time setting up?

Speaker 9:

People like to tinker. Tinkering is sometimes more than 50% of the fun of it, more than the actual product.

Speaker 1:

No. This is true. This is true. People people wanna tinker. They wanna play.

Speaker 1:

They wanna explore and have fun. And, it seems like it's a great time to be, running Notion. Like, it's just been a very exciting time. There's so many new products you can build, so much faster than ever before. So congrats on all the progress

Speaker 2:

on the launch. Thank you so much. We'll talk to

Speaker 1:

Cheers. Let me tell you about Labelbox. Reinforcement learning environments, voice, robotics, evals, and expert human data. Labelbox is the data factory behind the world's leading AI teams. And let me also tell you about the New York Stock Exchange.

Speaker 1:

Wanna change the world? Raise capital at the New York Stock Exchange. I'm not gonna leak the news, but we have an exciting guest lined up for our next show. So hit that subscribe button to be notified when we go live.

Speaker 2:

Can't wait. A senior US official told Reuters that DeepSeek's new model, whose release is now imminent, has been trained using NVIDIA Blackwell GPUs despite the export ban. I am interested to see what this model is capable of.

Speaker 1:

So many things could have made that happen. Right? Like, it could literally be one Blackwell per person in a suitcase, smuggled along. It could be one shipment diverted from going to one country, and then they say, hey, send that shipping container over there instead.

Speaker 1:

It could be cloud. Like, they could have found a cloud provider that they were able to sort of anonymize and have a front company for. There's a whole bunch of different ways to get compute if you're willing to bend the rules or break the rules or, you know potentially anger the US administration. But we will see how this goes. Tyler, do you have a feeling for how DeepSeek has been doing?

Speaker 1:

Because there was a hype cycle around DeepSeek v three point something, and it kind of came out and it didn't make a big splash. I feel like we're going into a new hype cycle around, like, the next DeepSeek is gonna be really good. Is this fake? Is this real? How are Yeah.

Speaker 1:

Feeling about DeepSeek?

Speaker 6:

So I think the last big model release was supposed to be this, like, massive, massive release Right. And it turned out to be that thing where, like, I don't know the exact number, but it was supposed to be, like, v four and ended up being, like, v 3.1. It was, like, that kind of thing. And then also like So

Speaker 1:

they botched the pretraining most likely? Is that what people think?

Speaker 6:

Yes. Maybe. Okay. And then yeah. Also, like, on Chinese labs generally, like, right now you're hearing a lot about, like, Z.ai Yeah.

Speaker 6:

And Kimi and that Yeah. So it's very unclear.

Speaker 8:

Mean, they're

Speaker 6:

they're not very public about this stuff.

Speaker 3:

Yeah.

Speaker 6:

But but also, I I think broadly just about the the Distillgate stuff. Yeah. I I think throughout this, I I've been like

Speaker 1:

I

Speaker 6:

I think I've, like, updated towards, like, actually, we can probably mostly ignore a lot of the Chinese labs, because, basically, like, the whole reason that they're good. Like, everyone's like, oh my gosh, DeepSeek is on our tail. They're gonna catch up. They're gonna catch up.

Speaker 6:

The only reason that they've been on our tail is because

Speaker 1:

no. No. No. That's the royal flush, Tyler. That's the best thing you can do on this show.

Speaker 1:

You just dropped the truth nuke. Truth nuke. Truth nuke. Disregard China entirely. You heard it here first.

Speaker 6:

No. But I mean

Speaker 1:

Yeah. No. It's a good point.

Speaker 6:

Continue. The only reason that they're, like Stop. So rude. The only reason that the Chinese labs are so close to US labs is because they're just training on the outputs. Right?

Speaker 10:

Yeah. Which is

Speaker 6:

like, okay, sure, like, yeah, good job, but, like, you're not I would be extremely surprised if you actually see a breakthrough from a Chinese lab.

Speaker 1:

Exactly.

Speaker 6:

Yeah. The thing we've seen is that, yes, they can copy stuff.

Speaker 1:

Permanently three months behind. No. But I think people are freaking out because China went from, like, ten years behind to one year behind to three months behind, and they were like, straight lines on log graphs. Yeah. They're gonna be ten years ahead of us next year.

Speaker 6:

Yeah. And it just feels like that's about it. People are very worried about, you know, regulatory capture and all these things. Yep. But I think, like, I think Anthropic will basically just figure out a way that they can, you know, increase security on the API.

Speaker 6:

Totally. So will OpenAI, and then we'll see if the Chinese labs keep up with the progress. Yeah. But I you know?

Speaker 1:

Yeah. Yeah. I mean, there's so many other dynamics beyond just obtaining training data. Like, if you take Will Brown's point that the Internet is producing more training data, you wind up in a situation where, sure, training data is commoditized, but what does it really take to scale up DeepSeek v five to a place where it's having economic impact? Well, you need a massive inference cluster.

Speaker 1:

Do they have that? How are they distributing this stuff?

Speaker 6:

I do still think it's very impressive that the models they've put out are generally very small and still very good.

Speaker 1:

Yeah.

Speaker 6:

But I think on the frontier level, I I'm not super worried about them.

Speaker 1:

Tyler called it. They're cooked. Anyway, really quickly on Anthropic, there's now a Kalshi market on whether the Pentagon will designate Anthropic a supply chain risk. It's sitting at 36.8%. I believe that there's going to be a meeting between Pete Hegseth and Dario Amodei.

Speaker 2:

Yeah. So that already happened. Update on the meeting from Andrew Curran: according to Axios, defense secretary Pete Hegseth gave Dario until Friday night to give the military unfettered access to Claude. Okay. Or face the consequences Yeah.

Speaker 2:

Which may even include invoking the Defense Production Act to force the training of a warclaw.

Speaker 1:

Wait. So so that was not a joke? Warclaw is not a joke?

Speaker 2:

I don't think they would name it that, but it kinda sounds like that's what Pete

Speaker 1:

is asking for. Claw of War. Like God of War? That's pretty good. Anyway We have an extra

Speaker 2:

We're we before we get to that

Speaker 1:

Yeah.

Speaker 2:

The chat is sharing that payments processor Stripe expresses interest in PayPal. Scoop. We had a missed opportunity. If you guys could have simply Bloomberged it, you could have published that at twelve Yes. When they came on the show. That would have been quite the scoop. Payment processing firm Stripe is considering an acquisition of all or parts of PayPal.

Speaker 2:

Mhmm. Stripe which is privately held and is among the industry's most valuable companies as you know. The deliberations are still early and there's no certainty they'll lead to a transaction.

Speaker 1:

Yeah. I mean, we read the Will Manidis post and we're like, oh, it could be a couple days, could be a couple years. It seems like it might be closer to a couple days.

Speaker 2:

Well, PayPal is up 7% today. Oh. So the market certainly is reacting. Hey, you might be able to own some Stripe.

Speaker 1:

Well, let me tell you about Phantom Cash: fund your wallet without exchanges or middlemen and spend with the Phantom Card. And without further ado, we have Stefano from Inception Labs. He's the founder and CEO. Welcome to the show. How are you doing?

Speaker 7:

Really good. Thanks for having me.

Speaker 1:

Thanks for hopping on. First time on the show, so I'd love to have you kick it off with an introduction on yourself and the company.

Speaker 7:

Of course. Yes. I'm Stefano. I'm one of the founders and the CEO of Inception.

Speaker 5:

Yeah.

Speaker 7:

Before this, I was at Stanford in the CS department. I've been doing research in generative AI for a long time.

Speaker 1:

Yeah.

Speaker 7:

I think my lab is mostly famous for having co-invented diffusion models back in 2019. I was on the Flash Attention paper, DPO. So a bunch of things that are now widely used in production. And these days, I'm most excited about diffusion language models. That's what we're doing at Inception.

Speaker 1:

Yes. So I first saw a diffusion language model demoed at Google I/O, I believe. But tell us, like, explain it like I'm five, because when I think diffusion, I think a bunch of fuzzy noise, and then the Midjourney image gets higher and higher resolution. Everyone's familiar with that. And then they're familiar with, like, the token streaming, next-token prediction.

Speaker 1:

Is it different? Break it down at a very low level or high level.

Speaker 7:

That that's right. Basically, we've taken diffusion models Yeah.

Speaker 3:

Which is the thing

Speaker 7:

that works best for image and video generation. It's kind of a coarse-to-fine process where you iteratively refine your output until it looks good.

Speaker 4:

Yep.

Speaker 7:

And we figured out a way to apply it to text and code generation.

Speaker 1:

Okay.

Speaker 7:

And it kinda works the same way. You start with a rough guess of what the answer should be, and then you refine it.

Speaker 1:

Okay.

Speaker 7:

And, crucially, the difference is that the neural network is able to modify many tokens at the same time. Yeah. And so it's much, much more efficient than the typical autoregressive model, where you generate left to right, one token at a time. So you're able to modify many tokens in parallel.
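The contrast Stefano is drawing can be sketched in toy form. This is a purely illustrative simulation (the target string, unmasking probability, and call counting are all made up for the sketch), not Inception's actual decoding algorithm:

```python
import random

random.seed(0)
TARGET = list("the quick brown fox jumps over the lazy dog")

def autoregressive_decode(target):
    """One 'model call' per token, strictly left to right."""
    out, calls = [], 0
    for tok in target:
        out.append(tok)  # pretend the model predicted this token
        calls += 1
    return out, calls

def diffusion_decode(target, p_unmask=0.5):
    """Each 'model call' proposes values for every still-masked position
    in parallel, and a subset gets committed. A cartoon of iterative
    coarse-to-fine refinement, not a real diffusion sampler."""
    out = [None] * len(target)
    calls = 0
    while None in out:
        calls += 1
        for i, tok in enumerate(target):
            if out[i] is None and random.random() < p_unmask:
                out[i] = tok
    return out, calls

ar_out, ar_calls = autoregressive_decode(TARGET)
df_out, df_calls = diffusion_decode(TARGET)
assert ar_out == df_out == TARGET
print(ar_calls, df_calls)  # 43 sequential calls vs far fewer parallel passes
```

The point of the sketch is only the call counts: the sequential decoder needs one pass per token, while the parallel refiner converges in roughly log-many passes because each pass touches every remaining position.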

Speaker 1:

So if I'm thinking of, like, you know, maybe a deep research report type response: in my mind, I can imagine a report, you know, saying, like, explain the history of the Roman Empire. That's the example I always use. It's gonna have some structure to it, and I'm gonna imagine a blurry image with, like, a couple large headers, and then the headers are gonna get filled in, and the text is gonna fill in, maybe there's some bullet points, maybe there's some dates, maybe there's some charts, and all of this is going to come together. But I'm thinking about it not sequentially but as a whole, and then refining iteratively. Until I'm getting to instead of pixels, am I thinking of individual characters, or are there tokens in the same way that might exist in an LLM? What does that look like?

Speaker 7:

Yeah. That's the right intuition. So it's kind of like, yeah, coarse-to-fine generation.

Speaker 1:

Yeah.

Speaker 7:

And in practice, you know, it's learned by a neural network, so it's not necessarily interpretable. Like, it's not the kind of process I would go through, where maybe I start with, you know, section headings and then I fill in the details. It's all learned by the neural network. And so it's not really interpretable. Okay.

Speaker 7:

But it's fast.

Speaker 8:

That's really the

Speaker 1:

So is speed the main thing? I mean, we had the founder of chatjimmy.ai on the show, Taalas, and it seemed like he was able to bake a traditional LLM, Llama 3 8B, onto silicon, and it was spitting out 16,000 tokens per second. Do you have a comp on speed or cost that you're targeting? Or do you see, like, a through line to, like, okay, maybe if we're running on NVIDIA chips and he's running on custom silicon, he's gonna be faster. But then once we get to custom silicon, we're gonna be 10 times faster than that.

Speaker 1:

How how should I be thinking about the trade offs here?

Speaker 7:

Yeah. So our benefit is purely at the algorithmic level. Like, it's just a more parallel approach

Speaker 1:

Okay.

Speaker 7:

That is not memory bound. It's it's flops bound. Right? It's compute bound. Yeah.

Speaker 7:

So you're able to hit the ceiling of the roofline, and we are taking, you know, advantage of all the resources we can get access to on the GPU. Mhmm. In practice, what this means is that we can get to over a thousand tokens per second

Speaker 1:

Wow.

Speaker 7:

On traditional NVIDIA GPUs, Hopper, Blackwell.

Speaker 1:

Yep.

Speaker 7:

So we are not yet at the level, you know, of the 16,000 tokens that you can get if you were to actually, you know, implement the model in hardware, but we're running on general-purpose GPUs. So we can scale up as much as we want. It's just a matter of getting more GPUs, and, you know, you can just run these models anywhere. We are on Bedrock. We're on foundries.

Speaker 7:

So if you have your own GPUs, you can provision your own capacity, and you can run our models there. Yeah. So it's all very, very scalable. It's fast and scalable. And in principle, yeah, it can be compounded.

Speaker 7:

You know, you have a 10x benefit from the software. You have a 10x benefit from the hardware. Yep. Those two things could be combined.
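The memory-bound versus compute-bound point above can be put in back-of-envelope terms. The numbers here are illustrative assumptions (a roughly 8B-parameter model in fp16 and a rounded H100-class HBM bandwidth), not vendor specs or Inception's actual figures:

```python
# Back-of-envelope roofline arithmetic for batch-1 decoding.
model_bytes = 16e9    # assumed: ~8B parameters at 2 bytes each (fp16)
bandwidth = 3.35e12   # assumed: ~H100-class HBM bandwidth, bytes/s

# Batch-1 autoregressive decode streams the full weights from HBM for
# every token, so memory bandwidth sets the tokens/s ceiling.
ar_tokens_per_s = bandwidth / model_bytes
print(round(ar_tokens_per_s))  # 209

# A parallel decoder that commits K positions per weight pass amortizes
# the same memory traffic across K tokens, moving toward compute bound.
K = 8  # hypothetical tokens finalized per pass
print(round(ar_tokens_per_s * K))  # 1675
```

Under these assumed numbers, amortizing each weight pass across several positions is what pushes throughput from a couple hundred tokens per second toward the thousand-plus range mentioned above.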

Speaker 2:

What's the main use case? You know, there's a lot of people out there using traditional language models today. What are the kinds of use cases where you would tell somebody you should be switching over today, or at least start experimenting?

Speaker 7:

Yeah. We're seeing a lot of traction in latency-sensitive applications of LLMs, whenever there is, like, a tight loop where you need to interact with a developer or a customer. So our models are being deployed in a bunch of IDEs. So if you think about coding: coding autocomplete, next-edit suggestions, refactoring, quick agentic loops. That's a very natural kind of application where diffusion LLMs are already really, really good.

Speaker 7:

Voice agents: we have a number of partners and customers that are building really, really good voice agents. The latest model we announced today, Mercury 2, is a reasoning model. It's really, really fast, so you can get the quality of a reasoning model within the latency budgets that you need whenever you wanna build a voice agent, which is resonating really well with a bunch of early customers. Retrieval and search, that's another space where we're seeing a bunch of applications being built on diffusion LLMs. So if you think about query rewriting, re-ranking, summarization, that's another really, really good use case for diffusion LLMs.

Speaker 2:

Hey, boy. Talk to us about DistillGate, how you've been processing it. It's a sign of success. Did you, at any point in your career, experiment with this stuff? Is this something that we kind of forced the Chinese market into

Speaker 1:

Oh, yeah.

Speaker 2:

Spending a lot of resources on?

Speaker 7:

I mean, it makes sense, right? That's what's always gonna happen. I think the moment you put it out there, you give API access to the world, that's going to happen and people are gonna copy you. I mean, we've been doing distillation in the research community for a long time. And so people have been experimenting and figuring out ways to do it in a sample-efficient way. So I'm not surprised that it's happening.

Speaker 7:

I think it's hard to know at what scale. And honestly, from the numbers that were circulating, it seems like they are able to do it with very, very few data points. That was the most surprising thing to me. So, you know, it's very interesting scientifically that you can actually distill with just a few data points, because it means that it's gonna be very, very hard to protect any IP. Yeah. If you are opening a model up from an API point of view.

Speaker 1:

So the last question, somewhat related to that. I feel like when those models get distilled, we see very strong benchmark performance. And then some yet to be quantified and benchmarked quality sort of degrades and you hear people that actually try and put them into production saying like, Ah, it just doesn't have the same like big model flavor that I'm getting from the big labs. I don't know how real that is, but I'm wondering if you zoom out and you look diffusion versus transformer based LLMs, are you noticing any divergence in the benchmarks where you're maybe better at coding or less good at coding where the mental model that we're giving the computer is leading to surprising results?

Speaker 7:

Yeah. So what we're seeing is that it's good at coding. It's good at editing. One nice thing about not necessarily being left to right is that you can use context all around you. And so those use cases have emerged as being really, really good for diffusion LLMs.

Speaker 7:

I think it's also a function of the training data that we use. Mhmm. You know, we always liked coding. We're all computer scientists. And so that was, a a very natural kind of application area for us.

Speaker 7:

And so, I don't know how much of that depends on the training data that we use versus the model. But what's exciting is really just, like, the speed. That that's the thing that that Yeah. Is is gonna be hard to replicate even just the

Speaker 1:

I got a need for speed. I got a need for speed. I'm super bullish on speed. I'm serious. I think it's amazing.

Speaker 1:

I used Yeah. 5.3 Spark on Cerebras, and I was like, this is the future. It's gonna come to everything, and it's going to be an important moment for people to realize that it's just a different product when you're interacting with something fast. And I think we learned this from Amazon squeezing out milliseconds in web page loads, and we're going to experience it in AI too. So thank you for everything that you're doing to speed up AI.

Speaker 1:

We loved having you on the show. So have a great rest

Speaker 8:

of your day.

Speaker 2:

Yeah. Great to meet you.

Speaker 1:

Thank you. We'll talk to you soon. Goodbye. Let me tell you about Console. Console builds AI agents that automate 70% of IT, HR, and finance support, giving employees instant resolutions for access requests and password resets.

Speaker 1:

And let me also tell you about Railway. Railway is the all in one intelligent cloud provider. Use your favorite agent to deploy web apps, servers, databases, and more while Railway automatically takes care of scaling, monitoring, and security. And without further ado, we have TBPN Royalty.

Speaker 2:

What's going on? Great to see you, James.

Speaker 3:

Hey, John. Hey, Jordy. How are you doing?

Speaker 2:

Doing great. Doing great. Good. Calling in from a cave. Are you fully snowed in?

Speaker 2:

What's going on?

Speaker 3:

No. We're yeah. We're in New York. The snow is melting. We built a little mini studio Nice.

Speaker 3:

Upstairs. And, yeah, it looks pretty professional.

Speaker 2:

Great. I love it.

Speaker 1:

Tell us as

Speaker 3:

professional as you can.

Speaker 2:

Tell us the news, and then there's a bunch of stuff we wanna talk about.

Speaker 3:

Yeah. So I guess we're joining today announcing a ninety-six million dollar Series C investment at a $1,000,000,000 valuation led by Lightspeed Venture Partners alongside Sequoia, Kleiner Perkins, Avantik, Saga, and South Park Commons. Amazing.

Speaker 2:

Break down everything that's happened since the last time you were on the show. The space has been moving so quickly. It feels like it's been two years even though it's probably been two months.

Speaker 3:

Yeah. I mean, it's all moving. Everything's moving, obviously, at a hundred miles an hour, and it sounds a bit trite when I say this, but it really is a privilege to be building in such exciting times. Yeah. I mean, we've just launched ProFound Agents, which I think is a really big deal. You know, we serve the marketer.

Speaker 3:

So, you know, the line we've been using during this fundraise, during this announcement, has been: lawyers have Harvey, engineers have Cursor, and marketers have ProFound. And I think that's truer than ever in that, yeah, with this launch of agents, it really takes ProFound towards being, like, a full-stack, you know, holistic platform for the modern-day marketer, allowing them to not just understand how they show up in AI platforms like ChatGPT, Gemini, the rest of them, but also build agents that can help them do more with less. So, yeah, I think this is cool. Yeah. We came out of the gates eighteen months ago with ProFound here in New York.

Speaker 3:

And what we saw was our customers quite often were taking our data and insights then going to orchestration and automation tools to do cool things with it. So we've just brought that all in house now. And, yeah, it's really cool. You can do everything in one platform.

Speaker 2:

How do you think the other platforms, the LLMs, are evolving? Some of them are launching ads. Some aren't. All products and services are being discovered in all of them. Why is it important to have a platform like ProFound?

Speaker 2:

We were talking about this off air this morning. It feels like people have been joking around about Manus, for example, on the Meta platform, because Manus is, like, an agent that wants to help you, but at the same time, what helps Manus is you spending more money. Right? So it feels like a third-party agent problem. Yeah.

Speaker 3:

Manus is like, I've got a great idea.

Speaker 2:

Potentially, but how how are you thinking about the interaction between ProFound and and the different platforms?

Speaker 3:

Yeah. I mean, I think, you know, our prediction of the future is that in the future, every company on the planet will care deeply about how AI talks about their brand or products or services. That's kind of a north star that we we hang our hat on. And I think compared to, you know, search in the early two thousands or even for the last twenty five years, we it's looking like this will be a much more fragmented sort of market. I think we're gonna see multiple players coming through.

Speaker 3:

So I think ProFound really sits adjacent to the the models or the labs, and we help marketing teams understand how they show up in these platforms. You know, when AI responds, what does it say about your brand? What does it say about your services? And now we help you build customized agents that can actually, you know, do the work with you with a marketer in the loop. So, yeah, we've had hundreds of teams.

Speaker 3:

You know, we work with I mean, I guess a big thing that I'd say we're announcing since we last spoke, our series b, is that we now work with 10% of the Fortune five hundred, which is a pretty cool stat.

Speaker 11:

Let's take our jobs. What

Speaker 2:

did the other 90% do?

Speaker 1:

That's fantastic news.

Speaker 3:

Yeah. Radio game. Yeah. We got 90% to go. Job's not done.

Speaker 3:

Yeah. But I think, yeah, we we work yeah. It's it's it's a it's a very cool start.

Speaker 8:

Yeah.

Speaker 3:

And, you know, what we're seeing more and more is that, you know, every brand is different. Every marketing team is different. Everyone has different initiatives. Everyone has different preferences. Marketing is is more human than ever in a lot of ways.

Speaker 3:

And I think our approach of helping marketing teams build entirely customized agents that can take out the rote labor from their work is is just saving these teams inordinate amounts of time and energy. And it's it's very cool to see it work. Yeah. It's exciting.

Speaker 1:

Can you walk me through the anatomy of correcting a mistake that exists across LLMs or even in a particular LLM? My nightmare is, you know, you go to you go to ChatGPT and ask how tall is John Coogan, and it says six five six six. This would this would destroy me. It must be six eight. Let's bake that into the pre training data, six eight six eight.

Speaker 1:

But but, seriously, other than just, like, doing a bunch of SEO to correct the record, like, how how can a company if there's truly, like, a consistent hallucination, something that's just incorrect for some reason, what is the process to actually change results?

Speaker 3:

For sure. I mean, well, the first step, which sounds kind of stupid, is just knowing Yeah. why it's happening. Right? So when, you know, let's say an answer engine spits out an answer Mhmm.

Speaker 3:

You know, a good chunk of the time it's getting that answer from somewhere. And, you know, being able to identify where. What we found I'll give you an anecdotal example. I won't be able to name the brand. It was a neobank where the models were incorrectly spitting out that there was no FDIC insurance.

Speaker 3:

And we identified it was coming from a few places. It was, like, a third-party blog, I think a couple of Reddit posts, and maybe, like, a YouTube video or something. So then once you know where it's happening, it's kind of I wouldn't say it's easy, but it's, you know, just kind of Marketing 101. Okay. Cool.

Speaker 3:

Let's reach out to the blog and tell them that that's factually incorrect. Let's comment on the Reddit post and say, hey, this is actually not true, we are FDIC insured. Let's produce a YouTube video that speaks to the same thing but, you know, mentions heavily that we have FDIC insurance.

Speaker 3:

And lo and behold, that gets pulled through into the model. So

Speaker 2:

So that would be something that your agent would do, like, automatically, and you could just set set them off and do that?

Speaker 3:

Correct. Yeah. You could set up an agent that monitors for any misinformation based on a knowledge base of, like, ground truth, and then say, okay, cool, when we see any misinformation, let's generate an email that reads in our tone of voice and sends to this third-party blog and says, hey.

Speaker 3:

Can you correct this, for example? Mhmm. So, yeah. But it has to be customized because, you know, you'd never have that as an out-of-the-box solution. Right?

Speaker 3:

Everything's you know, it's almost like being able to build one-for-one software. Yeah. This is this new paradigm of agentic software, which is so, so cool. And, obviously, I'm not the only one that's excited about that.
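The loop James describes (a ground-truth knowledge base, detecting contradicting answers, and queuing one outreach task per upstream source) could be sketched roughly like this. Every function and field name here is hypothetical, invented for illustration, and is not ProFound's actual API:

```python
# Sketch of a misinformation-monitoring agent: compare a model's answer
# against a ground-truth knowledge base and queue one outreach task per
# upstream source cited for the bad claim.

GROUND_TRUTH = {"fdic_insured": True}  # hypothetical knowledge base entry

def check_answer(answer_text, sources):
    """Return outreach tasks for claims that contradict ground truth."""
    tasks = []
    if GROUND_TRUTH["fdic_insured"] and "no FDIC insurance" in answer_text:
        for src in sources:
            tasks.append({
                "claim": "no FDIC insurance",
                "correction": "We are FDIC insured.",
                "source": src,  # where the correction email/comment goes
            })
    return tasks

tasks = check_answer(
    "This neobank has no FDIC insurance.",
    sources=["third-party blog", "reddit thread", "youtube video"],
)
print(len(tasks))  # 3
```

The shape mirrors the anecdote above: one bad claim traced to three sources yields three correction tasks, each of which a drafting agent or human marketer could then act on.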

Speaker 1:

Yeah. I mean, you're at a very interesting vantage point in the industry because you work with so many Fortune 500 companies. What are your expectations for agentic commerce this year? We were just talking to the Collisons. It feels like it's on the precipice. Everyone we've talked to is extremely bullish, but I'm always interested to hear, like, the shape of the bullishness.

Speaker 1:

What do you think needs to happen to actually get people shopping agentically this year?

Speaker 3:

I mean, I think so much of that inflection is gonna come from the models themselves or the consumer products. Mhmm. So, you know, ChatGPT and Gemini's ability to actually offer a fantastic user experience. I think that that will be the kind of you know, that that's the most important thing. I think from the other side of the fence, working with these brands, these marketing teams, they're look.

Speaker 3:

That is a hot take. They're not I really hope this doesn't sound disrespectful at all, but they're not as slow as you'd imagine. Like, we work with giant brands. Sure. Fortune 500, you know, some Fortune 10, and they're ferociously fast.

Speaker 3:

They understand the magnitude of this platform shift, and these are very sophisticated teams. A lot of them are SEO teams, who I actually think are fantastically well suited to attack this problem space, because they're, like, kind of technical, and they understand the sort of primitives of marketing and content, etcetera. They're quite cross-functional. But, yeah, I think a mistake would be to think that the enterprise is super slow. I don't think that's true.

Speaker 6:

And I'm

Speaker 3:

not just saying that. There's no spin to it. I actually believe that.

Speaker 2:

Yeah. No. I asked ChatGPT what's the best GEO tool for startups. It says ProFound.

Speaker 1:

Oh, yeah.

Speaker 2:

What it does Good god. It tracks how your brand appears inside LLMs. It's best for teams serious about AI distribution.

Speaker 1:

Dog food. I mean, we put a lot of ProFound in the pre-training data last year. Was a great partner.

Speaker 3:

Did it mention agents, though? That's the question. Right? We only launched today.

Speaker 2:

Do you want to ask it?

Speaker 3:

What did they launch today? See if Oh,

Speaker 1:

there you go. While he does that, tell me what you think of the word GEO. Is that too buzzwordy? Do you like that term? What are the pros and cons of having a term applied to your industry, your nascent business plan?

Speaker 3:

I think GEO sucks as an acronym. It's just bad in so many ways. I think, you know, it can't be claimed, because of geography it's already taken. Mhmm.

Speaker 2:

Oh, yeah.

Speaker 3:

It stands for, yeah, Generative Engine Optimization, which people don't refer to these products as. Have you ever heard anyone refer to ChatGPT as a generative engine?

Speaker 1:

They don't. Yeah. You're right. Google is a search engine for sure. Bing is a search engine, but no one calls

Speaker 2:

What about Gemini? What's your preferred acronym?

Speaker 3:

I mean, said sort of with not much passion, Answer Engine Optimization feels more fitting to me. I think this is still all to be determined. I think how your brand is spoken about by AI Mhmm. will become the most important primitive in marketing. So I think it is going to become bigger than just, you know, something that you put a label on like that.

Speaker 3:

I think we'll see a sort of new type of marketer forming over time, which is interesting: the marketing engineer, you know, a marketer who has the technical chops to be able to go in and build agents, customize agents, deploy agents, you know, cross-functionally, across teams. And, yeah, I think that's very interesting. We announced ProFound University today. So does this work if I press that?

Speaker 3:

Can you can see it?

Speaker 1:

There we go. That's a leap.

Speaker 2:

That's a leap. Oh,

Speaker 1:

wow. Looks amazing.

Speaker 2:

Glasses and sashes. Series c.

Speaker 1:

You're oh, yeah. Yeah. How much did you guys raise again?

Speaker 3:

Yeah. So we announced ProFound University

Speaker 1:

Cool.

Speaker 3:

Today. Very cool. Which is yeah, it's actually really awesome. It's a series of certifications, training cohorts, and learning materials that essentially enables this new era of the marketing engineer.

Speaker 1:

Sure.

Speaker 3:

And, yeah, we're we're we're very excited about that. So, yeah, I think that we're gonna see a lot changing in the world of marketing as I guess is true in most industries.

Speaker 2:

The chat is saying that AEO is a good one. AEO.

Speaker 1:

AEO. AEO. Sheesh. Well, thank you so much for coming on the show. Always good to have you, James.

Speaker 1:

Congratulations.

Speaker 2:

It's great to see.

Speaker 1:

And we'll talk to you soon.

Speaker 2:

Thanks, James.

Speaker 1:

Have a good rest of your day. Let me tell you about Cognition. They are the makers of Devin, the AI software engineer. Crush your backlog with your personal AI engineering team. And we have Scott Wu from Cognition in the waiting room.

Speaker 1:

I wanna talk about the launch today. I wanna talk about AI progress. I wanna talk about math and your predictions on the IMO gold medal and everything that's happening there. But let's start with the the the general update on Cognition. What's the shape of the business today?

Speaker 1:

And then I wanna hear about the latest launch.

Speaker 13:

Awesome. Yeah. What's up, guys? How's it going? Great to Yeah.

Speaker 13:

Great to see you.

Speaker 2:

It's a

Speaker 13:

little bit, I feel like.

Speaker 1:

Yeah. Too long. Too long.

Speaker 2:

You guys have been cooking.

Speaker 13:

Yeah. Yeah. Every every month feels like feels like a decade now in AI. So Yeah. Cool.

Speaker 13:

No. So so things have been great. I mean, business has grown a lot. You know, we shared some of our metrics today, one of which is that our our total enterprise usage has actually more than doubled in the last six weeks even. Woah.

Speaker 13:

Six weeks. Lot of that has just been been mass take off of agents. You know? I I think the high level that we that we're really seeing is that as agents get more capable and you can trust them to do end to end tasks Mhmm. What you really need is the the full background cloud agent.

Speaker 13:

Right? And so that means, you know, being able to run your your repos and everything locally, being able to test, being able to spin things up from, you know, Slack or Linear or GitHub or Jira or whatever it is and just being able to have this mass parallel, async workload.

Speaker 1:

Okay. And then and then the announcement today?

Speaker 13:

Yeah. Yeah. No. The announcement today was a was a fun one for us. It was, you know, very near and dear to my heart.

Speaker 13:

But but a lot of it, honestly, if if I were really to just describe it in one line, it's just clearing through all the frictions that that we've known about and and just making it a really great experience. And so, you know, one of the big highlights is is automated testing and having Devin run your web app for you and send you the changes and send you screen caps of all of those things. But but there's tons of little things that that that really affect the experience. And so, you know, making the the VM startup time way faster, making the Slack integrations way smoother, you know, showing you all of the intermediate progress of the messages, and so on. And so it's been I mean, it's it's changed our internal usage a lot.

Speaker 13:

And so that's why we're pretty excited to get this one out.

Speaker 1:

How so it seems like there's speed to be squeezed out from VM spin-up time optimizations. We're also seeing some incredible progress on custom silicon. We had the founder of Taalas on, generating 16,000 tokens a second. That seems like it will be really impactful when it rolls out to the broader code generation and software engineering world. It's still pretty early with that company, Llama 3 8B at this point.

Speaker 1:

But where else are you seeing opportunities for speed? How do you think about the importance of speed for what you do?

Speaker 13:

Yeah. No. There there's a ton that you can do. And at some point, a lot of it actually is just good old software engineering. Really?

Speaker 13:

And so, you know, it's, of course, like, you know, the models, obviously, you can improve the tokens per second. You can improve the TTFT. I think those improvements will be great, and we've already seen a lot of those over the last bit. We'll see many more. But at some point, you know, your agent has to go run npm install.

Speaker 13:

Your agent has to go run uv install. Your agent has to go grep for things. Right? Yeah. It has to go pull up the front end itself.

Speaker 13:

A lot of that stuff is good old product building and software engineering, to make that better and more efficient. And so in a lot of these, obviously, you can do algorithmic tricks. You could put in, you know, indices and indexes and make those faster. You can do little things to kind of, like, cheat the loading time and do things in parallel and do things async. But a lot of it is just building the systems around the agent to make it really fast.
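Scott's "do things in parallel and do things async" point can be sketched concretely: independent environment-setup steps run concurrently take as long as the slowest step, not the sum. The step names and timings below are stand-ins, not Devin's actual pipeline:

```python
import asyncio
import time

async def run_step(name, seconds):
    # Stand-in for shelling out to `npm install`, `uv sync`, grep, etc.
    await asyncio.sleep(seconds)
    return name

async def setup_environment():
    # Independent steps launched together: wall time is roughly the max
    # of the step durations, not their sum.
    return await asyncio.gather(
        run_step("npm install", 0.3),
        run_step("uv sync", 0.2),
        run_step("warm grep index", 0.1),
    )

start = time.perf_counter()
done = asyncio.run(setup_environment())
elapsed = time.perf_counter() - start
print(done)
print(elapsed < 0.5)  # True: the steps overlapped instead of summing to 0.6s
```

Serially these three steps would take about 0.6 seconds; overlapped, the whole setup finishes in roughly the 0.3 seconds of the slowest step.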

Speaker 1:

No. It's a really good point. I mean, anyone who's installed OpenClaw has experienced, like, oh, wait, I'm actually just waiting to download software because it's pulling a whole bunch of stuff together. And it's not actually doing that much waiting on the LLM, at least in the setup phase.

Speaker 1:

But you still have to actually get this thing configured, and I think a lot of people in tech went through that. How have you been processing lessons from OpenClaw interaction patterns that you think are interesting? What does it mean that society more broadly is just aware of AI agents, which I feel like is a term that you basically coined years ago and have been running with, but in a specific, like, enterprise context. And now I'm at a bar, and I'll hear somebody talking about AI agents, and it's because of OpenClaw. And I feel like, oh, I remember the Scott Wu launch video where he explained that this was gonna happen.

Speaker 1:

But but how have you been processing OpenClaw? What is interesting about that? Are are there any, like, lessons from that open source community, that project that paradigm that you wanna bring to Devon?

Speaker 9:

Yeah. No. I mean, a lot

Speaker 13:

of big changes, and I think, by the way, OpenClaw gets a lot of credit for being, for many people, the first time that people really saw

Speaker 1:

Yeah.

Speaker 13:

What what a full, you know, agent would look like with access to your files, access to your computer, and so on. I I think we're really getting to the point, you know, to to your previous point where I I think we're we're really starting to switch over from the early adopter cycle to the the kind of mass market cycle is my sense. And and and the concrete impact of that is a lot more people are starting to hear about and really think about AI agents. Right? And I think it used to be I mean, for us, for example, a year and a half ago, you used to go into the into the room and explain to people what an AI agent was and why this wasn't you know, why why this was different from from just, like, normal auto complete or ChatGPT or something like that.

Speaker 13:

Now everybody's thinking about this stuff. Everyone wants to use it. And I think one of the the kind of implications of that is just accessibility and getting people to value as soon as possible is is one of the most powerful things that you can have in your own products as a result.

Speaker 2:

You guys have had a ton of success in enterprise. The chat wants us to ask for your take on the SaaSpocalypse, I imagine. Yeah. Some of the conversations that you're having with, let's say, the CTO of a massive company: are they thinking about using Devin for things like big database migrations?

Speaker 2:

Like, how are they thinking about how agents can impact their dependency on sort of these legacy tools and systems of record?

Speaker 13:

Yeah. I mean, there's the whole security report and everything. I mean, it is honestly ridiculous. That's my two cents on it. I think, look.

Speaker 13:

At at a high level, of course, yeah, AI is gonna change a lot of stuff. I I don't really understand how you go from that to saying that there's going to you know? Like, take software as a good example. Software is one of the most deflationary things ever. You know?

Speaker 13:

A lot of the same products that, you know, used to cost much more ten, twenty years ago have gotten much, much cheaper over time. Right? Has this been terrible for software companies? You know, I mean, it seems like it's been pretty good. All the big companies in the world are still software companies.

Speaker 13:

Right? Yeah. And so I think it's one thing when prices go down because the demand's just not there anymore, and obviously you can get into weird cycles and all that can happen. But it's a totally different thing if prices go down because we've just gotten way better at supplying things.

Speaker 13:

And that's when you get Jevons paradox, and that's when you get mass consumer surplus and so on. So at a high level, all the customers that we work with, banks and health insurers and private equity and so on, they obviously have a lot of these database migration and modernization projects that they can go and take on immediately. But the very next thing that they say is, okay, how do I pull the rest of my roadmap forward? Right?

Speaker 13:

How do I build even more and get more out to people? And I think the reality is we all just have so much more software to build.
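
The distinction drawn above, prices falling because supply got better rather than because demand vanished, is the standard Jevons setup. A toy constant-elasticity demand model illustrates it; every number here is invented for illustration.

```python
# Jevons-paradox sketch: when the unit cost of software falls because supply
# improved, total spend can RISE if demand is elastic. Numbers are invented.

def total_spend(unit_cost: float, elasticity: float,
                base_cost: float = 1.0, base_demand: float = 100.0) -> float:
    """Constant-elasticity demand: quantity scales as (cost ratio) ** -elasticity."""
    demand = base_demand * (unit_cost / base_cost) ** -elasticity
    return unit_cost * demand

# Unit cost drops 10x in both scenarios.
# Inelastic demand (0.5): total spend shrinks. Elastic demand (1.5): it grows.
for e in (0.5, 1.5):
    print(f"elasticity={e}: spend {total_spend(1.0, e):.0f} -> {total_spend(0.1, e):.1f}")
```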

Speaker 1:

Yeah. How are you thinking about AI progress broadly? It feels like a lot of people are feeling that recursive development is on the horizon. People are bringing up takeoff speeds again, migrating from slow to maybe fast. It was backing off.

Speaker 1:

Now it's a little quicker. How are you trying to, like, zoom out, reset, get to reality, figure out how fast things are actually moving?

Speaker 13:

Yeah. No. I mean, the METR report shows the consistent doublings and everything. I think things are continuing on the exponential curve. I wouldn't say that they're going either superexponential or subexponential.
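
A toy compounding calculation makes "consistent doublings" concrete. The 60-minute starting horizon and seven-month doubling period are rough assumptions loosely inspired by the METR trend line, not exact figures.

```python
# Staying "on the exponential" still compounds fast: if the task horizon a
# model can handle doubles every ~7 months (an assumed period), four doublings
# turn an hour-long task into a multi-shift task.

def horizon(months: float, start_minutes: float = 60.0,
            doubling_months: float = 7.0) -> float:
    """Task horizon after `months` of steady exponential progress."""
    return start_minutes * 2 ** (months / doubling_months)

for m in (0, 7, 14, 28):
    print(f"after {m:2d} months: ~{horizon(m):.0f}-minute tasks")
```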

Speaker 13:

I think they're roughly going on that exponential curve, but an exponential curve is a lot. That's very fast growth, obviously. For us, one of the things that's been pretty interesting is just noticing each of the step function changes that happen. And for us, for example, it's definitely been in the last, I'll call it, four or five months

Speaker 13:

where something interesting happened, which is we stopped typing code. You know? At some point, you just don't. Right? Before, obviously, you had all the tools and the combination of things and so on. Now there are different experiences.

Speaker 13:

There are different tools that you wanna have, obviously, between the IDE and the CLI and the web agent and so on. But either way, you're really just working in prompts. The code that we check into GitHub, how much of it was typed by a human at this point? I think almost none.

Speaker 6:

Yeah.

Speaker 13:

And maybe one of the things I would call out is that as you expose each new thing, if you think of it as, like, a profiler on your own software engineering workflow: what is the most expensive part? You shrink that down. You get to the next thing. You shrink that down, and you just make the whole cycle more effective. We're at the point where a lot of these other things, like understanding the code base and review and so on, are the actual bottlenecks.

Speaker 13:

Right? Testing is another big one. And I think what we're gonna see over the next little bit is you're basically going to have to solve each of those with really good product experiences, really good model capabilities, and so on. So maybe the one thing I would say is: I think the exponential curve continues. I would just call out that the form factor looks very different as you continue on that curve, because you're actually solving different problems.

Speaker 13:

Like, yes, I think we will continue to get the doublings and the doublings. But now it looks a lot more like how do we optimize testing and review and planning, not how do we make the AI good at writing code from the prompt that you give it. Because at this point, frankly, that's basically already done.
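
The "profiler on your workflow" framing above can be sketched directly: speed up the most expensive stage and the next stage becomes the bottleneck, which is why the work shifts from code-writing to review, testing, and planning. All stage times below are invented.

```python
# Workflow-as-profiler sketch: apply a big speedup to the most expensive
# stage, then see what the new bottleneck is. Stage times are invented.

def optimize(stages: dict[str, float], speedup: float = 10.0) -> dict[str, float]:
    """Return a copy of `stages` with the single most expensive stage sped up."""
    worst = max(stages, key=stages.get)
    return {k: (v / speedup if k == worst else v) for k, v in stages.items()}

cycle = {"write code": 100.0, "review": 30.0, "testing": 25.0, "understand codebase": 20.0}
print(sum(cycle.values()))          # total cycle time before: 175.0
cycle = optimize(cycle)             # writing code gets 10x faster...
print(sum(cycle.values()))          # 85.0, and review is now the bottleneck
print(max(cycle, key=cycle.get))    # review
```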

Speaker 1:

Yeah. I have a bunch more questions. I'll be quick. I have two. First, what does the future of Windsurf look like in a world where you're not writing code?

Speaker 1:

Does that become a Kindle? I mean, that's a joke, but does it become more important for that product to be the best way to read code? Because even if you're not writing code, you still... I've done terminal prompts, and then I wind up opening the files to take a peek at them. And I'm like, oh, there's maybe room for innovation there in how your code reading skill improves as your code writing skill degrades.

Speaker 1:

But how do you think about the future of Windsurf?

Speaker 13:

Yeah. For sure. I think the high level here is it's gonna be a gradual thing, but over the next one or two years, we'll have a pretty broad transition toward what you might call having English as the source of truth. Sure. And so I think we'll go from code to English in the same way that we went from assembly to code.

Speaker 13:

Yeah. Right? And so one of those steps has been mostly done at this point, which is figuring out how you prompt in English and then have the agent produce the code. But if you think about it, you're still reviewing code. You're still checking code into GitHub.

Speaker 13:

You're still reading the code to understand what's going on. And I think at some point, you actually want an interface that looks a lot more like a spec or a map or

Speaker 1:

Sure.

Speaker 13:

You know, like a design doc. Right? And that's the thing that you're iterating on. That's the thing that you're reviewing, for example. Right?

Speaker 13:

I think at some point... I mean, people say, oh, review is gonna go away because AI is gonna catch all the bugs. I think that's actually not right, because what you're gonna be reviewing is the decisions. Right? It's like: here's what the product is doing in this case, here's what the product is doing in that case, here's how this plan works, or whatever it is. Right?

Speaker 13:

And so what you'll wanna have is a very clean interface to interact with your product and with your own specs and so on. And that's a lot of what we think Windsurf evolves into over time. Right?

Speaker 1:

Mhmm.

Speaker 13:

And so, again, I think it's a very gradual thing. I think there's a lot of value in reading code now. And certainly at Cognition, we still do a lot of reading the code, even if we're not the ones writing the next line of code, because instead we write the English prompt for that. But I think what happens with Windsurf is, at some point, instead of looking at each of the files, you start looking more at the specs and the high level logical design of what you're building. You start looking at the diagrams of your app or your website itself, and you're able to go and manipulate those.

Speaker 13:

And you're really just managing your agents that you kick off from there.

Speaker 1:

Okay. We blew past the IOI gold medal, as you predicted, correctly. Amazing. I have a follow-up question about that, but it's sort of in three parts. One is: what is the next math or physics based benchmark that you're excited about AI potentially unlocking?

Speaker 1:

When do you think that might happen? And then do you think there will be any tangible impacts of that? Because if I walk down the street and tell some random person, "They did it. Navier-Stokes is solved." I think that's the math problem that everyone talks about a lot.

Speaker 1:

I don't even know. I think most people would be like, great. Is that gonna help me with my job? They're more excited about just knowledge retrieval right now. Yeah.

Speaker 1:

So, yeah: the next hurdle, the timeline, and then the impact.

Speaker 13:

Yeah. Yeah. For sure. So, I mean, I would say we actually crossed a pretty exciting hurdle just recently. Mhmm.

Speaker 13:

Like, Lupsasca and some of the folks at OpenAI had a pretty important breakthrough in physics, where they used language models to figure out a lot of the key lemmas and theorems for it. Mhmm. And so I would have said the next big breakthrough is getting to a point where actual science and actual discovery is happening largely powered by AI. And I think we're effectively getting into that. I think we'll see much more of that this year.

Speaker 13:

I think, to your point on impact, yeah, it'll be some time until the average person feels the impact of us proving new theorems. But Yeah. Obviously, the long term of all of this is extremely powerful. Right? I mean, we're gonna be discovering new medicines.

Speaker 13:

We're going to be unlocking big breakthroughs in biology, materials science, nutrition, and so on. And all of this comes from a lot of the same science. I very much think of it as a Yeah. As, you can call it, a proof of concept Yeah. Or an existence proof that it

Speaker 1:

is

Speaker 13:

possible. And solving some of these very difficult novel math and physics and algorithms problems will itself continue to be valuable in lots of ways over time. But even more than that, it's obviously just an existence proof that AI can do some pretty incredible things.

Speaker 1:

I love it. Oh, yeah. A lot of people get abstract with the medicine science. I like the materials science one, because I can imagine a much stronger, much cheaper, much lighter carbon fiber. Driving a car that's pure carbon fiber for the same price as a Model 3, that's pretty attractive. That's pretty tangible.

Speaker 1:

I think the average American consumer is gonna get behind

Speaker 2:

I'm excited about that.

Speaker 1:

Get extremely

Speaker 13:

excited about the part where, you know, you have the best pizza that you've ever tasted, except it's also the most nutritious thing for you, because we've just solved taste and nutrition and everything. And I feel like AI will get us there, but there are probably a few more steps in the middle.

Speaker 1:

That's that's the new AGI benchmark.

Speaker 2:

Get the goalpost. Get the goalpost. I'm moving the goalposts.

Speaker 1:

AGI will be here when I can have a pizza that tastes amazing and is also fully nutritious. Thank you, Scott Wu. Have a great day.

Speaker 3:

Cool. Great to see you guys.

Speaker 1:

Always fun to move the goalposts with you. We'll talk to you soon. Goodbye. Okay.

Speaker 3:

It's an honor

Speaker 6:

to the post.

Speaker 1:

MongoDB. What's the only thing faster than the AI market? Your business on MongoDB. Don't just build AI. Own the data platform that powers it.

Speaker 1:

And without further ado, we will begin our Lambda lightning round with Rune. Look at this new effect. See. Oh, yeah. We're getting new effects going.

Speaker 1:

Welcome to the show. Oh, look at this.

Speaker 2:

What's happening?

Speaker 1:

That is a beautiful

Speaker 5:

Thank you.

Speaker 1:

Lighting setup. Thank you for joining. First time on the show, please introduce yourself and the company.

Speaker 5:

Yeah. Good to be here. I'm Rune Kvist. I'm cofounder and CEO of the Artificial Intelligence Underwriting Company.

Speaker 1:

Okay.

Speaker 5:

Our mission is to underwrite superintelligence, and we do that by building standards and insurance products for AI agents.

Speaker 2:

Okay. Sounds extremely straightforward and simple.

Speaker 1:

Yeah. Plenty of data to build this on. I mean, yeah. How do

Speaker 2:

you even think about it? The big thing: Derek Thompson was kind of summing up the whole discourse around Citrini, and his takeaway was that everyone can agree that no one knows what's gonna happen. Mhmm. So a very difficult environment to be creating insurance products for.

Speaker 2:

But I'm sure you're narrowing it down to some key initial use cases. So maybe you can talk about where this starts.

Speaker 1:

Yeah.

Speaker 5:

Maybe the first thing to say is that regardless of whether anyone buys an insurance product, someone is always underwriting it.

Speaker 1:

Mhmm.

Speaker 5:

So otherwise, it's just gonna be, say, the head of risk at JPMorgan who has to make a go/no-go decision. Mhmm. He also sits with the same problem: is this gonna work, or is it not gonna work? So the place we start is just: what are the risks that are slowing down adoption today?

Speaker 5:

Mhmm. And can an independent third party with skin in the game, and visibility across a bunch of companies, underwrite that better than any particular head of risk or chief security officer might be able to? Mhmm. And like any other risk, when there's no data, there's an initial R&D phase where we don't expect all of these policies to work out well. We expect to lose some money, and in the process start to collect the data that allows us to underwrite this more precisely than anyone else.
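
The underwriting logic described here, price the expected loss and accept early losses while the data accumulates, reduces to a very small sketch. The claim frequency, severity, and loading factor below are all invented numbers.

```python
# Minimal premium sketch: expected loss plus a loading factor. With no loss
# history, frequency and severity are guesses, which is why early policies
# in a new risk class can lose money. All numbers are invented.

def premium(claim_frequency: float, avg_severity: float,
            loading: float = 0.25) -> float:
    """Premium = expected loss, grossed up by a loading for expenses/margin."""
    expected_loss = claim_frequency * avg_severity
    return expected_loss * (1 + loading)

# Guess: 5% of insured agents cause one $100k incident per year.
print(premium(claim_frequency=0.05, avg_severity=100_000))  # 6250.0
```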

Speaker 1:

Yeah. Walk us through some example insurance policies, because everyone who's followed the AI story and the AI race has seen a million different varieties of impairment: the training run didn't work, or the data center was delayed and that had a financial impact, down to we got sued because of our training data, or someone used our app and didn't like it. There are a million different ways you can have smaller or even large settlements or lawsuits. So how do you think about segmenting the market, finding a landing zone, a beachhead?

Speaker 5:

Yeah. Totally. So you start from: what are the very real concerns slowing down adoption today?

Speaker 1:

Mhmm.

Speaker 5:

Let's take one. We just announced the world's first insurance policy for an AI agent last week with ElevenLabs. They are trying to be on the frontier. Thank you. They're pioneers of security and safety.

Speaker 5:

They're trying to be on the frontier of giving assurances. The things that hold up adoption for them are things like hallucinations that lead to financial losses, as in the Air Canada example, where a hallucination led to financial damage. Mhmm. Data leakage continues to happen.

Speaker 5:

You see it on a weekly basis; OpenClaw is the latest example of that. You don't want your agents to give medical advice.

Speaker 1:

Sure.

Speaker 5:

And so those are some of the kinds of things that are covered. So mostly at the application layer today. Then, as insurer appetite grows (eventually, our mission is to underwrite superintelligence), we think some of the risks that look a little more like private nuclear energy will also have to be covered by insurance, because these risks cannot sit with no one. There was a grand compromise in the 1950s that allowed us to do private nuclear energy in America, the Price-Anderson Act: essentially the government saying, hey, we really want some private nuclear energy.

Speaker 5:

That'd be awesome. But, also, no particular private company can carry the risk if something truly goes wrong. Mhmm. So we're gonna require an insurance scheme. That's gonna be our way of putting the market to work to manage this in a way that's pro business, pro getting this adopted.

Speaker 2:

Mhmm. And the government has always effectively been the insurer of last resort in some ways. Right?

Speaker 5:

Whether it's formal or not, the government is always the insurer of last resort. Take COVID: who's on the hook for that? Well, ultimately, the government has to step in. So the question is, can you formalize that a little more and say, at what limits of liability is the government on the hook?

Speaker 5:

And up until those limits, who's on the hook?

Speaker 1:

Got it. Okay. So walk us through the chain of how the insurance actually works. I understand ElevenLabs comes to you. And then are you drafting a policy with a specific risk profile and premiums, and then going out to the JPMorgans of the world and having them buy that, or invest in that?

Speaker 1:

Does this float? Is this tradable? Can a retail investor get allocation? How does that work on the on the long tail of the financialization?

Speaker 5:

Yeah. Eventually, this will end up on Robinhood, but let me walk you through how it looks today. Yeah. So today, there are two steps, high level. First is certifying against the standard Mhmm.

Speaker 5:

as a way to unlock insurance. So historically, the way every market has been unlocked is that the insurers need to know that the risk is well managed. Mhmm. The head of risk at JPMorgan doesn't want just financial coverage. He wants to make sure that there's no incident that gets him fired in the first place.

Speaker 5:

Yep. And so we've developed a standard. It looks a little bit like a Moody's framework or a SOC 2. Mhmm. So that is all open source and public.

Speaker 5:

It's 50 requirements that any frontier AI company must meet to meet the standard. And as part of that, we run a bunch of technical tests, basically crash testing, or red teaming as you might call it here, which gives us a score. We give them a pass/fail and a certificate, and then the score feeds into a policy that we've designed with some of the leading insurers, where a company like ElevenLabs gets to specify: hey, what are the top three, four, five risks that hold up adoption?

Speaker 8:

Yep.

Speaker 5:

They buy a policy for that. And today, that risk is held by traditional insurance companies. Again, this is actually all about trust. So you really want the old insurers to have it on their balance sheet. They always pay.

Speaker 5:

Over time, as we move into these kind of, like, Chernobyl types of risk, we will run out of private capacity. We will have to, at some point, create catastrophe bonds. Those will be traded on the public market, probably not on Robinhood, but by more sophisticated investors. That is the ultimate way to build enough market capacity to cover the tail risk.
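
The certify-then-price flow described above (a fixed requirement list, red-team testing that yields a score, the score feeding the policy) could be sketched as follows. The threshold, field names, and linear pricing rule are hypothetical, not AIUC's actual spec.

```python
# Hypothetical certify-then-price sketch: pass/fail against 50 requirements,
# plus a red-team score that adjusts the policy's rate. Rules are invented.
from dataclasses import dataclass

@dataclass
class Audit:
    requirements_met: int    # out of 50
    red_team_score: float    # 0.0 (worst) .. 1.0 (best)

def certify(audit: Audit, total_requirements: int = 50) -> bool:
    """The certificate requires meeting every requirement in the standard."""
    return audit.requirements_met == total_requirements

def rate_multiplier(audit: Audit) -> float:
    """Better red-team score, cheaper policy (illustrative linear rule)."""
    return 2.0 - audit.red_team_score    # 1.0x at a perfect score, 2.0x at zero

a = Audit(requirements_met=50, red_team_score=0.8)
print(certify(a), rate_multiplier(a))    # True 1.2
```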

Speaker 1:

Yeah. Makes sense. Very, very fascinating. What does the business look like today? This feels like high stakes work, but is it capital intensive?

Speaker 1:

Do you need a thousand insurance agents at some point? Like, what's the team like? What's the fundraising like? What's the business like?

Speaker 5:

Yeah. Totally. So if you're really thinking about this long term, the way to unlock the insurance market is to get the standard universally adopted.

Speaker 8:

Mhmm.

Speaker 5:

And that is kind of what allows everyone to say, hey, this risk is well managed. We can now start to price it.

Speaker 1:

Mhmm.

Speaker 5:

And so, for the standard, we have about 100 security leaders from the Fortune 1000 who meet with us every six weeks to give input into the standard, representing their interests. And now you have some of the leading AI companies, like ElevenLabs, Intercom, UiPath, more to be announced soon, that have put themselves forward to say: hey, we're pioneers, and we'd like to have an independent audit to prove that. That's step one. So that's what most of our work is focused on today.

Speaker 5:

Then, on the insurance side, the way to start is to partner with existing insurers that bring that trust and credibility. They're frankly so old school, and that's what brings trust here. They don't take risks they can't calculate. That's the whole point.

Speaker 2:

And it only works that way. It doesn't really work if it's like, okay, who's actually backing this policy? And then it's like, oh, a company created a few months ago. Yeah. Exactly.

Speaker 2:

You're like, don't worry.

Speaker 5:

I got it. So it's actually quite capital-light to get started. We raised $15,000,000 from Nat Friedman last year.

Speaker 2:

And Almost ran into the goalpost.

Speaker 5:

You go.

Speaker 2:

Move them again.

Speaker 1:

Well, thank you so much for stopping by the show and giving us the update.

Speaker 2:

A lot more questions. As there are new kinds of crises around agents, feel free to pop back on. Yeah.

Speaker 1:

We'd love to.

Speaker 2:

Talk about it.

Speaker 1:

Amazing. We'll talk to you soon.

Speaker 13:

Meet Arun.

Speaker 1:

Good to meet you. Let me tell you about CrowdStrike. Your business is AI; their business is securing it. CrowdStrike secures AI and stops breaches. And without further ado, we have Reiner Pope from MatX in the Restream waiting room. Welcome to the show.

Speaker 1:

How are you doing?

Speaker 2:

What's going on?

Speaker 4:

Doing great. Very happy to be here.

Speaker 1:

Thanks so much for hopping on. It's your first appearance. We'd love an introduction on yourself and the company to kick it off.

Speaker 4:

Yeah. Happy to be here. I'm Reiner. I'm CEO and one of the founders of MatX. We are a company that makes the best chips physically possible for large language models.

Speaker 1:

Okay.

Speaker 4:

So we've been doing this for about two or three years. Before that, I was at Google for about a decade working on large language models; I worked on the TPUs for a bit, and on some other hardware projects. And really, as part of that, what we saw was that if you really want to make the best chips for LLMs, and LLMs were this big up and coming workload back in '22, the best way to do it is a from-scratch, blank slate design. So: designed for large matrices, very low precision, very low latency. And so my cofounder Mike Gunter and I decided at that point in '22 to leave Google to start MatX, where we're doing exactly that.

Speaker 4:

Mhmm. Today we're announcing MatX one. This is a new chip which simultaneously offers better throughput per square millimeter, or throughput per chip, than any other product in the market, while at the same time offering latency comparable to the best, which is Groq and Cerebras.

Speaker 1:

Yeah. What are the various trade-offs in custom silicon design these days? Is it, at the highest level, flexibility and speed, or cost, die size, wafer size? Like, how do you think about the design space? And then I wanna know how you actually narrowed in on your particular decisions.

Speaker 4:

Yeah. So generally, there's some kind of performance per something. So let me analyze those pieces. Two different aspects of performance are throughput and latency: how many users I can support simultaneously is throughput, and latency is, for one user, how fast is the experience?

Speaker 4:

Both of those matter. And then on the per-something side: per dollar, how much does the chip actually cost, and per watt, what is the power bill of the chip. All combinations of those two numerators and two denominators are the things we care about. Mhmm.

Speaker 4:

What we see in the market today is that the number one constraint is just throughput per dollar and throughput per watt. These frontier labs have so much demand for compute, serving all of these trillions of tokens per day, that cost and economics is the main constraint. There are only so many square millimeters of silicon wafer being produced every year. And so given that constraint on how much silicon there is, can we maximize the number of tokens, and then maximize the intelligence of the models, coming from it?
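
The two-numerators-over-two-denominators framing he lays out can be written down directly. The chip specs below are placeholders, not real product data.

```python
# Figures of merit from the discussion: {throughput, latency} x {dollar, watt}.
# All spec numbers are placeholders for illustration.

def figures_of_merit(tokens_per_sec: float, latency_ms: float,
                     cost_usd: float, power_w: float) -> dict[str, float]:
    return {
        "throughput_per_dollar": tokens_per_sec / cost_usd,
        "throughput_per_watt": tokens_per_sec / power_w,
        "latency_ms": latency_ms,    # the per-user experience
    }

chip_a = figures_of_merit(tokens_per_sec=50_000, latency_ms=20, cost_usd=30_000, power_w=700)
chip_b = figures_of_merit(tokens_per_sec=80_000, latency_ms=35, cost_usd=40_000, power_w=900)

# Frontier-lab framing: chip_b wins on both cost-side metrics, chip_a on latency.
print(chip_b["throughput_per_dollar"] > chip_a["throughput_per_dollar"])  # True
print(chip_a["latency_ms"] < chip_b["latency_ms"])                        # True
```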

Speaker 2:

What does the go to market look like? Are you already sold out? Who are you targeting early on? How do you scale? All that stuff.

Speaker 4:

So one of the places where we've seen the most interest in our product is from the frontier labs, really. And this is coming from a combination of things: they are the ones who are driving really all of this demand and are so constrained on cost as well as silicon wafer supply. But they are also the ones doing these reinforcement learning training workloads, which are very, very latency sensitive. They have to do long rollouts in a very long

Speaker 1:

Yeah.

Speaker 4:

So that's where we've seen the most interest. One of the things that shows up there is that when they're looking to place an order, it's an order on the order of gigawatts, which is massive volume. So one of the things that we're actually very excited to be able to do now, with this raise that we've just announced, is help ramp up the supply chain in order to be able to deliver, you know, gigawatts a year of volume, which is a massive volume to be able to deliver.

Speaker 2:

Yeah. When you were initially thinking of starting the company and pitching it to investors early on, how did you answer the question around NVIDIA's various moats Mhmm. Or kind of strategic advantages, you know, think CUDA Yeah. All that stuff.

Speaker 4:

Yeah. I think it's really interesting. CUDA is, for NVIDIA, simultaneously the biggest strategic advantage and also a constraint. Because the promise that they make you is that you can take a CUDA program written ten years ago, and it will run on the next generation NVIDIA GPU. Jensen goes on stage and promises this.

Speaker 4:

It is so valuable for them. And yet at the same time, it means the next generation GPU has to look just like the GPU from ten years ago. So it means things like: the numerics can't change, the way the cores in the chip are connected to each other can't change, the actual memory architecture can't substantially change. All of these things are locked in by the programming model that they designed more than a decade ago for general purpose parallelism. And so this is where we've seen the biggest differentiation.

Speaker 4:

If they wanted to say, well, we're gonna completely give up our CUDA approach and start a new generation of chips, maybe they could do that. They would lose all of this lock-in that they have, but then at least they'd be on a level playing field with us. But that's not what we see. Really, we see them being committed to their trajectory. The CUDA lock-in is very valuable for the mid and tail of the market, where people are so sensitive to the software cost.

Speaker 4:

But really, at the head of the market, in the frontier labs, the software is not the main cost. The hardware is the main cost. And so if you're willing to rewrite your software, maybe you can actually switch to more efficient hardware

Speaker 5:

like us.

Speaker 1:

And it's getting easier to rewrite software.

Speaker 2:

As you plan your business, how are you thinking about bottlenecks? You know, one month it's energy, the next month it's chips, and now there are a lot of concerns around TSMC. How are you planning?

Speaker 4:

Yeah. So I mean, I think these bottlenecks are real, and they're gonna stay for a long time. Mhmm. The big bottlenecks you see in the manufacturing supply chain are on logic dies from TSMC, memory dies from Hynix, Samsung, and Micron, and then manufacturing of racks and so on. Given that these bottlenecks exist, what you'd like to do as a consumer of such things is get the most bang for your buck.

Speaker 4:

So the most performance out of every square millimeter of silicon. That has been our focus. The FLOPS per square millimeter, the four bit precision multiplies you can do per square millimeter of silicon, is higher in our product than in any other product. And so, you know, as the price of every silicon wafer goes up, you can do more with it on our solution.

Speaker 1:

I assume, I mean, you've mentioned this, you're selling to the frontier labs, running frontier models, probably on the most leading edge chips, the most leading edge fabrication nodes. Is there a world where it's valuable to say: hey, there's some lagging edge capacity out there. What if we design custom silicon that runs on the last generation Intel node that doesn't have a line out the door for capacity, and then I'm not competing with you? Does that not work?

Speaker 1:

Is that not possible? Or is that just a completely orthogonal business to what you're building? So

Speaker 4:

That approach is possible. Mhmm. It's maybe more of an approach for a player with deep pockets rather than a startup.

Speaker 1:

Sure.

Speaker 4:

In that every different process you target costs you another $20, $30, $40 million of development cost. Yeah. And so if you're gonna put all of your eggs in one basket, you should put them in the leading edge node.

Speaker 1:

In the best basket.

Speaker 4:

Yeah.

Speaker 1:

That makes sense. Talk to me about other trade-offs at TSMC. I mean, Cerebras is famously wafer scale. What is the trade-off on die size these days?

Speaker 4:

Yeah. So I mean, there's a trade-off of die size and then also of memory architecture. Okay. So on die size, Cerebras is the outlier. Yeah.

Speaker 4:

Almost everyone else has converged on reticle scale, which is the largest standard chip TSMC produces. We're in that same category, the standard bucket there. Okay. That avoids a lot of the physical risks. When you look at Cerebras, they've had to spend all this time dealing with bending and all these uncomfortable physical constraints that we don't want to deal with. Yeah.

Speaker 4:

So, reticle size chips. But then the other, bigger thing is which memory technology to use. Historically, there have been the HBM based players, that's Google, Amazon, NVIDIA, and then there have been the SRAM based players, which are Cerebras and Groq. Mhmm.

Speaker 4:

SRAM is small but very, very fast. And very, very fast is good if you want low latency: you can put your model weights in SRAM, and you get the best latency in the market. That's what Groq and Cerebras have done. Mhmm.

Speaker 4:

But the reason they haven't, like, won the market is because there's not enough space in the SRAM to store all of your long context KV caches.

Speaker 10:

Sure.

Speaker 4:

And so this is the reason why the HBM based players, Google, Amazon, NVIDIA, have won: the HBM is actually essential. Mhmm. But it's actually possible to marry both of these approaches and put them in one chip. Mhmm. And that is what we're doing with MatX one.

Speaker 4:

Okay. And curiously, it doesn't just give you the best of both worlds. It actually beats any alternative on throughput. Mhmm. There's this curious effect where, when you have your weights in SRAM, you can actually get better mileage, better usage, out of the HBM in return.

Speaker 4:

And so we think this is actually the way the market in general will move over time.

Speaker 1:

Okay. Help me understand the trade-off continuum around flexibility of model. I mean, imagine on an NVIDIA NVL72, I can sort of run any model as long as it fits and works and is trained properly. And then Thales is like, these specific weights on the chip, you can never change them whatsoever. And then there's something in the middle.

Speaker 1:

How much flexibility do you think is important? How much flexibility are you planning around? And how do you think about sight lines? Because I imagine that the delay between, like, the final architectural design and chips in data centers is still a year, eighteen months, something

Speaker 3:

like that.

Speaker 4:

Yeah. I mean, there's all of these manufacturing and then deployment times that Yeah. That make it take

Speaker 10:

a long time.

Speaker 4:

Yeah. In general, I would say that from sort of pencils down on chips to, like, when is the last time you're using it? The chip is gonna be in the data center itself for three to five years, and then there's maybe, as you say, a year, a year and a half of deployment time in advance of that. So you want your chip to be relevant for a five-year time span, maybe. So you need to pick a point of specialization which you think is here to stay.

Speaker 4:

Yep. For us, that is very large matrices. Okay. And in fact, really large matrices together with a splittable systolic array, which is a piece of technology. But very large matrices is the theme that we started with, and this is just a recognition that over time, models have been growing.

Speaker 4:

They grew a ton with LLMs and they're continuing to grow. And if you specialize for that, you can get big efficiency wins on the matrices themselves. Yeah. Now, we are still very general-purpose programmable in terms of the vector unit. Similar to NVIDIA, we have this vector unit that you can run any instruction on, like add, multiply, subtract, divide, all of those. And so it's trying to provide a good amount of flexibility, but in a way that only costs like 10% of the cost of the chip overall.
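One rough way to see why betting on very large matrices is a durable specialization: the arithmetic intensity of a matmul (FLOPs per byte moved) grows linearly with matrix size, so bigger matrices keep the multiply units busy instead of waiting on memory. This is a generic roofline-style estimate assuming fp16 elements and ideal operand reuse, not MatX's actual numbers:

```python
# FLOPs-per-byte of an N x N x N matrix multiply, assuming each operand
# matrix is read once and the result written once (ideal on-chip reuse).

def matmul_intensity(n, bytes_per_el=2):
    flops = 2 * n ** 3                       # one multiply + one add per MAC
    bytes_moved = 3 * n ** 2 * bytes_per_el  # read A and B, write C
    return flops / bytes_moved

# Intensity scales as n/3 in fp16: big matrices are compute-bound,
# small ones are memory-bound.
for n in (256, 4096, 65536):
    print(f"N={n:>6}: {matmul_intensity(n):10.1f} FLOPs/byte")
```

The linear growth is the whole point: a 16x larger matrix has 16x the FLOPs per byte of memory traffic, which is what makes a chip specialized for huge matrices efficient.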

Speaker 2:

Okay. Yeah. Funding news.

Speaker 1:

Give us the news.

Speaker 2:

Yeah. What happened?

Speaker 4:

So we're happy. We have raised $500,000,000. This was around

Speaker 2:

guests. Who'd you raise it from?

Speaker 1:

Thank you.

Speaker 4:

So this was co-led by Jane Street and Situational Awareness. Situational Awareness, that's Leopold Aschenbrenner's fund. Yeah. He

Speaker 2:

If you've been living under a data center, that's his... Exactly.

Speaker 4:

You might have heard of him. So he really sees just, like, the big picture of where this space is going, and he recognizes just how much demand there is for silicon. And then on the other end of the spectrum, Jane Street, they are expert technologists. They know everything about what exactly is required to build a product like this, and they know what good is in a product like this. Mhmm.

Speaker 4:

So we're really happy to have these, like, strong experts here. This sort of mirrors what we see inside the company as well. We have a wide range of experiences across hardware, software, and ML. And then even in the rest of the investors who are participating in our round, we have renewed participation from our previous investors, that's Spark Capital and NFTG, as well as a range of folks such as Patrick and John Collison, experienced ML people like Andrej Karpathy Wow.

Speaker 4:

And then even participation from the supply chain like Marvell and MLJ.

Speaker 1:

Did you have any normies in? These are just the most elite people in the world.

Speaker 2:

We we like I mean,

Speaker 1:

just one mouth breather, please.

Speaker 2:

What? Congratulations. It's the hardest lineup.

Speaker 1:

It's the hardest lineup ever.

Speaker 2:

Ever heard.

Speaker 1:

It's amazing. I'm extremely excited for this. I'm excited for it

Speaker 2:

to... Now you have to win on such a massive scale. Otherwise, you'll bring dishonor to all the industry legends.

Speaker 1:

No. Thank you.

Speaker 2:

No. It's it's really cool to hear your perspective and approach to everything. And I'm sure you'll be back on the show this year. Yeah. So congrats to the team.

Speaker 1:

We'd love to have you back. Thank you so much for taking the time. We'll talk to you soon. Goodbye. Let me tell you about vybe.co.

Speaker 1:

Where DTC brands, B2B startups, and AI companies advertise on streaming TV, pick channels, target audiences, and measure sales just like on... Have

Speaker 2:

we have we had a

Speaker 1:

Royal Flush?

Speaker 2:

A lineup like that before? Royal flush. The Jane Street and Situational Awareness co-lead, then just... Has

Speaker 1:

to crew. Gotta flush that. Situational awareness is a new fund, has not led that many rounds.

Speaker 2:

I know. But I'm just saying you go

Speaker 1:

back. Maybe it turns into a spray-and-pray fund. You never know. Maybe Leopold says, yeah, I'm just gonna write $5,000,000 checks to every company. Who knows?

Speaker 1:

Anyway, we have our next guest in the waiting room. We've got Standard Intelligence. How are you doing?

Speaker 2:

What's going on?

Speaker 12:

Hey. I'm doing pretty well. How about you?

Speaker 1:

We're doing fantastically. Thank you so much for taking the time to come on the show. Since this is the first time on the show, I'd love an introduction on yourself and the company.

Speaker 12:

Yeah. So I'm Levonch, co-founder at Standard Intelligence. We pretrain computer-use models, basically. Okay.

Speaker 12:

So, basically, the thing people are doing is they're training, you know, on screenshots and, like, chain-of-thought traces. And we're just like, what if you train purely on 30 FPS video?

Speaker 1:

Mhmm. What actually goes into the training data? Because, like, there's a lot that you can do on a computer. And Yep. I feel like if you've never trained on Ableton and it just comes up randomly, like, are you actually gonna be able to learn Ableton from just playing in Premiere Pro and Word and, you know, Paint or something, or Photoshop or whatever?

Speaker 1:

Like, how are you thinking about the transfer? And, like, what's actually in the training set? Actually, just zoom out and talk about the process in more depth.

Speaker 12:

Yeah. So we have, like, two splits of data, where we have this small, like, contractor split. The thing that we did was we made this app that people run on their computer, and it records their screen and, you know, logs all their key presses Sure. And all their mouse movements. And we're running that all the time.

Speaker 12:

And then we also have this much, much larger kind of unlabeled dataset of basically every video of computer use that we could possibly find and are allowed to use on the Internet. And so, yeah, we train a model to label that big set from this small, contractor-only set. And the goal is to just, like, train on all of it and train this kind of general model that is able to generalize to basically anything that you could do on a computer.
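The two-stage data strategy described here, train a labeler on the small contractor split and then pseudo-label the large scraped corpus, can be sketched roughly as below. The dictionary-lookup "labeler" and the frame strings are toy stand-ins for illustration, not Standard Intelligence's actual models or data:

```python
# Toy sketch of the two-stage pipeline: learn an action labeler from a
# small supervised split, then pseudo-label a much larger unlabeled corpus.

def train_labeler(labeled_pairs):
    # Stand-in "inverse dynamics model": memorize frame -> action.
    return dict(labeled_pairs)

def pseudo_label(labeler, frames, default="noop"):
    # Predict an action label for every unlabeled frame.
    return [(frame, labeler.get(frame, default)) for frame in frames]

# Small contractor split: screen frames paired with logged key presses.
contractor_split = [("frame_a", "click"), ("frame_b", "type_x")]
# Much larger scraped corpus of computer-use video, no action labels.
scraped_frames = ["frame_a", "frame_c", "frame_b"]

labeler = train_labeler(contractor_split)
# Final training set: supervised pairs plus pseudo-labeled pairs.
full_training_set = contractor_split + pseudo_label(labeler, scraped_frames)
print(len(full_training_set))  # -> 5
```

The real system would use a learned model rather than a lookup table, but the shape of the pipeline, small labeled set, big pseudo-labeled set, train on the union, is the same.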

Speaker 2:

What kind of limitations do you have on who can install your software to capture that data? If I'm a company, I feel like I have to have a pretty high degree of trust in you guys to let my employees run it. Is that something that's more of, like, a partnership?

Speaker 12:

Yeah. So right now, it's, like, us. Like, we're recording our own screens all the time. Plus, like, we have some number of contractors, and we get to, like, pay them, you know, somewhat less because they're not doing, like, active work for us. It's more like passive screen recording.

Speaker 1:

Sure. Yeah. And then are you sitting on top of some sort of foundation model brain for reasoning chains and sort of, like, the LLM piece of the puzzle, or is this a model that's gonna look you're not right now?

Speaker 12:

No, we're not at all. Like, the model that we released, or I suppose demoed, is, like, entirely trained on this kind of 30 FPS video in and, like, you know, typing and mouse movements and things like this out.

Speaker 1:

Okay. So how much think

Speaker 9:

it's a hard to say

Speaker 1:

it again.

Speaker 2:

How much longer will I have to fill out forms on the Internet? I should try to estimate how many times I've entered the same information just over and over and over.

Speaker 12:

I think John

Speaker 2:

Collison talked about it on our show today and was talking about it on his own show, I think, with Ben Thompson, talking about, like, at what point can you just take a link and Yeah. Say, hey, please buy this. Yeah. And then it just does it for you.

Speaker 1:

Mhmm.

Speaker 12:

Like, pretty soon. I think, like, that kind of use case is just, like, under six months away depending on what, like, exactly you mean.

Speaker 1:

Yeah. I mean, in terms of actual deployment, I imagine that this would be something I personally would probably want deeper, as more of a tool that's called from a consumer LLM app. Is that how I should be thinking about this? Or will you jump straight to a consumer product?

Speaker 12:

It's like, yeah. I think in the short term, the kinds of people that are particularly, like, cool to sell to are, like, mechanical engineers doing CAD Okay. Where they can press the tab button, like software engineers press tab in Cursor, and have their next, you know, minute or two minutes of manual work done. And we showed that in the kind of gear-extrusion demo Yeah. Where, like, you have this gear and you're, like, extruding faces.

Speaker 12:

And that's just, a very, very common thing that you do in CAD. Mhmm.

Speaker 13:

And I

Speaker 12:

think there's, like, a more general thing where, like, yeah. You can think of computer use as, like, a tool call or you can think of it as, like, you know, just the thing that you do, you know, for knowledge work.

Speaker 1:

Yeah.

Speaker 12:

And I think we're just in a place where, like, we can scale computer use on its own. It's not impossible that we'll, like, initialize from LLMs or, for example, use text training to make the model smarter in text space so it can fill out forms better. But it is not the goal of the company that, like, you know, you have Claude call this as a tool call. The goal is for it to just use your computer or, like, use its own computer, just, like, in general.

Speaker 2:

Talk about your experiments with self-driving. Does that work potentially apply to robotics more generally?

Speaker 12:

Yeah. So I think this general, like, pretraining thing, or, like, you know, labeling a bunch of unsupervised data with actions and then training on that labeled data, this, like, inverse dynamics thing, I expect it to transfer very well to robotics. Self-driving in particular, it was kind of... so Neil, who works at SI, was like, okay, we have this action model, and his friend had a Comma. And there's a Comma, like, joystick mode where you can control the steering with arrows.

Speaker 12:

And so we were like, okay. Well, if it's a general computer use model, surely it should be able to, you know, control a car because that's just, like, a thing that you do on a computer. It's like video in. You're you're seeing it on the screen, and then you can press the left and right arrow keys to steer. And, obviously, we originally didn't really expect this to work, and then it just, like, it's worked much better than expected.

Speaker 12:

We fine-tuned on an hour of data.

Speaker 1:

This is a good sound.

Speaker 2:

On how many hours? Three hours?

Speaker 12:

One hour. Like, they're,

Speaker 9:

like, an

Speaker 12:

hour. One hour. Fifty minutes.

Speaker 2:

That's crazy. And the system could just fully navigate around SF?

Speaker 12:

I mean, sorry, navigate around, like, South Park. Like, it's it's

Speaker 1:

Yeah.

Speaker 12:

You know, it's not a general self-driving model. I would not recommend, like Yeah. Sitting in this car and just letting it do whatever it wants. But, yeah, it's pretty cool.

Speaker 2:

AI, take the wheel. Don't make mistakes. Take the wheel.

Speaker 1:

We are we

Speaker 4:

are not a Tesla competitor. We are

Speaker 12:

not a... we are not a Waymo either.

Speaker 2:

Do you think the sport coat is, like, the next it apparel item in the set? Because it looks fantastic here. The chat loves your sport coat. And I just feel like that could be the middle ground between Wall Street and Sam's girls dress up for us.

Speaker 1:

But you're fantastic.

Speaker 12:

Yeah. I don't know. I I I really like this. I got it from, like, Bonobos. Nice.

Speaker 12:

And a pocket square.

Speaker 2:

There you go. That's great.

Speaker 12:

I think I think I like dressing up, like, at least a little bit, and it's it's fun.

Speaker 1:

That's good.

Speaker 2:

I like it.

Speaker 1:

Okay. Back to the business. Yeah. I wanna know about... it feels like you're training a very generalized model. What are you learning from the previous product launches where, you know, we had this ChatGPT moment, and then, I don't even remember, people were just kinda chatting with ChatGPT back and forth.

Speaker 1:

And they started using it they started using it kind of as a Google replacement, and then that kicked off the whole, like, Google's cooked narrative. And then with the with the Studio Ghibli moment, it was really the launch of, like, a better diffusion model with some reasoning in there, I think, and stuff. And so and then people were just like, this is a Studio Ghibli creator. And then they found that niche of, like, it's really good at creating cartoons. It's not quite style transfer, but that's what it does well.

Speaker 1:

How much do you wanna just, like, turn a wild open model loose and then hope that someone finds a killer app versus, like, you kinda know that this is gonna kill in CAD, and you're just gonna launch, like, Cursor for CAD on day one and then, like, go from there.

Speaker 12:

Yeah. I think the answer is, like, some combination. I'm like, okay, short term, CAD and design work somewhat generally are things that current models just, like, totally can't do. Like, LLMs, or anything that is an LLM harness, are just, like, really, really bad at CAD, for example.

Speaker 12:

And so that seems like, okay, we know what to do there. We can just scale up this model. We have a bunch of Blender data, a bunch of 3D modeling data in general, and we can scale up CAD. And then also, yeah, I am quite excited to release a more general, you know, tab model for people to play around with and figure out what it's particularly good at.

Speaker 12:

And so when I'm asked what the commercialization plans are, it's like, we have some reasonable idea of what the first steps are, but there could just be this, like, massive thing once people start playing with it at that scale.

Speaker 1:

So you're training on video frames. 30 FPS video. Correct?

Speaker 12:

Yep.

Speaker 1:

I was told by an anonymous poster on X by the name of Roon that text, in fact, is the universal interface. Was I lied to?

Speaker 12:

Yes. Woah.

Speaker 1:

Shots fired. Explain. Elaborate. Like, doesn't this just collapse down to text? Why don't I puppeteer CAD from text?

Speaker 1:

Like, how does this all play together?

Speaker 12:

Okay. I think, at some point in the, like, arbitrarily long future, if we only use text models, we could force, like, most things to be text. I think there are just a lot of things that are much more native when done from, like, a computer-use... like, you know, GUIs are designed for humans. They're designed for humans to use. We have, you know, this massive long tail of things on the Internet that are, like, entirely undoable by LLMs.

Speaker 12:

For example, when I do ML engineering, right, most of my time is just, like, spent doing kind of this grunt work of engineering, and it's a lot of looking at graphs and, like, analyzing graphs and, you know, comparing loss curves or something. And, like, you can do this in text, but it's just a much larger pain than doing it in this kind of native interface, which is video. And I don't know, there's a reason why humans don't interact with a computer purely through text. It would kind of suck.

Speaker 12:

For example, we have, like, the concept of like, video has the concept of time in a way that text doesn't.

Speaker 1:

Mhmm. Speak for yourself. I got I got a text right here. Black background going.

Speaker 2:

If this can eliminate YouTube tutorials for software, that's a killer app. Is this... is that anything

Speaker 1:

It's not just gonna eliminate the tutorials. It's gonna eliminate the whole Yeah. The whole process, because you don't need the tutorial if you're just like, just go do the thing that I need you to do. But yeah. I mean, you've obviously been in this for a while.

Speaker 8:

How are

Speaker 2:

you thinking about go to market in general for the underlying technology?

Speaker 12:

Yeah. So I think, like, as I said, there are the kind of short-term, like, CAD design use cases. There's, like, the tab model. We wanted to just give anyone a kind of general thing of, you know, in Cursor, you press tab and it completes your next edit or whatever. What if you could press tab and it completes, like, the next five and then ten and then sixty seconds of what you would do on your computer?

Speaker 12:

And then I think longer term, it's just like, you know, we're training a general model that is able to do useful work, and you'll be able to, like, send it off with a prompt to do work. And then there's a very interesting thing where the data that we're training on has a bunch of, like, error correction built into it. So when you have a bunch of data of humans doing things, a lot of the time, the humans make mistakes, and then they have to correct those mistakes. And you don't get that with text, because with most text on the Internet, you don't get to see the process of, you know, messing up and then fixing it. And so, yeah, I expect there to be a lot of, like, native neural prior of doing the self-correction thing properly.

Speaker 12:

Mhmm. So, like, you can get it to go do something for ten minutes, and it'll try something for two minutes and then, like, you know, mess up slightly. But it knows how to fix that over and over again until it's, like, gotten to a solved state.

Speaker 1:

Yep. Very cool. Cool. Well, congratulations on the launch, and thanks for taking the time to come chat with us. Thank you

Speaker 2:

for sport coating. Yeah. We're gonna get some... we need some sport coats around the office. You need something in between. You got John over here formal.

Speaker 2:

I'm doing casual Friday on a Tuesday, but a sport coat perfectly in the middle. It was great to meet you, and come back on soon.

Speaker 9:

Thank you.

Speaker 1:

We'll talk to you soon. Cheers. Goodbye. Well

Speaker 2:

Back to the timeline.

Speaker 1:

Back to the timeline.

Speaker 2:

There was an individual Mhmm. Who accidentally gained control of 7,000 DJI vacuums. He was just vibe coding.

Speaker 1:

Amazing.

Speaker 2:

And accidentally found, according to Investment Hulk, the CCP Oh. Backdoor. He wanted to

Speaker 1:

control it with a gaming controller.

Speaker 2:

And then he just got control of everything. That's so crazy. This is why I've been deeply concerned with letting any foreign adversary flood our country with Mhmm. A bunch of robots.

Speaker 1:

Mhmm.

Speaker 2:

I think we should avoid it. Yeah. I thought this was funny earlier. Hegseth says he'll order random pizzas to throw off the monitoring app. Oh, yeah.

Speaker 2:

I expected something like this to happen. Yeah. It's kinda silly that everyone has a dashboard up and can tell when things might be getting a little more tense Yeah. In the Pentagon. Yeah.

Speaker 2:

Give them a budget of, you know, a few $100,000 a year and just order pizzas at random times.

Speaker 1:

For the record, this is a joke and he is joking. But I do think they could throw it off. Potentially, you never know. They could throw it around. HubSpot acquired Starter Story.

Speaker 1:

Yeah. This is very exciting.

Speaker 2:

Very cool.

Speaker 1:

Yeah. Starter Story overnight success.

Speaker 2:

In a decade he's been doing this? I believe Pat, the founder, had just posted Yeah. He posted something. He was like... he said HubSpot should acquire Starter Story, and then, like, two weeks later it was done.

Speaker 1:

Wait. Really? Oh, I thought that was from like years ago.

Speaker 2:

Maybe it was.

Speaker 1:

I thought I thought he posted that a long time ago and then yeah. Said No.

Speaker 2:

He said 09/23/2025 So

Speaker 1:

Oh, well. Not long. Yeah. A couple months.

Speaker 2:

Six months ago. Couple months. HubSpot should acquire Starter Story. The SEO ship is sinking. In my opinion, HubSpot needs to pivot way harder to video, specifically YouTube.

Speaker 2:

Yeah. I'm biased, but acquiring Starter Story would take their YouTube game to the next level. And I love it. He was quoting Brian Marcus, the CEO and co-founder, saying, dear founders, it's a good time to sell your company.

Speaker 2:

Yeah. Love Brian. Okay. Yeah. Anyways, there's a bunch more stuff in here but we will get to it Tomorrow.

Speaker 2:

Tomorrow.

Speaker 1:

Arena Mag is out. Go check out issue number seven. They're on Substack now. Arenamagazine.substack.com. Go check it out.

Speaker 2:

I have one more post for you. Deep dish enjoyer says, I don't see what the point of shoveling snow is when AI agents are gonna commoditize burrito taxi services by 2028.

Speaker 1:

It's a

Speaker 2:

good Good excuse.

Speaker 1:

Leave us five stars on Apple Podcasts and Spotify. Subscribe to our newsletter at tbpn.com.

Speaker 2:

Have the best evening of your entire life. We love you.

Speaker 8:

Goodbye. Nice

Speaker 3:

work, brothers. I'll see you on the next one.