Tractable

In this episode of Tractable, Orb CTO Kshitij Grover interviews Dane Knecht, the SVP of Emerging Technology and Incubation at Cloudflare. Together, they explore Cloudflare's innovative approach to infrastructure, AI integration, and its developer platform. The conversation covers Cloudflare's custom-built infrastructure as well as an upcoming product launch focused on AI, and includes insights into Cloudflare's unique positioning at the forefront of internet software products. Join us as we discuss the complexities of AI, developer platforms, and the future of the Internet.

What is Tractable?

Tractable is a podcast for engineering leaders to talk about the hardest technical problems their orgs are tackling — whether that's scaling products to deal with increased demand, racing towards releases, or pivoting the technical stack to better cater to a new landscape of challenges. Each Tractable episode is an in-depth exploration of how the core technology underlying the world's fastest-growing companies is built and iterated on.

Tractable is hosted by Kshitij Grover, co-founder and CTO at Orb. Orb is the modern pricing platform that solves your billing needs, from seats to consumption and everything in between.

Kshitij Grover [00:00:04]:
Hey everyone. I'm Kshitij, cofounder and CTO here at Orb. Today, I have with me Dane Knecht. Dane is the SVP of Emerging Technology and Incubation at Cloudflare. He joined the company back in 2012 as head of product, has seen the company grow, and has a unique perspective having been there for well over a decade now. Cloudflare, as almost everyone knows, is one of the most used Internet software products today, powering customers as diverse as Shopify, Polestar, and Broadcom. Dane, welcome to the podcast.

Dane Knecht [00:00:31]:
Thanks for having me. Looking forward to chatting today.

Kshitij Grover [00:00:34]:
Awesome. Well, most folks probably know Cloudflare best for its globally deployed DNS, a bunch of security infrastructure, and playing a pretty critical part in protecting the internet from security threats. Over the last few years, Cloudflare has also done a ton of work in the developer platform space. So maybe you can kick off by telling us a little bit about what your team does at Cloudflare, the products you manage, and how you think about Cloudflare's product suite overall, especially given that you joined as head of product.

Dane Knecht [00:01:06]:
When people ask me what our product is, I really think of it as a network. That's the core of everything we do. We built one of the largest globally distributed networks, spanning 300-plus cities and 500-plus data centers, with about 20% of the Internet sitting behind it. We started out by building that network, and the first act, as we like to call it, was infrastructure protection: how do we protect websites, APIs, and servers from attacks on the Internet? And as a side effect of how we built it, we're able to do performance as well.

Dane Knecht [00:01:38]:
So that's where you get everything from WAF, DDoS protection, CDN, and caching, all of it together to provide a great experience for delivering your content to everyone, globally. Then our second act was that we kind of flipped the network around: how do we protect users and devices from the rest of the Internet? That's where we launched 1.1.1.1 on the consumer side and WARP, and then brought those upmarket with our zero trust products to complete a full SASE solution. The way that product came about was, about six or seven years ago, we split product engineering into two parts. One part is core product engineering, doing what you'd typically expect: they focus on how we grow revenue, what customers will need next quarter, what the sales team needs to close deals, what our customers are asking for. That's about 80% of R&D. And then we created a new group, the one I lead, emerging tech and incubation, which focuses on how we deliver new products to the market, things that customers aren't even asking for, and kind of invent the future for Cloudflare.

Dane Knecht [00:02:42]:
And so one of the first things that came out of this group was the zero trust products, and they grew and eventually graduated, moved over into the core product org. And then the third act, which I think a lot of people know us for now, is the developer platform. This was originally built for ourselves. When I started emerging tech and incubation, part of the goal was to take a lot of shots on goal, and failure is okay; failure is actually celebrated. What I found was that the original platform was great and powerful with the network, but it was becoming slower and slower for us to develop new things. And that's where Workers came about. The best customer of Workers is Cloudflare itself: we build most new products on top of it, and it gives us an advantage in that we can quickly deploy applications globally, in a way that's meant for global distribution from day 1. That platform kept evolving, we built more internal products on it, and we kept adding more and more primitives to do things like storage and databases. And then slowly we started exposing it to our customers. So the goal here is that the next fourth, fifth, or sixth act of Cloudflare might not be something that we create or that comes from inside. Hopefully, the next startup that decides to build something decides to build on Cloudflare, and hopefully they invite us to their IPO party at the stock exchange one day, and they can say it was built on Cloudflare from day 1 and that we helped them scale globally and build their business.

Kshitij Grover [00:04:01]:
The interesting part about that is you're catering to a few different markets. Like you said, zero trust is a little more upmarket; it sounds like it's a deployed solution targeted at enterprises. Whereas the developer platform is really more about other people building on top of it. How do you think about such a wide variety of product bets? Is that tricky? Do you feel like it all comes down to the same basic levers of developer experience, reliability, and scalability? What's your thought process around end-user products as well as this kind of platform play?

Dane Knecht [00:04:34]:
When we're at our best, we're building products from the bottom up. We start with how we can offer a basic service for as little as possible, a lot of the time free, and then really learn the product, get feedback, iron out all the issues, and then go upmarket. This allows us to have a huge QA group giving feedback from the broader community, and then lets us naturally, organically grow over time into adding those enterprise features, which are really about the control and support customers need to run their businesses. There are different buyers of Cloudflare, whether the CSO's org is buying for protection or the CTO is buying the developer platform. We do have a lot of different buyers, but usually the experience is consistent and great across all of them. And it means that when a customer starts with one product, it quickly spreads inside the organization to the other products.

Kshitij Grover [00:05:24]:
One thing you mentioned there was how you think about launching in beta and getting feedback. That stuck out to me as unique about Cloudflare. Cloudflare launches a lot of very technical products, but my sense is that, especially compared to comparable companies, they're fairly opinionated. The way you expose interfaces, the way you do pricing and packaging, even the way the launches and the technical blog posts around these products are communicated tend to be fairly conversational and pretty opinionated. Even the way Cloudflare talks about its own outages tends to be very technical. So what's driving that sort of thinking? Do you think that's part of what makes Cloudflare feel like, and maybe genuinely be, one of the companies that ships fastest? Tell us a little bit about that style of communication and product launch.

Dane Knecht [00:06:14]:
The opinionated aspect of it comes from having such a unique architecture. Across those 300 cities, we run a single stack on the edge, and every box runs the same software. Every single application can run on it, and any customer can hit any box and have their request, or any feature they use, served from it. That requires us to build in a pretty opinionated way: building in a way that fits into that architecture provides huge advantages in scalability and cost, but it also requires you to be a little more opinionated in how you build things.

Dane Knecht [00:06:45]:
But the technical side, I think that's just how we've grown up. We were solving some very hard problems early on, and so we've always had a very technical engineering team that also loves being educators and explaining things. Our blog, from early on, was designed as something for engineers to read, not for marketing. It's actually still run by our CTO today; he's kind of the editor-in-chief. There's a formula for what makes a good post and what doesn't: the technical nature of things, how we're explaining hard problems that other people will find interesting. And the other reason for that is trust.

Dane Knecht [00:07:15]:
You talked about our outages and things like that. When we go into technical detail on our blog, I think that helps breed trust in the platform. It shows that we're letting people know exactly what's happening, whether it was good or bad, and not hiding from those failures. Everyone has issues; it's how you respond to them that people will remember you by.

Kshitij Grover [00:07:35]:
Talking about outages, or trust in general: I know you focus a lot on new product innovation and trying to invent the future at Cloudflare, but as a company and as a brand, you protect the Internet. You talk a lot about the uptime of the Internet. So when there is a Cloudflare outage, you're in the unfortunate position, or maybe the fortunate position given how widely deployed Cloudflare is, that everyone notices. That comes with expectations around reliability, scale, and maintenance of the existing products you've already launched. How does that affect what your team launches? Do you feel like you have good isolation, where launching a new product is pretty well isolated from the rest of the stack? Or do you feel some sort of brand, almost marketing, constraint around new innovation and experimentation?

Dane Knecht [00:08:21]:
We act like we're critical infrastructure because we do power a large portion of the Internet, and we take that responsibility seriously. I mentioned that we started the Workers platform partly as a response to the fact that it was getting harder and harder to develop things, and part of that was because we have to be very careful about anything we change. If your platform doesn't provide isolation and multitenancy, it's gonna slow you down. That was the beauty behind Workers: we made a platform that not only provides enough isolation and protection that our security team lets our engineers deploy on it, but is built in a way that allows millions of engineers across any company to deploy to it. The fact that we can deploy untrusted code to our edge, whether it's used by us or by our customers, is actually how you get the flexibility to keep shipping and innovating fast, because we know we can isolate things.

Dane Knecht [00:09:09]:
We can control the blast radius; we talk a lot about blast radius. Even at a lower level, in the way we deploy things, we have a system called Release Manager that does something called health-mediated deploys. Each step of a deployment has certain SLOs that the system itself is checking: it first deploys to canary colos, and slivers of them, and then continues to expand until it's reached everywhere. At each step of the way, we're monitoring those SLOs and making sure that if anything is breached, we immediately roll back and control that blast radius.
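
To make the health-mediated deploy idea concrete, here is a minimal sketch of an SLO-gated progressive rollout: deploy to an expanding set of locations, check an error-rate SLO after each stage, and roll back everything on a breach. This is not Cloudflare's actual Release Manager; the stage names, threshold, and helper functions are assumptions for illustration.

```typescript
// Hypothetical sketch of an SLO-gated ("health-mediated") progressive deploy.
// Stage names, the SLO threshold, and the helpers are illustrative only.
type Stage = { name: string; colos: string[] };

const stages: Stage[] = [
  { name: "canary", colos: ["canary-1", "canary-2"] },
  { name: "regional", colos: ["ams", "sin", "iad"] },
  { name: "global", colos: ["*"] },
];

const MAX_ERROR_RATE = 0.001; // SLO checked after every stage

async function deployRelease(version: string): Promise<void> {
  const completed: Stage[] = [];
  for (const stage of stages) {
    await deployTo(stage.colos, version);
    await sleep(5 * 60_000); // let health metrics accumulate
    const observed = await errorRate(stage.colos, version);
    if (observed > MAX_ERROR_RATE) {
      // SLO breached: contain the blast radius by rolling back, newest first.
      for (const done of [...completed, stage].reverse()) {
        await rollback(done.colos);
      }
      throw new Error(`SLO breach at stage "${stage.name}": ${observed}`);
    }
    completed.push(stage);
  }
}

// Stubbed integrations; a real system would call deploy and metrics services.
async function deployTo(colos: string[], version: string): Promise<void> {}
async function rollback(colos: string[]): Promise<void> {}
async function errorRate(colos: string[], version: string): Promise<number> { return 0; }
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));
```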

Kshitij Grover [00:09:42]:
It's interesting. It sounds like this is sort of what you were saying earlier, but I'll draw the thread out: Cloudflare's starting position of thinking a lot about managing this really distributed network has baked itself, perhaps quite naturally, into the way the developer platform is designed, the way you think about shipping updates to it, and how you design things like Workers. So one question I'm curious about: it sounds like you've made some really good architectural bets with the Workers model built on isolates. Are there any architectural bets you feel like you've had to backtrack on, or that you guessed would pan out in a positive direction but didn't quite?

Dane Knecht [00:10:23]:
I'll talk about a recent product launch. A few years ago, we decided we were gonna start making the transition from a completely stateless edge to a slightly more stateful edge, because of the types of applications people wanna build: naturally, as the platform gets more complete, more of those applications have state attached to them. So we decided to roll out a product called D1, which is a SQLite database accessible from the edge, from Workers. And we made an early bet to try to build the entire thing on our own platform. We built it on top of durable objects, which is one of our storage primitives, and used SQLite compiled to Wasm with a custom VFS adapter to persist the database to the durable object.

Dane Knecht [00:11:04]:
We spent a lot of time on it. We announced it, we shipped it, and it worked, but it was not very performant. It did not really meet a lot of the needs of our customers. We looked really hard and asked, "Do we double down and figure out how to get the Wasm version to work?" and ultimately decided it was just too hard to debug Wasm in its current state. We did learn a lot.

Dane Knecht [00:11:26]:
So for the future, we know which tools are missing, and we're continuing to build those tools so that customers can build things more natively on Wasm. But we had to kind of start over. We took those learnings and set out to integrate SQLite natively into our runtime. We built a custom storage engine that replicates data across our machines, for durability and for localized reads, and we've been able to move a lot faster now. But I don't regret making that first bet to try to build the new product completely natively on our own platform. Because, as I said, we learned a lot, and we were able to say, "Okay,

Dane Knecht [00:12:06]:
this wasn't the right thing, this is not the right platform to build it on; let's go back to the drawing board and build it the right way, in a way that's gonna scale and meet customer needs long term."
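
For a sense of what D1 looks like from a Worker today, here is a minimal sketch of a Worker querying a bound D1 database. The binding name `DB` and the table schema are assumptions; the `prepare`/`bind`/`all` pattern follows D1's client API as generally documented, so treat it as a sketch rather than a definitive example.

```typescript
// Minimal sketch of a Worker reading from a D1 (SQLite) binding.
// The binding name "DB" and the "users" table are hypothetical.
export interface Env {
  DB: D1Database;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const userId = new URL(request.url).searchParams.get("user") ?? "";

    // Parameterized query against the SQLite database bound to this Worker.
    const { results } = await env.DB
      .prepare("SELECT id, name, plan FROM users WHERE id = ?")
      .bind(userId)
      .all();

    return Response.json(results);
  },
};
```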

Kshitij Grover [00:12:17]:
That makes sense. And so when you think about the developer platform, how do you determine, or what's the product principle for figuring out, which primitives you expose versus how many products composed of those primitives you build? Someone could implement a lot of these things themselves, as you're saying, by doing a bunch of work on top of durable objects, and maybe it'll work for their use case, maybe even better than the generalized version. So how do you make that call? That seems like a really hard thing to figure out across a very diverse customer base.

Dane Knecht [00:12:47]:
Well, as I mentioned, we are our own best customer for this. So when we're building a lot of the new products across the company, in general we tell developers: give it a try, try to build it on Workers. Then we get the team's feedback: okay, this is what's gonna be hard, this is what's gonna be easy. And we can decide whether we need to double down and invest in creating an easier way for them to build those things, or say, okay, this is probably not the right platform for this, and they'll build something a little more natively.

Dane Knecht [00:13:16]:
But just the fact that we have that internal set of teams always trying to build on top of us means the feedback loop is so much quicker than having to get external feedback. And sometimes it's not the right time then, but we can come back later and apply those learnings. Like I mentioned with the D1 version we built on Wasm: it didn't have tools for core dumps and other observability things that were needed. Then, last Developer Week, we launched Python support inside of Workers using Pyodide, which is Python compiled to Wasm. We were able to take a lot of learnings from the original D1 experiment, and it was the right platform this time. It gave us the tools we needed, and we were able to build things in support of it that we can use ourselves and that our customers will have access to as well.

Kshitij Grover [00:14:02]:
How would you characterize the vision for the developer platform, maybe medium or long term? There are other cloud providers out there, AWS, GCP, and now Cloudflare has product offerings, maybe a lot of the core primitives, that match up to theirs. How do you differentiate between those providers, and where do you think Cloudflare has massive technical strengths?

Dane Knecht [00:14:26]:
Not every workload is gonna be designed for our edge. What we wanna do is figure out which workloads are best suited to us, make sure our customers know that, and have them build those on top of us. When a customer wants to build something that's easy to deploy initially, with the option to immediately scale globally, and where the data needs are smaller, I think that's a perfect place for us. Most SaaS apps that have a REST interface and some business logic can easily be built on our edge today. That wasn't true a couple of years ago, but today I think we have all those pieces. But when you're going back to do your analytics or your AI training, that data should be at a hyperscaler; the edge is not designed for the kind of power requirements needed for that.

Dane Knecht [00:15:15]:
But if you wanna build in a way that meets the global nature of the Internet, Cloudflare is, I think, the right architecture for that. Especially as we're seeing more and more around data sovereignty and data localization, where people want their data kept in country or in region, a lot of it mandated by governments. The Cloudflare platform takes those things that are usually hard to build and makes them easy, built into the same platform they're already on.

Kshitij Grover [00:15:44]:
Are there use cases in your journey at Cloudflare that have surprised you? I'm sure there have been different ones at different points in time. Like the one you just talked about: I imagine you didn't necessarily see that coming with GDPR and all the data localization requirements. But even today, are there use cases, whether on the developer platform or in the product offering as a whole, where you're particularly impressed, excited, or even just surprised that they exist?

Dane Knecht [00:16:11]:
It's more when I see a single person able to build something that previously would have taken an entire team; that's when I'm like, "Wow, look what we did." I love this whole movement towards self-hosted components. I've recently seen an analytics product built on top of Workers by a single guy that can now be deployed into your own Workers instance. In the past, that would have been a whole team of people. Or, I just got back from Lithuania, where I met a lot of developers.

Dane Knecht [00:16:42]:
I got to meet a team building an A/B testing platform: two people building something that, I think, could compete with some of the largest providers out there. And I think about what it would have taken for them to build that if they had to be managing Kubernetes, AWS resource groups, and everything else. I don't see how it would be possible for two people to build something that polished that they already know can scale from one person to millions of people.

Kshitij Grover [00:17:10]:
That's interesting. That makes a lot of sense, and it sounds like it's really aligned with making the Cloudflare developer platform the easiest place to start and the easiest place to deploy for a lot of the applications you were talking about. I'm wondering, is there a downside to over-abstracting? Maybe this is too cynical, but what I mean is, there was a lot of talk maybe six months ago around what advantages the edge has, and I think people were just missing some basic reasoning: if you're making a lot of round trips to a centralized database, you're obviously gonna slow down your request. Some of that was people missing a basic understanding of what their application is actually doing, the ability to profile it, the ability to understand what's going on under the hood. Do you see that as an issue, or do you think that the more the platform can do for you and abstract away, the better, and that's the right direction to head in?

Dane Knecht [00:18:07]:
I think we want to build a platform that just works for the average person who wants to go deploy something, but that still has the knobs and dials for people who want to dive in deep, profile, and figure out how to optimize their applications even further. And the architecture is different in the way you have to think about it. Just thinking about the round trips: what you really want is for the end user to be as close as possible to the compute that's doing things like rendering. At the same time, when there's a lot of business logic doing database queries, you want that to be as close as possible to the data. Obviously, in the best case, it's a distributed data source that's globally available everywhere. But in reality, most people are building on top of legacy applications today, and they probably already have a database sitting somewhere, most likely Ashburn, that they need to reach back into and query.

Dane Knecht [00:18:56]:
And the idea there is that you actually wanna separate those two things out. You want the part of the edge that's closest to the eyeball of the user to do one set of compute, and then have it make a remote call to a machine that's closest to the database, let that do all of its business logic there, and return when it's ready, bringing the result back to the UI layer and delivering it to the user. But that requires a lot more sophistication. So I do think we need to provide abstractions to just make that work for everyone, and it's on us to continue to make those things easier.
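
One way to sketch that split in Workers terms: a front Worker near the user handles rendering and makes a single call to a second, data-adjacent Worker that batches all the chatty database queries and returns one payload. This is a hypothetical sketch; the `DATA_API` service binding, the internal URL, and the render helper are assumptions, and placing the second Worker near the database is left to however that service is deployed.

```typescript
// Sketch: a front Worker close to the user renders, while a second Worker
// deployed next to the database does the chatty queries.
// "DATA_API" is a hypothetical service binding to that data-adjacent Worker.
export interface Env {
  DATA_API: Fetcher;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // One round trip to the data-adjacent Worker, which runs all of its
    // database queries locally and returns a single aggregated payload.
    const upstream = await env.DATA_API.fetch("https://data.internal/dashboard", {
      headers: { "x-user": request.headers.get("x-user") ?? "" },
    });
    const data = await upstream.json();

    // Rendering happens near the eyeball, using the aggregated data.
    return new Response(renderDashboard(data), {
      headers: { "content-type": "text/html; charset=utf-8" },
    });
  },
};

// Hypothetical render helper.
function renderDashboard(data: unknown): string {
  return `<html><body><pre>${JSON.stringify(data, null, 2)}</pre></body></html>`;
}
```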

Kshitij Grover [00:19:28]:
That makes sense. And to me, it seems fine as long as you're providing super clear contracts and people at least understand the contract. For example, if you're replicating data, there's probably some sort of eventual consistency contract somewhere, and the thing I worry about is people not really understanding that. Maybe in the common case it doesn't really matter, but there are certain business-critical cases where it's going to bite them. And ideally, you don't want people discovering that architectural limitation only when it bites them and there's a bug, or a fire, they have to put out.

Dane Knecht [00:20:01]:
Yeah, absolutely.

Kshitij Grover [00:20:02]:
So let's go through a couple of stats to contextualize Cloudflare's scale. I was looking these up before this interview, and they might even be out of date now, but it's something like 280 terabits per second, tens of millions of requests per second, and trillions of DNS queries per day. Those are the company-wide technical metrics, but what numbers do you track? What do you care about day to day, and what makes those numbers meaningful?

Dane Knecht [00:20:28]:
So my team is focused primarily on adoption. I want as many people using the platform as possible. On the developer side of things, it's: how many active developers are using us? How many people have been on the platform and made changes in the last 30 days? How much data is being stored? That one metric alone can tell you a lot about how R2 is doing, how D1 is doing, and how our transition from a completely stateless environment to a stateful one is going. I like to find derivative metrics like that, where you only need to track a couple of them to know directionally how the business is going.

Kshitij Grover [00:21:01]:
You've mentioned the stateless-to-stateful transition a couple of times. Can you tell us a little more about that? What does it involve, and what are the architectural bets there?

Dane Knecht [00:21:11]:
Cloudflare was a completely stateless environment. When a request came to us, it used anycast to get to the closest data center, and then within the data center it used ECMP to find the right server to go to and load balance across them. Any of those machines could handle any request, and if you wanted more capacity, you horizontally scaled: just add machines in one of the regional colos, and then add more colos to keep getting closer and closer to the user. And with anything there, if a machine got shut down, not a big deal; it would just repopulate the cache. There's no state.

Dane Knecht [00:21:43]:
There was nothing that could be lost that would be a business-critical issue; everything was designed to be able to completely fail. That's still true today, we design everything to completely fail, but now we have to take into account that our machines hold valuable data for our customers: their D1 databases, durable objects, R2. We need to be a little more cognizant about how we manage that data, how we manage the machines, and how we're replicating data between racks and between cities, to make sure we can continue to maintain the trust that customers have put in us. It's very different from the stateless world where, if a rack got turned off, a request might get reset, but no big deal. Right?

Dane Knecht [00:22:19]:
And now we need to kind of take into account where data is at any given time.

Kshitij Grover [00:22:25]:
And when you're tackling a problem like that, how deep are you going into the implementation of it? Not you personally, but Cloudflare as a whole. I don't actually know this, but are you getting to the level of designing placement in the data center? How low-level are you having to solve that problem, versus relying on vendors or other services?

Dane Knecht [00:22:48]:
In order to get to the cost structure we need to be able to offer things like our free services, we rely on very few vendors, and almost everything is custom-built for us. Even very early on, when we were very small, we went with ODMs and built our own boxes, where we got to choose every chip, everything that went inside them. We've run our own custom Linux kernel for years, and that goes all the way up the stack: at times we used open source software, but at this point most things have been rewritten, just because what Cloudflare needs is so different from anyone else. We try to control the supply chain of a request, end to end, as much as possible.

Kshitij Grover [00:23:25]:
So from a hiring standpoint, I imagine you're thinking a lot about team composition. Does that mean you need pretty deep domain expertise for a lot of these projects? It feels like the sort of thing you can maybe project out a couple of years, but I imagine for some of these things you only realize you need to really optimize a component once you get there, and you might not have the experience on the team to do so. Has that been a challenge, or do you find that hiring really fast-growing generalists has gotten you a long way?

Dane Knecht [00:23:54]:
Distributed systems engineers have always just come to us, because we have some of the hardest problems on the Internet. We have a harder time hiring people who wanna work on the dashboard and UI, just because people tend to gravitate toward where the hardest problems are, and for us that generally is how you run an edge of this size. So I think we've been very fortunate in that respect, and our blog deserves a lot of the credit there by exposing those technical problems; it really gets people excited to come and work on them. That's really never been a problem for us.

Kshitij Grover [00:24:27]:
It sounds like the expertise comes naturally: people who see a blog post about some low-level optimization you've done today will gravitate towards Cloudflare in the future. Maybe let's talk about AI. Cloudflare has invested in AI; there's, for example, AI Gateway. And in fact, today I was reading the blog, and Meta announced a new model that I think is already available on Workers AI.

Kshitij Grover [00:24:56]:
So what I'm seeing, just from my read, is a lot of focus on simple developer experience: trying to take best-in-class developer experience, whether it's OpenAI's or a different company's, and translate it into the broader ecosystem. So I'm curious, what is the Cloudflare thesis around AI, and how do you think more broadly about the next decade of technical investment in AI?

Dane Knecht [00:25:19]:
We wanna make AI something that's just part of every application. It's not that you're going out to build an AI application; you're building an application that solves a business function, and you want to use AI in it to further that goal. We wanna make it easy to use right inside the platform you're already building on, and abstract things away: you don't have to go get a server with access to a GPU and figure out how to run a model. We take all those things, bundle them up into a serverless primitive, and do all the hard work to make sure the models are available to you, scheduled, and able to scale out. And when Meta releases a model, there's a good chance we've already been working with them and are ready for it to go live as soon as they post about it, so everyone has access to the latest and greatest there as well.

Dane Knecht [00:26:07]:
The thesis is just to make it easy for people to build AI-enabled applications. Today we talk a lot about performance, just because of how long it takes tokens and things like that to generate. But what we're hearing more from companies that are actually building AI-enabled applications is that they're getting even more pressure than before around data sovereignty: companies want their inference to stay in country. They want their citizens' data to stay there, and we naturally allow you to do that inside the platform today.
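
As a concrete illustration of that serverless-primitive framing, here is a minimal sketch of running a model through a Workers AI binding from a Worker. The binding name `AI`, the model id, and the request shape are assumptions based on the documented `env.AI.run(...)` pattern, so treat this as a sketch rather than a definitive example.

```typescript
// Sketch of serverless inference from a Worker through a Workers AI binding.
// The binding name "AI" and the model id are illustrative.
export interface Env {
  AI: Ai;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const { question } = (await request.json()) as { question: string };

    // No GPU provisioning or model hosting: the platform schedules the model
    // and scales it out; the Worker just calls it like any other binding.
    const result = await env.AI.run("@cf/meta/llama-3-8b-instruct", {
      messages: [
        { role: "system", content: "Answer in one short paragraph." },
        { role: "user", content: question },
      ],
    });

    return Response.json(result);
  },
};
```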

Kshitij Grover [00:26:38]:
That's interesting, because that's a place where Cloudflare is uniquely able to help, something that would be hard to replicate and is easier for you given the existing infrastructure investment. So you mentioned that you wanna make AI a natural part of building an application, as opposed to a specialized workload you have to think a lot about. How does that translate to things like vector databases? Do you think a lot of the implementation details, or even just having specialized data stores, will converge over time? Or do you think there are going to be very specific technologies catered towards AI workloads?

Dane Knecht [00:27:15]:
For most things, they'll converge over time. A vector database, in some instances, is just semantic search, which has been around for a very long time, now packaged up as a vector database. I expect those concepts to get built into almost every database. But where there are specialized use cases, I think we wanna provide developers with those tools. In general, though, the more everything about AI becomes part of the natural developer workflow, the better for AI and the better for actually starting to see real business value come out of it.
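
To illustrate the "semantic search packaged up as a vector database" idea, here is a minimal sketch that embeds a query with a Workers AI embedding model and then does a nearest-neighbor lookup in a Vectorize index. The binding names, the model id, and the index contents are assumptions; this is a sketch of the pattern, not a definitive recipe.

```typescript
// Sketch: semantic search = embed the query, then nearest-neighbor lookup.
// Binding names ("AI", "INDEX"), the model id, and the data are illustrative.
export interface Env {
  AI: Ai;
  INDEX: VectorizeIndex;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const query = new URL(request.url).searchParams.get("q") ?? "";

    // Turn the query text into an embedding vector.
    const embedding = await env.AI.run("@cf/baai/bge-base-en-v1.5", {
      text: [query],
    });

    // Nearest-neighbor search over previously inserted document vectors.
    const matches = await env.INDEX.query(embedding.data[0], {
      topK: 5,
      returnMetadata: true,
    });

    return Response.json(matches);
  },
};
```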

Kshitij Grover [00:27:51]:
There's a lot of talk now that we're well into the AI investment curve, and lots of companies have invested a ton of R&D into AI, especially the larger ones. What's your personal take on business value versus this being a large experiment? Is this a 2-to-3-year return cycle? A 10-to-20-year return cycle? I'm curious how you think about that, especially from the viewpoint of building the developer platform, where you're probably enabling a lot of smaller companies to use it effectively.

Dane Knecht [00:28:19]:
I think it's somewhere in between. Some companies are quickly finding those really niche use cases they can apply AI to; I'm hearing about a lot of companies that are able to answer 50% plus of their support tickets with AI, with higher satisfaction. But then there are companies with complex code bases where things like Copilot provide some benefit, but they're not writing the applications and the code for them. I think the more people focus on the business processes that exist today and can have AI applied to them, the faster we'll start to see a return, through inference, on all the training that's been done.

Kshitij Grover [00:28:55]:
It's interesting, because I think you're right. People want to swap something they do almost one-for-one with AI and hopefully have it be cheaper, faster, and more efficient. One thing that strikes me as a little strange in that vein is that lots of people are generating code with AI, and code seems like an intermediate artifact. It has a lot of good properties, you can inspect it, but you could also imagine AI just accomplishing the task, like updating your static site instead of spitting out a bunch of JavaScript and CSS. Does that resonate? Do you think these current output formats are sticky, or do you think we'll see some leap moments where the models are no longer generating the things we're used to them generating?

Dane Knecht [00:29:39]:
The more you go towards those tasks, the better. I don't think anybody needs another chat interface. Chat was a great interface for showing off the power of it, and it's amazing what it's done for the industry as a whole. But I think the real value will come from completing tasks and actually performing those business functions end to end, and there's a lot of exciting work with agents that can do multiple steps and, in natural language, solve those tasks.

Kshitij Grover [00:30:08]:
Well, this has been a great conversation. I wanna end on this: what are you most excited about at Cloudflare looking forward? It could be a product launch, something you're deepening, or even something unrelated to a technical product.

Dane Knecht [00:30:22]:
Cloudflare is known for our innovation weeks, our launch weeks. Our first one ever was around our birthday, and so every year we have Birthday Week, where we basically launch things that are really focused on giving back to the Internet. We're deep in the planning cycle right now, and I'm really excited for some of the announcements that are gonna come out of that. Definitely a lot of things around AI that, hopefully, we'll be able to bring to our customers.

Kshitij Grover [00:30:45]:
Awesome. Well, I know you can't say much more than that, but that sounds exciting. It's been great having you on the show. Thanks so much, Dane.

Dane Knecht [00:30:52]:
Yeah. Thanks a lot.