Braintrust by Cortex

Nathen Harvey, who leads the DORA research program at Google Cloud, joins Ganesh Datta to explore how AI is reshaping software delivery—and why the fundamentals matter more than ever. Nathen shares insights from DORA's 2025 report on AI adoption, the surprising finding that AI acts as an amplifier of both good and bad practices, and the critical shift teams must make from chasing elite performance to driving elite improvement.

Nathen and Ganesh discuss value stream mapping as a planning tool for 2026, why SLOs are becoming more important in an AI-first world, the trust paradox in AI adoption, how to use AI as a catalyst for cultural change, and why measuring lines of code generated by AI misses the point entirely. They also explore how foundational practices like working in small batches, strong version control, and user-centric focus become even more valuable when AI is writing more of your code.

What is Braintrust by Cortex?

Candid conversations with the builders shaping the future of engineering.

Braintrust dives into the operational realities of running high-performing engineering organizations, from production readiness and migrations to AI adoption and operational excellence.

Hosted by Ganesh Datta, CTO & Co-founder of Cortex

Nathen Harvey:
The actual number and comparing yourself to another team, these things aren't super valuable, but what is super valuable is contextualizing those numbers and figuring out, what do we need to fix as a team? What do we need to improve? And then doing that work.

Ganesh Datta:
You're listening to Braintrust by Cortex, where we explore how engineering leaders blend AI, platforms, and culture to build high-performing software teams. I'm your host, Ganesh Datta, CTO and co-founder of Cortex, an internal developer portal designed to help engineering teams ship reliable software faster with AI. In each episode we go deep with CTOs, VPs of engineering, and technical leaders who've been in the trenches, navigating the tension between speed and quality, building reliability at scale, and figuring out how to lead through major platform shifts. Whether you're running a team of 10 or 1,000, this is your space to learn from people who've made the hard calls and lived to talk about it.
Hey, Nathen, welcome to the podcast. Excited to have you on. I'm Ganesh, I'm one of the co-founders and CTO at Cortex, an AI-powered developer portal helping you achieve engineering excellence. You want to give us a quick introduction of who you are and what you do?

Nathen Harvey:
Yeah, sure. Hey, Ganesh. It's super exciting to be on the podcast with you, and it's great to see you again. And hello everyone, my name is Nathen Harvey. I work at Google Cloud and I run a program called DORA. You might know DORA. It's a research program that looks into how do technology-driven teams and organizations improve. Our tagline is, "We want to help you get better at getting better." This is all about a journey of continuous learning and continuous improvement.

Ganesh Datta:
I love that you guys have a very specific definition of what that entails, and I love the fact that it's focused on the improvement piece of it and not necessarily the measure piece of it, so I love that.

Nathen Harvey:
Yeah. It's funny, because a lot of people, I don't know if funny is the right word, but a lot of people, when they first learn of DORA, identify it with the DORA software delivery performance metrics. Sometimes they just call them the DORA metrics. The reality is that there's a lot more than just those four metrics, which are now five, by the way. There's a lot more to the research program than the metrics. And I think it's really important that we remember, yes, measurement is important and having metrics helps us understand and orient where we are and where we're headed, but it's the improvement work that you do that goes into those metrics that is the more important piece. And for me, the much more interesting piece as well. In most cases, probably the harder piece as well.

Ganesh Datta:
You already kind of touched on what I think may be your answer for my first question, but I'll ask it anyway. If you could fix one misconception that people have of DORA, you wave a magic wand, what would that be?

Nathen Harvey:
Because I already gave you one answer, I'm going to use that as an opportunity to give you a second answer. So, I'm going to fix two things. First, it's more than the metrics. But then I think the bigger potential misstep that we took: DORA, for many, many years, has published sort of industry benchmarks. And when we look at those software delivery performance metrics in particular, we do a cluster analysis and come out and say, "Oh, well, here's what a low performer looks like, a medium performer, a high performer, and an elite performer." The reason I think that's a mistake is that too many teams, and more importantly, too many leaders just look at that and jump to the wrong conclusions. They say, "Look, every team in this organization has to be an elite performer. So, if you're not, you're in trouble, and let's get you there. You got to get there."
Whereas I'd much rather see elite improvement than elite performance. It is really about understanding the context that a team is working in, and how do you drive improvement over time that is sustainable? So, the actual number and comparing yourself to another team, these things aren't super valuable, but what is super valuable is contextualizing those numbers and figuring out, what do we need to fix as a team? What do we need to improve? And then doing that work.

Ganesh Datta:
That makes total sense. Yeah, I know we've talked about this in the past. The idea that you should be focused on being a better version of your own team, your own organization, and the hierarchy of benchmarking, if you will. As a team, you think about yourself and your previous state and see whether your team is getting better. If you're a VP or CTO, then maybe those industry benchmarks are useful as a, "Hey, what should I be thinking about?" Not so much, "Oh, my org sucks and that team's great." It's more like, where could we be? Where can we try and improve? I think it really comes back to that improvement, so I really love that.

Nathen Harvey:
Yeah, absolutely. And the benchmarks, certainly they have a purpose. They are valuable. And if nothing else, they're valuable in saying that, "Look, teams and organizations in my industry vertical, they're able to achieve this level of performance." So I know it's possible. And I can't use things like the industry that I'm working in or some other constraints as a reason not to try to improve. It is possible for teams that look like mine, so let's go on that journey now.

Ganesh Datta:
Yeah, it's kind of like the art of the possible. We held our conference recently, IDPCON in New York, and I think one of my takeaways was exactly that. People were really interested in learning what other people in their own kind of industry type orgs were doing. For example, we did a panel on developer experience in media. We did another one about the platform engineering self-serve ideas, but specifically in regulated industries. And a thing that we took away from that was people who also worked in those industries were very interested. Because I think people have the same pain. And so they look at that and they say, "Well, if they're able to do this kind of thing, then we should be able to do that kind of thing too." So, I think that is really interesting.
When we talk about improvement, obviously you guys released the latest report with a focus on AI, as I'm sure everyone has been very excited about. Do you feel like there are a lot of organizations who were making improvements over the last couple of years, and then investments in AI unwound that? Or do you find that it's actually making it easier for people to improve? Where do you think that has left organizations' trends?

Nathen Harvey:
Yeah, this is a really interesting question. I think one of the big takeaways from the 2025 DORA report is that AI is an amplifier. And so, if you're working in an organization that has those foundational principles and practices in place, AI is going to amplify that and really help you start to accelerate even more. On the other hand, if you're working in an environment that's very disconnected, there's a lot of friction and chaos that happens within your organization, introducing AI is probably going to make the pains of that chaos felt more acutely. So, it's going to amplify that pain as well. I think it's really important to think about what we're doing organizationally and how we're bringing in AI. Of course, there are mandates from above. We have to use more AI. There's practitioner-level excitement. It's so much fun to use AI, and we can do different things that we maybe couldn't do or didn't want to do before, so that's really exciting.
But really channeling the energy and the focus of that on some of those underlying core principles. In DORA we've always talked about having an organization that has a good climate for learning. Introduce AI, you still want that climate for learning. You need fast flow and fast feedback. You still want those things. And maybe you want them even more today with AI that's accelerating in new ways and giving us new ways of working that we didn't have before.

Ganesh Datta:
Do you think organizations have realized that? Do you see organizations investing more into things like infra, core platform or CI/CD or security and production standards, all the things that in theory give you the right foundation? Have organizations realized that, or are we still too early for that?

Nathen Harvey:
I think yes, both we're still too early, but also we're starting to see some signals of organizations starting to realize that. As an example, one of the things that obviously DORA looks at is software delivery performance. And when we look at 2024, what we saw was that as you increase AI adoption, both throughput and stability get worse. So, you're shipping less frequently and those shipments tend to be rolled back more frequently. In 2025, if we look at those same metrics, what we see is throughput is actually increasing. So, you're shipping more frequently. That instability issue still persists. But I think that's kind of an early signal of us starting to adapt, and thinking beyond just using AI to generate code, but thinking about the entire delivery lifecycle. I think, of course, it should also be said that the tools are adapting as well. The models in 2025 are better than they were in 2024, and it's not just the models, it's also the ways that we're interacting with them.
And we're learning about maybe better prompt engineering practices and maybe even going from prompt engineering to context engineering. As an industry, we're evolving. So, I think it's still very early days, which is to say that if you're just getting started on your journey, you're not far behind, you're okay. And you can take lessons from those that are ahead of you and maybe incorporate those into what you're doing as well.

Ganesh Datta:
Yeah. I mean, for what it's worth, we're seeing the exact same thing in our data. This is a little bit of a sneak preview of a report that we're going to be launching as well. We did a survey of engineering leaders and we looked at our own data from our customers and we saw the exact same thing. Throughput is increasing. It's a meaningful lift. It's not a negligible amount of lift, but there is also a non-negligible increase in resolution times and incidents and things like that, particularly in medium and high-severity incidents. So, there's definitely some sort of impact to it. Do you find that organizations are okay with that trade-off? Are organizations thinking about, "Hey, the throughput is worth it and we'll figure out the incident thing through process and improvement over time and we'll take the benefits of it"? Or is it keeping people concerned? Are people slowing down and thinking about their plans for 2026 in a different way as a result of that?

Nathen Harvey:
Yeah, I would say that people are definitely concerned, but I think you're onto something here where, look, when you introduce a new tool or a new way of working, the first thing you should expect, regardless of what that tool or new way of working is, is a dip in productivity. You do something new that changes your workflows, changes your habits, it's going to take some time to adapt to that. Eventually, and we don't really know, it's obviously up to the team how long is eventually. It might be a day, it might be a week, it might be a couple of quarters, but eventually you're going to start turning that around and hopefully gaining some productivity improvements from that new workflow or from that new tool. But I also think that as organizations there's always, always, always pressure to move faster.
And so, if we see both throughput and stability dropping off, I think it's only natural that as a team we focus on, "Well, we better fix that throughput first. Let's make sure that we can ship quickly." Because also when you can ship quickly, if something does go wrong, at least you know you can quickly remediate it. And so, maybe fixing throughput first and then going back to shore up your stability and your feedback cycles and so forth, I think that makes a lot of sense and it's something that we'll just naturally do.

Ganesh Datta:
That's interesting. I mean, sometimes I think about it the other way, maybe here's a counterargument. If an organization is moving too fast and they don't have the policies or processes in place to mitigate incidents and whatnot, could that lead to a self-fulfilling cycle where you're pushing more code and then there is an incident, because there's natural instability as we're learning this new process, but you didn't actually write that code yourself and now all of a sudden you don't know what to do about it when there's an incident? Is there value, I guess, in investing in some foundations before you let AI accelerate the throughput?

Nathen Harvey:
Oh yeah, absolutely. I definitely think that we have to look at the entire system, and it's not just let AI write more code and we ship that straight into production. That'll improve our throughput, right? Yeah, it probably will, but whoa, please don't do that. So I think this is also why one of the things that we look into in our research is what capabilities and conditions go along with AI adoption to really drive performance. And when it comes to this, we found things like working in small batches is a really powerful way to amplify the effects of AI. So yes, AI can write code a lot faster for you, but we shouldn't give up on those foundational principles of things like working in small batches. Let's ship small changes to production. Maybe with AI we can ship twice as many small changes, and because they're small, they're easier to roll back or to reason about if something goes wrong.

Ganesh Datta:
It sounds a little bit like the things we were saying were good three years ago are still good.

Nathen Harvey:
Yes, yes. I would definitely agree with that. In fact, I would say that some of the things that we've learned over time, like they're durable, they are foundational principles for a reason. And those are things that we should be investing in at least as much, if not more, now that we have AI that's come along and really ... The biggest visible impact from AI is you can generate a whole lot more code in much less time. So, how are you going to manage all of that change? You go back to those foundational principles that we've been talking about and improving over the past decades with software engineering.

Ganesh Datta:
That's exactly what we're seeing in our customers that are adopting AI successfully as well. I think it's an increased emphasis on the foundations, because the way I think about it is, like you said, coding assistants are really good at writing code, and so much of our time as engineering leaders has been spent thinking about that problem, like how do we write more code and increase the throughput and all that stuff? Coding assistants are helping us do that naturally in a much easier way. So, if that's the case, then I can spend more time thinking about the foundations in order to accelerate the throughput. And so, for most of the human-spent time, it's almost a higher bang for your buck to spend that time on the foundations, everything from building resilient systems to thinking about things like unit testing and monitoring and SLOs, very basic fundamental principles that we've always cared about. Those things make it easier for us to adopt AI in a more meaningful way.

Nathen Harvey:
Yeah. And to get more out of the AI that we are adopting, which is, I think, super important. And one of the things that I think is really important here and is well worth the investment is going through an activity like value stream mapping to really understand what are all of the steps, all of the handoffs, all of the people and processes involved in getting a change from an idea, all the way through to a customer saying thank you. Let's map out that entire process, make sure we understand it collectively, and then we can identify areas where we want to improve because every time I run a value stream mapping exercise with a team, 100% of the time there are crazy aha moments, there's friction that's identified, there's duplicative work that's identified. There are steps that are happening in that process that simply don't need to happen.
We don't even need to think about automating them. We can just get rid of those steps. They're not adding any value. They're not helping us in any way. Let's just stop doing that. And having a to-stop list is way better than a to-do list, I would say.

Ganesh Datta:
Do you have any examples you can share from value stream mapping exercises you've done recently where people have found obvious things like that, where it's, "Hey, we should just chop this whole thing, or there's obvious room for improvement"?

Nathen Harvey:
I mean, the one that always sticks with me, it kind of haunts my dreams, honestly. There was an organization that I was working with, and like many organizations they have a bunch of teams that touch the code as it makes its way into production. One team was the development team, the software engineers that write the code, but then they also had a release team that packaged up the code and shipped it out. Well, in their process the development team, the software engineers would write the code, they'd turn it into a package, they'd put it into a test environment, they'd test it, make sure it was good to go before they handed it off to the release team.
The release team wouldn't take the packages from the software development team. They wouldn't even take the code from the software development team. They said, "We want to make sure that we're building in a pristine way, so we're going to copy your Git repository over here. And now we have a second copy of the Git repository and we're going to build our own artifacts from this and then start testing those." To me, it was clear that the release team didn't really understand how version control works. They were making a complete copy of the repository. And so, there was definitely waste in that that we could completely eliminate.

Ganesh Datta:
That's incredible. It sounds like this exercise ... actually, taking a step back: a lot of engineering leaders are in their planning for 2026. They're thinking about roadmap, they're thinking about tech debt, they're obviously thinking about AI. Is going through a value stream mapping exercise like this a big time suck, like, "It's November, I have two months, I have vacation"? If I'm an engineering leader right now, do I stop what I'm doing and go through this exercise to figure out what to do for 2026, or is there a better way to attack that?

Nathen Harvey:
So, first my advice is yes, stop what you're doing and get together with your team and run a value stream mapping exercise. I think it's one of the most valuable things and valuable investments that you can make. In a day, bringing together representation from that cross-functional team, you're going to discover so many rich insights. And frankly, as we head towards the end of the year, maybe work is slowing down a little bit, it's a great reason. Maybe you have a holiday party. The day before the holiday party, get together and do a value stream mapping exercise. That's going to build better connections. You're going to have more fun at the party because you did that. Do that. Absolutely. Absolutely.
And I also think, this is another thing that we found, if you sort of focus in the wrong area with your prioritization and your improvement work, all of the gains that you get there in that small pocket, they're going to be eaten up by the waste and the chaos that you have throughout the rest of the process. So, don't do that. Find out where your biggest constraints or your biggest bottlenecks are and make sure that you're improving those so that the overall system can get better.

Ganesh Datta:
I'm sure this differs from organization to organization, but if you had to pick your top two or three places, like if I'm an engineering leader and I say, "Just tell me where I should start or which part of the value stream I should focus on," what would you recommend people do? I mean, ideally people map the end-to-end lifecycle, like you said, from the line of code to a customer saying thank you. But if I have one hour and I'm just going to focus on one bit, where do you recommend, especially in an AI-first world?

Nathen Harvey:
I would definitely start with ... Well, the place I would start if I were your leader, if I was that leader is I would go to the team and I'd say, "Tell me where it hurts the most." You probably don't even need a map. The team knows where it hurts the most already. And then the next step would be to build out the map. And if you're doing that, you only have a limited amount of time. I would, and this is a bias that I have with DORA. DORA has this sort of center of gravity around software delivery performance. So, I would start with line of code gets committed to the repository, line of code lands in the production environment. Let's map that part of the journey, that part of the process, because there's often waste in that part of the process.
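
(A minimal sketch of what measuring that commit-to-production segment could look like, assuming you can export commit and deployment timestamps from your own tooling; the data and field names here are purely illustrative.)

```python
from datetime import datetime
from statistics import median

# Hypothetical export: each change records when it was committed and when it
# landed in production. Field names and values are illustrative only.
changes = [
    {"sha": "a1b2c3", "committed_at": "2025-11-03T09:15:00", "deployed_at": "2025-11-04T14:02:00"},
    {"sha": "d4e5f6", "committed_at": "2025-11-03T11:40:00", "deployed_at": "2025-11-03T16:25:00"},
    {"sha": "0718ab", "committed_at": "2025-11-05T08:05:00", "deployed_at": "2025-11-07T10:30:00"},
]

def lead_time_hours(change):
    """Hours from commit to running in production for a single change."""
    committed = datetime.fromisoformat(change["committed_at"])
    deployed = datetime.fromisoformat(change["deployed_at"])
    return (deployed - committed).total_seconds() / 3600

lead_times = sorted(lead_time_hours(c) for c in changes)
print(f"median lead time: {median(lead_times):.1f} hours")
print(f"slowest change:   {lead_times[-1]:.1f} hours")
```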

Ganesh Datta:
That makes sense. And especially if we're talking about more throughput and more code, that is probably the part of the process where you're going to see the most impact. If you can unlock that part of the SDLC, you're going to see more throughput naturally flow through as a result. And I think that the takeaway for listeners as well, and let me know if you agree with this, is that a lot of the investments are going to look almost unrelated to AI in a lot of ways. It's going to be things like improving your CI/CD and your build process. Even very basic things: if you haven't set up a linter, you don't want to give your teams more nitpicky things to review when really they should be focused on the code that AI is writing and thinking about the actual impact of that code versus focusing on small things. It feels like very non-AI-related things are actually going to pay the highest dividends within that part of the stream.

Nathen Harvey:
I would agree with that for the most part. I think it's important that we remember why we're building, not just what we're building with. We don't get together and say, "Man, if only everyone used Visual Studio Code, instead of Ganesh over there who won't give up Emacs." Stop, that doesn't matter. And you're not necessarily going to improve the flow by changing which particular tool you use. The editor choice should disappear into the background. I think the use of AI should disappear into the background, but we should be focused on, how do we improve this part of the system? Maybe it's with AI, but maybe, to your point, it's with something else that we just haven't prioritized or haven't brought into our toolbox yet.

Ganesh Datta:
Yeah. One of the things that we're seeing in that bucket, not in this part of the SDLC, is that SLOs, I think, are becoming more and more important. And I think the reason for that is, like you said, not losing sight of why we're building things, it's for the customer. And so, regardless of the productivity gains we're going to get or the throughput or whatnot, the SLOs in some way capture the essence of our contract with our customers and our uptime and all those things. So if we can get better about defining, "Hey, here's what we're going to hold ourselves to," SLOs give you a point to say, "Okay, we're going to pause and go back and focus on tech debt or the things that are leading to this." It feels more important than ever that you have those kinds of guardrails that tell you, "Hey, it's time to slow down."
Because I think a lot of the times people are really focused on, "How do we speed up? How do we speed up?" Now, the importance of things telling us, slow down, is becoming more and more important than ever before.

Nathen Harvey:
Yeah, absolutely. And agreeing on what I call the consequences of overspending your error budget in advance, at a time when it's not, "Oh my gosh, everything has gone terrible. We got to fix this right now." No, we have to agree on what we're going to do when we do violate those SLOs. How are we going to respond? Let's come up with a pact, like an agreement. And then when we do overspend that error budget or we miss those SLOs, and let's be clear, we will miss our SLOs. It's just part of how work happens, right? But we agree that we're going to hold steadfast to that pact, to that agreement of how we're going to change in order to recover that SLO that we lost and get back to a good working state. Whether that's shifting some of our capacity from building new features to building better feedback mechanisms, or slowing down releases, whatever you agreed to, stick to that agreement.
And this is also why it's important that it can't just be say the SRE team or the software engineering team that writes those SLOs and comes up with the consequences of the error budget. We have to do this collectively as an organization, so product has to be involved in those conversations.

Ganesh Datta:
Exactly, because it implicitly captures the impact on the customer. So, we could sit here and talk all day long about the productivity improvements or instability or whatnot. And when we say instability, really what we mean is some sort of thing that is causing an issue to customers or some downstream thing. And so, having SLOs that capture that and tell us, "Hey, it's time to slow down," I think is important for exactly those reasons. One of the things that-

Nathen Harvey:
Also, sorry Ganesh, the other thing about SLOs that I think is so important that oftentimes gets glossed over is that it helps you sort of level set the right level of reliability. An SLO doesn't say 100% available, 100% fast all the time. No, we use SLOs to figure out, what's the acceptable level of reliability that keeps a customer, a typical customer, happy enough that they keep coming back and keep using the service? And that's very different than striving for five nines across every measure that you have. These are two very different things. We have to take that user-centric focus and give them just enough reliability. That's a perfectly good goal.
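
(As a rough back-of-the-envelope illustration of why those are two very different things, here's a small sketch of the downtime each availability target allows over a 30-day window; the targets are illustrative, not recommendations.)

```python
# Back-of-the-envelope: how much downtime does an availability target allow
# over a 30-day window? The targets below are illustrative, not recommendations.
WINDOW_MINUTES = 30 * 24 * 60  # 43,200 minutes in 30 days

for target in (0.999, 0.9999, 0.99999):
    error_budget_minutes = WINDOW_MINUTES * (1 - target)
    print(f"{target * 100:.3f}% availability -> "
          f"{error_budget_minutes:.1f} minutes of downtime per 30 days")
```

Roughly 43 minutes a month at "three nines" versus under half a minute at "five nines", which is why picking the user-centric target matters so much.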

Ganesh Datta:
And I think a lot of that comes back to the fact that people don't internalize that burning down your error budget is supposed to trigger a pause. If we're not meeting that SLO, we have pre-agreed that we are going to do X, Y, and Z to get us back to a healthy state. And so, if you don't treat that part of the process with the intentionality it deserves, then sure, let's do five nines and we're out of our error budget in 30 seconds, but who cares? We'll just stare at this dashboard every day and be unhappy with it. But if you take seriously the part about, we're going to stop and fix this thing, then you will naturally end up in a state where you're saying, "What is the bare minimum? What is acceptable?" And we will do that, because we need to find those trade-offs.

Nathen Harvey:
Yeah. And just like with value stream mapping, there's always aha moments. When we sit down and actually have these conversations and start to consider things, crazy aha moments pop up. I'll give you another quick example. There was one team that we were working with that had some data processing pipeline, and we said, "All right, well, what's the SLO for getting all of that data?" It was a batch process. "How fast does it have to be?" "Well, it's got to be done within two hours." "Okay. Is it always done in two hours?" "No, sometimes it takes longer." "How much longer?" "Well, sometimes it takes two days." "Oh, well, what bad happens when it takes two days?" "Oh, nothing. It's fine. We just deal with it."

Ganesh Datta:
Oh, my God.

Nathen Harvey:
Okay. So, your SLO should not be two hours. Your SLO can maybe be two, maybe even three or four days. Let's explore that and make sure that we're not wasting engineering capacity and other capacity on meeting a two-hour thing that could be a two-day thing.

Ganesh Datta:
100%. It just comes back to intentionality. And speaking of intentionality, one of the things that ... Actually, there are two kind of maybe disconnected things in the report that I want to talk about. And I think there's probably some overlap in the two. The first one was about trust, like the trust and productivity paradox that you guys talked about, where some teams really trust AI, some teams really don't, and some teams are kind of in the middle, very neutral about it. And then you also talked about the different archetypes of teams, and you had your teams that were high performers, teams that were good at adopting these new technologies and whatnot. Was there a connection between those things? Teams that are those high achievers, are they also high trust? How do those things interplay? My gut is that there's probably something between those two categories that people should think about.

Nathen Harvey:
Yeah, it's interesting, because we do see nearly universal adoption of AI and very high levels of individuals saying, "It's making me more productive. It's making my code base have higher quality." But about 30% of our survey respondents said that they have little or no trust in AI. And this kind of reminds me, first, I think 30% is probably about a healthy number. You shouldn't expect to take whatever the AI gives you and ship it, whether that's code, or the email that you want to send to your boss or to a customer, whatever that is, we should verify. We should have good feedback mechanisms in place to validate that this is what we actually want to ship. So, I think that's healthy, first of all.
And then yes, we did look at these different team level profiles. One of the interesting things, we have teams that we labeled things like the legacy bottleneck or the harmonious high achievers. One of the things is that AI adoption was kind of constant, nearly constant across those. And so, this really tells us that it's not about how much do you trust AI or how much do you use AI? That's all there, but we're still seeing varying levels of performance outcomes together with that AI, which is really what led us to dig into, well, what are those capabilities and conditions that go along with AI adoption to really drive those outcomes like team performance and product performance that we care about? And that's what led us to this DORA AI capabilities model, which are the seven capabilities and conditions that we see are amplifying the effect of adopting AI in a positive way on those outcomes that you want.
So, AI by itself can help with something like, I don't know, software delivery throughput. But when you combine it with the capabilities that we identified, software delivery throughput gets even better. So it's really important, I think, that we take that holistic picture.

Ganesh Datta:
Yeah. And I think like we were talking about earlier, a lot of those AI capabilities were ... Or a lot of the capabilities that lead to AI adoption in a good way are, again, things that are very much not AI-centric, aside from things like having a strong stance on AI and internal data and things like that. But I think everything else was pretty ... Those are things that people should be doing anyway. Is that fair to say?

Nathen Harvey:
Oh, yeah. Strong version control practices: with or without AI, please have strong version control practices. A user-centric focus: with or without AI, please have a user-centric focus. Working in small batches, having a good quality internal platform. All of these things are sort of amplifying the effect of AI. And even something that is AI-specific. You mentioned a clear and communicated stance. Well, being clear and communicative about your policies, regardless of whether or not they're about AI, that's valuable for your organization. We also talk about having a healthy data ecosystem and giving AI access to that internal data. You want all of your systems to have access to internal data where it makes sense. You want to have a healthy data ecosystem. That is important, regardless of whether or not you have AI in the building. So yes, I think a lot of the capabilities that we talk about, while some of them have sort of an AI slant to them, if you squint a little bit or genericize them a little bit, they're definitely valuable, regardless of what technology stack you have.

Ganesh Datta:
Going back to the trust paradox, do you find that teams that are doing really well on those capabilities have a different kind of perception of AI? Do they trust it more because their processes lead them to trust it more?

Nathen Harvey:
Yeah, I would say that we don't really know enough. We can't see that in our data. I think, if I recall correctly, that the trust levels are pretty evenly distributed across those teams as well. So, it's not like the legacy bottleneck has a higher level of distrust relative to, say, the harmonious high achievers. I think they're about the same.

Ganesh Datta:
That makes sense. What's interesting in the adoption of AI that we've seen with some of our customers is, every single customer has adopted it. Our legacy traditional enterprises are being much more intentional about it in some ways. Would you recommend that approach to people? I guess one of the ways we're seeing people do it is, we're going to roll it out to 300 people and we're going to see how it goes, and then we're going to open it up to other teams. Some of our customers have done a thing where, kind of going back to the idea of investing in your foundations, you only get access if you can prove that your foundations are good. Would you recommend that approach? Would you recommend more of a fire-from-the-hip approach where everyone gets access? How do you think these organizations should be thinking about granting access to coding assistants?

Nathen Harvey:
Yeah. With that, I'm going to use one of the words that you've said repeatedly in this podcast, it's intentionality. We've got to be intentional about it. And that intentionality can be across a wide spectrum. Honestly, today, we're recording this in November of 2025. It is still very early in the AI-assisted software development world. Nobody can tell you what the best practices are, because there aren't best practices yet. And so, some of that intentionality could be, "Hey, we just want to go and learn. We want to create an environment where it's safe to experiment, safe to learn. And as you learn things, that it's easy to share those lessons across the organization." That's one really great way to approach it. You could take a more conservative approach and say, "We're going to give it to this team over here," and set up specific guardrails or specific ideas around exactly how they're going to use it and try to control it.
I would lean less towards the try to control and more towards the allow for experimentation, because I think that's what's going to give you the best results. The reality is that if I come in as a leader or an outsider and say, "Here are the three things that you can do with AI," I've already cut you off at the knees. There are so many other things that I don't have any insight into where your challenges are or what things you creatively will come up with. So I want to create that space for you.

Ganesh Datta:
Are there organizations you're working with that are thinking about the next stage of this, which is, "Hey, we're actually happy with the impact that it's having and we're struggling with scaling it up. We want to do more with coding assistants." How many organizations would you say are in that bucket and what are they doing differently?

Nathen Harvey:
I would say that almost every organization I'm talking with wants to understand the future and is looking ahead, like what comes next? And I get questions every day about like, "Well, how does this change the makeup of our teams, and where do we invest in hiring and so forth?" And the reality is that everyone is asking the questions and no one has the answers yet. So, I think it is important to be asking those questions and start thinking about those things, but I haven't seen a team yet that's like, "All right, we've got this unlocked. Let's move on to the next thing." I do think that investing in all of those foundational capabilities, a good quality platform, good psychological safety across the organization, all of this is really important. And when it comes to things like psychological safety and the culture of your organization and some of the practices there, I think honestly, my biggest piece of advice is, use this AI moment to help drive the cultural changes that you actually want.
The reality is that we oftentimes talk about tools and culture as two different things, but they're inextricably linked. They reinforce and amplify one another. And so this is an opportunity for all of us to say, "Hey, there are practices or ways of working that we've always wanted and we've struggled with." Let's use AI as the excuse for why we have to go start doing those things. Let's use AI as the reason why we have to collaborate more. Let's use AI as the reason why we do more pair programming or more test-first programming. Let's use AI as the reason why we do better documentation. All of these things are going to pay off and they're going to help change the culture of your organization, hopefully for the better.

Ganesh Datta:
I love that. I wish we had touched on this at the beginning of the podcast, because I could probably talk about it all day. We should record a whole episode just on tools and culture. I think that's a topic in and of itself, which I love.

Nathen Harvey:
Totally, totally.

Ganesh Datta:
We're starting to see that in our customers as well. I don't think people are even realizing that they're changing their culture in some ways, but they are reshaping their culture around AI. One of the conversations I had recently was, "Oh, we're expecting our product owners to do a better job of writing out the specifications and the expected behaviors upfront, because that allows coding assistants to do a better job of implementing them." I'm like, "Okay, well, actually you've changed your culture, because you're forcing, again, more intentionality about what is the change and what is the behavior and what should it do and how should it impact our customers? And naturally that will lead to better outcomes." So sometimes I wonder, how much of the better outcomes we're seeing are actually people reinvesting in their own processes because they're handing things off to a machine and they're just codifying things upfront? There's probably a whole investigation we could do on that piece.

Nathen Harvey:
Yeah, but let's not, because if it's making it better, great, great. Let's keep doing that. But I totally see that all the time. I was just talking with a friend the other day, a lot of senior engineers are starting to go down the agentic path and are writing CLAUDE.md files or Gemini.md files that explicitly state, "This is my way of working." And I find it ironic that these senior engineers would never write it down for a junior engineer so that they could learn, but they will write it down for a bot so that the bot gives them better outputs. But the side effect is they've written it down and now junior engineers can take advantage of that knowledge. We've made that knowledge much more explicit for everyone in the organization, including the AI robots that are writing code for us. So, I think it's awesome.

Ganesh Datta:
There are so many versions of that. I've seen any number of comments where it's like, "Hey, we should do a better job of documenting things. Hey, I took a stab at documenting this really complicated file, because now our AI can do things with it." Or, "Hey, we should write more tests in this part of the code base, because if AI is writing more code, it'll be easier for us to ship changes there." I'm like, "We're going back to the basics, guys. This is stuff we should have been doing all along. But you know what? If this is what pushes people to make those changes, I'm all for it." Because like you said, it kind of goes back to the initial thing about, do you focus on throughput first, or do you focus on the foundations first? It's almost like a chicken or the egg. Most people will probably end up doing both.
I think the really interesting thing that we're starting to see is this new category of folks who have done pretty well on the foundations and even on driving adoption of AI. And now they're thinking about, how do we make the most of our AI tools? So it's like, okay, well, we know that the best repos are the ones with those agent instruction MD files. We know that the ones using automated code review tools, at least as a first pass, are doing better, whatever those things are. And so we're kind of going back to the same concept of, well, we need to standardize the way we do things across all of our repos, specifically for these agents, but it kind of goes back to the same old process all over again. So that's been an interesting development in organizations that are more mature with AI coding assistants.

Nathen Harvey:
Yeah. And again, that's something that DORA has researched over the years as well. How do you spread knowledge through an organization? The quick TL;DR is: create communities of practice, not centers of excellence. You want to create a place where information and ideas can flow organically throughout your organization. And a center of excellence oftentimes becomes this sort of group that sits up on a high tower and is disconnected from the work. So, don't do that. Create the communities of practice that allow that knowledge to spread throughout the organization.

Ganesh Datta:
Yeah. And there's so many versions of that. I mean, I think about platform engineering teams, like the ones that work really closely with their SRE and security counterparts are generally more successful than the ones that work in a silo and so on. So, there's so many different things to think about there. I think we're coming up on time. My last question for you is, do you have any hot takes that are ruminating that maybe you want to share with our listeners about AI and the state of it in 2026?

Nathen Harvey:
For sure. I guess my hot take, because DORA and I always get asked, "What are the right metrics for us to think about with AI?": please stop measuring lines of code accepted or lines of code generated by AI. Or at least understand that all that's measuring is usage of the tool. It measures nothing beyond that. And I do wonder, though: if we go back to when you introduced a new IDE to your team, did you ask, "Which of these IDEs allows my developers to write more code? Is it Visual Studio Code? Is it IntelliJ, or is it Vim? Which one makes my developers the most productive?" We never ask that question. Stop asking it about AI.

Ganesh Datta:
I love that. There are so many people talking about that. All right, I love that hot take. Well, thank you so much for being on the podcast. I really enjoyed this conversation. Like I said, we may have to talk in the future about tools and culture. I feel like that's easily an hour-long episode on its own. Really appreciate it.

Nathen Harvey:
All right. Thank you so much, this was a lot of fun.

Ganesh Datta:
Thanks so much for listening to this episode of Braintrust. If this resonated with you, do me a favor. Share it with another engineering leader who's wrestling with these same challenges. And if you want to continue the conversation or learn more about how we're thinking about internal developer portals at Cortex, reach out to us at cortex.io. Thanks for listening, and we'll catch you on the next one.