What if your next competitor is not a startup, but a solo builder on a side project shipping features faster than your entire team?
For Claire Vo, that's not a hypothetical. As the founder of ChatPRD, formerly the Chief Product and Technology Officer at LaunchDarkly, and host of the How I AI podcast, she has a unique vantage point on the driving forces behind a new blueprint for success.
She argues that AI accountability must be driven from the top by an "AI czar" and reveals how a culture of experimentation is the key to overcoming organizational hesitancy. Drawing from her experience as a solo founder, she warns that for incumbents, the cost of moving slowly is the biggest threat and details how AI can finally be used to tackle legacy codebases. The conversation closes with bold predictions on the rise of the "super IC" - who can achieve top-tier impact and salary without managing a team - and the death of product management.
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Follow Today's Guest(s)
Connect with Claire on LinkedIn
Follow Claire on X/Twitter
Claire’s podcast How I AI
Check out Galileo
Try Galileo
Agent Leaderboard
AI is reshaping infrastructure, strategy, and entire industries. Host Conor Bronsdon talks to the engineers, founders, and researchers building breakthrough AI systems about what it actually takes to ship AI in production, where the opportunities lie, and how leaders should think about the strategic bets ahead.
Chain of Thought translates technical depth into actionable insights for builders and decision-makers. New episodes bi-weekly.
Conor Bronsdon is an angel investor in AI and dev tools, Head of Technical Ecosystem at Modular, and previously led growth at AI startups Galileo and LinearB.
00;00;00;00 - 00;00;27;16
Unknown
I look at the speed at which people are able to build things, and I think velocity will be a massive, massive differentiator. I think teams have got to get on this train because they're going to be competing on the ground, feature for feature, capability for capability. And if you don't embrace this, I just cannot imagine you don't get left behind.
00;00;27;18 - 00;00;47;14
Unknown
Welcome back to Chain of Thought, everyone. I am your host, Conor Bronsdon. And our guest today is Claire Vo. Claire is the Chief Product and Technology Officer at LaunchDarkly, the founder of ChatPRD, and also the host of the How I AI podcast. Claire, welcome to the show. It's great to have you here. Thanks for having me, I appreciate it.
00;00;47;16 - 00;01;13;13
Unknown
I love that you've got this diverse perspective and approach across AI. Not only are you diving deeper with other folks and exploring and creating content, not only have you founded your own AI-enabled application, but you're also building AI products as a leader at a scaling company like LaunchDarkly. And I have to imagine this gives you a varied perspective across the space, a unique vantage point.
00;01;13;15 - 00;01;34;27
Unknown
And that's exactly what I want to explore with you today: from the incredible product development velocity that AI enables, to how it changes the equation for risk, and why leaders need to be building their own vibe-coded apps just to keep up. And by the way, I'm doing a terrible job of that, so I'm feeling held accountable by this conversation already.
00;01;35;00 - 00;01;56;18
Unknown
So let's start, though, with this demand that we're seeing around agentic AI, something we've talked a lot about on this show, but I think it's really important to continue to dive into. And in particular, I think your perspective is one I want to understand deeper: with ChatPRD, you're seeing a significant demand for more agentic AI experiences over copilot-like models.
00;01;56;20 - 00;02;28;15
Unknown
From your vantage point, what are the key drivers behind this growing hunger for agents within AI that perhaps didn't exist six, 12 months ago? I think there are a couple of things that probably contribute to this rise in demand for more agentic experiences. I think the foundational one is people are just much more comfortable with the concepts of generative AI and AI products, and so they're able to wrap their heads around the things that were keeping them from adopting any AI products, let alone agentic ones.
00;02;28;15 - 00;02;53;22
Unknown
You know, where's my data going? How are these responses being generated? Who can I trust? Is my data secure? Does it have enough context? Is it going to hallucinate? If you don't have that foundational understanding of how these products work, you're certainly not going to embrace a form factor of the product that's a little bit more independent, a little bit more asynchronous, and a little bit more connected to your data and your products.
00;02;53;22 - 00;03;14;12
Unknown
And so I think, one, baseline comfort and understanding with AI definitely helps here. And then I think the second thing that we're seeing is working with AI tools is still work. You still have to, like, sit in front of some sort of tool and figure out, what am I going to prompt this thing? What can it do for me?
00;03;14;12 - 00;03;34;20
Unknown
What can it do well? It's still really coming from a push from the human in order to get these outputs from AI. And I think, you know, what people are really wanting when they come to an agentic experience is: I want to discover what you can do for me. I want you to be able to do a broad set of tasks for me.
00;03;34;20 - 00;03;55;09
Unknown
And once I set you off on a path, I want you to take it to its logical conclusion. And so I think there's this form factor or UX of the agentic experience that's actually a little bit easier to adopt than other kinds of AI products that I've seen. And so I think it's a little bit of the user experience as well that makes a difference.
00;03;55;12 - 00;04;19;09
Unknown
While the user experience may be superior, and I think will continue to become superior, there are considerations on the back end, as far as creation, that you therefore have to take on. You've, I guess, shifted the challenge left: instead of having it be in the copilot experience, it's earlier on in the development process, in the guardrails, in how you evaluate and improve those systems.
00;04;19;11 - 00;04;39;24
Unknown
And yet we're still in the early days of AI adoption, so this may not be the form factor that sticks long term. We don't know yet. Can you talk about the characteristics that you're seeing in leading-edge teams that are comfortable offloading more and more tasks to low-supervision AI like agents? What are the characteristics those teams have?
00;04;39;26 - 00;05;13;15
Unknown
Yeah, I think one is risk tolerance. And I don't think this has to do exclusively with AI. I think there are company cultures that are just much more risk tolerant and have much more of an embedded experimentation mindset. So if you're coming to this age of AI with a culture where you know what the appropriate level of risk to take is, you're willing to let people experiment and fail and optimize, you want to work towards outcomes, and you're willing to tolerate learning and sort of that growth path.
00;05;13;17 - 00;05;34;27
Unknown
Through that process, you're going to be set up really well, because foundationally, the number one thing I see with leading-edge AI adoption teams is they're just open. They're just more open. When someone says, hey, can you try this AI tool? They say they already have, it's so cool, or, I can't wait to do it, as opposed to other companies where they say, oh, we're going to isolate that.
00;05;34;27 - 00;05;54;17
Unknown
You're going to work in our codebase? Never, never, never. And you spend so much time up front objection handling and not enough time actually figuring out what works. So I think that risk tolerance is really important. And then I think there really is operational maturity, at least at large scale. You know, unless we're talking about startups.
00;05;54;20 - 00;06;21;19
Unknown
Startups have the advantage of being small; their codebases are relatively less complex, their customers are probably fewer and lower risk. They can do a lot more. But if we're talking about any company of scale that's adopting AI in a meaningful way, they're operationally mature in a way that their engineering practices already protect for quality, already protect for velocity, are already set up to make engineers productive.
00;06;21;22 - 00;06;45;28
Unknown
When you have that foundation, it's very easy to add in agentic engineers or AI IDEs or any of these sorts of automations that then accelerate, because the same practices, the same technologies, the same operations apply when you're adding these tools to your stack as when you're adding, you know, engineers or new tools to your stack. It all applies.
00;06;45;28 - 00;07;14;16
Unknown
And so I do see this combination of risk tolerance, operational sort of maturity, and then, honestly, top-down, or at least centralized, focus and accountability on adoption. So somebody has to say, it's my job to make sure we become the leading AI-powered engineering organization. And we've seen this lately at the CEO level. You know, we've seen all these, like, CEO edicts.
00;07;14;16 - 00;07;43;16
Unknown
So we're now this AI company. You know, that needs to happen somewhere in the engineering organization. At LaunchDarkly, it started a little bit with me. Bless them, they're stuck with me as an AI native whether they like it or not. But honestly, the shift that made the biggest difference in engineering adoption was we made one of our most senior, tenured engineering leaders and engineers the, like, sort of AI czar in the engineering organization.
00;07;43;16 - 00;08;22;00
Unknown
And that centralized accountability and week-to-week execution just makes the practical adoption of these tools a lot, a lot easier. So how did you decide who the right person was for that AI czar role? And what did you do to enable them to be successful? You might be surprised by the answer, which is I picked the AI skeptic. I like that. I picked, you know, not, I wouldn't say, like, the AI skeptic, but certainly somebody who wasn't as naturally inclined as maybe I am to be bullish on the AI opportunity.
00;08;22;00 - 00;09;05;24
Unknown
I picked somebody who had, one, a really good, robust sense of architecture and codebase, somebody that I knew kind of knew everything about our core monolith, knew how our engineering organization worked, and understood, you know, where there are dragons, as we say. And so somebody with a good foundational understanding of our codebase was really important. Secondarily, somebody senior enough, with enough internal tenure, to both have credibility in the organization, so that when they say, hey, this really works or this doesn't, we trust that person, but who would also be able to sort of foretell and avoid some of the big, like, landmines in adopting AI.
00;09;05;27 - 00;09;22;25
Unknown
And then the, you know, the third category is, this person has done so much for the company, and they're a wonderful leader and a wonderful engineer. I wanted to give them a win, honestly. So part of it was motivated by them having the right attributes to be the technical leader for this.
00;09;22;25 - 00;09;44;09
Unknown
And part of this was a career development opportunity I wanted to give them, saying, you've been here for a while, you need to figure out what your next level, your next wave of impact is going to be. Congratulations, I am handing you the plum prize: you get to put AI all over your resume and you get to be the leader there.
00;09;44;09 - 00;10;09;08
Unknown
And so those were kind of the three reasons why we thought this person was apt. Thank you very much for doing it. And it's been exceptional, because he's not, you know, overly enthusiastic. He's not going to say everything works, but he's also opened up his mind to what does work and what doesn't. He's identified some technical places where we can invest to make the adoption easier, and we know who to go to for questions.
00;10;09;08 - 00;10;31;23
Unknown
Zach, how do I get access to X? Zach, how do I figure out how to get this product to work with Y? We have a centralized person, and that makes it easier for the team to kind of understand where to go when they're trying to adopt these new technologies. Do you think leaders broadly need to rethink their approach to innovation and risk with AI, given the opportunity that's ahead here?
00;10;31;28 - 00;11;00;24
Unknown
Yes, 100%. You know, the thing that keeps me up at night is some hot upstart company with a clean codebase who's just ripping through features with AI. Like, really, it really terrifies me. I look at the speed at which, you know, people are able to build things, and I think velocity will be a massive, massive differentiator.
00;11;00;27 - 00;11;33;16
Unknown
And I think people are going to wait too long. I'm massively, like, super paranoid about this. And so I think teams have got to get on this train, because they're going to be competing on the ground, feature for feature, capability for capability. They're going to be competing for talent. And if you don't embrace this, I just cannot imagine you don't get left behind, especially when these types of teams can access massive rounds of funding.
00;11;33;16 - 00;11;58;00
Unknown
You take the combination of, like, AI-native, super high velocity, funded. That makes me paranoid. And so, we have an incredibly healthy company, a great brand, amazing engineers. Like, imagine if we had the guts to say, we're just going to operate completely differently. We're going to embrace this. We're going to go as fast as possible, and we're going to build some really cool stuff.
00;11;58;02 - 00;12;24;23
Unknown
I think the teams that can embrace that and say, that's possible and that is for us, are really going to remain relevant in this next stage. And I think folks that don't maybe will not. I will ask about one point of what you said here, because while I broadly agree with you and think any team that's not at least attempting to adopt AI is likely to be left behind, you did mention a team that has a clean codebase and a high velocity of AI development.
00;12;24;23 - 00;12;48;11
Unknown
So I'll just ask about that, given that that's one of the key areas that I think many skeptics will push on. Yeah, yeah. I mean, look, it is very different. You know, you mentioned at the beginning I have this diverse, you know, perspective. I get two things. I get my darling, petite little ChatPRD repo. I've, you know, written every line of code with Cursor.
00;12;48;11 - 00;13;19;08
Unknown
Like, I know that whole thing like the back of my hand. It's not that big, and it's built on a modern stack, and it's built on a stack that AI knows how to write for. It's just so dang easy to work with. It's so, so great. And then I have LaunchDarkly, which is an amazing, scaled, proven, production-grade enterprise product that has been built over the course of ten years, that, you know, maybe wasn't built optimized for the languages and frameworks that AI seems to do the best with.
00;13;19;08 - 00;13;43;08
Unknown
That is complex, that, you know, has some tech debt. Those are very different, very different situations. Now, what I think people underestimate, though, in the situation where you have a legacy codebase, is a couple of things. One, the ability for AI to accelerate cleaning up the gnarliest parts of tech that you don't like. Your front end framework, for instance.
00;13;43;08 - 00;14;10;23
Unknown
It used to be, I promise you, I've done this 2 or 3 times, 18 to 24 months of ripping out whatever JavaScript framework decided to deprecate that year and, like, moving to the new hotness. I've done that at almost every stop of my career. This makes it a lot easier. I know somebody who, at their startup, just decided, we're ripping everything out, we're replacing it with Tailwind and with shadcn, and we're just going to have these, like, beautiful, simple components, and I don't care.
00;14;10;23 - 00;14;35;16
Unknown
Let's just rip and replace everything. And now they can move very fast. So one, I think you can replatform some parts of your product a little bit faster and clean up some tech debt. Two is you can do purpose-built things to make your repo better to work with for AI. And the bonus of doing those things is you make your repo better to work with for everyone.
00;14;35;18 - 00;15;00;17
Unknown
If an agent is having trouble running your codebase locally, an intern is going to have trouble running your codebase locally. A senior engineer is going to have trouble running your codebase locally, like if it takes three days to get your local environment set up. That sucks for AI and that sucks for humans. And if your codebase is not well documented, that sucks for AI, that sucks for humans.
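One lightweight way teams apply this "better for AI is better for humans" idea is a short agent-facing guide checked into the repo. This is not something discussed in the episode, just an illustrative sketch: different tools look for different files (for example, CLAUDE.md, AGENTS.md, or .cursor/rules), and every command, path, and convention below is a hypothetical placeholder.

```markdown
# Agent guide (read before making changes)

## Local setup
- `make bootstrap` installs toolchains and seeds a local database.
- `make dev` starts the app at http://localhost:3000.
- If setup takes more than a few minutes, something is wrong; see docs/setup.md.

## Conventions
- Tests live next to source files as `*_test.ts`; run `make test` before opening a PR.
- Never edit generated files under `gen/`; change the schemas instead.
- Feature-flag all user-visible changes.

## Where the dragons are
- `billing/` has subtle invariants; propose a plan before editing it.
```

The same document doubles as onboarding notes for interns and new hires, which is the point being made here: fixing setup time and documentation pays dividends for agents and people alike.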
00;15;00;17 - 00;15;28;21
Unknown
And so I do think one of the most effective tactics I've heard my peers use, that we also try to embrace at LaunchDarkly, is to do a spike on, how can we make this codebase better for AI to work with? And that actually pays out dividends in terms of efficiency. And maybe the last thing that I would say is, a lot of people, a lot of people say AI will never work.
00;15;28;21 - 00;16;06;19
Unknown
AI will never work in our disgusting, disgusting old repo. It's just impossible. And I want to say, that sounds like a you problem. But also, like, have you tried? Have you given it a real go? Because I think people maybe try one PR with Cursor, or they try one task with Devin, and it doesn't do well, and they don't learn why it doesn't do well, and they don't take the accountability of, like, maybe my prompt was bad, and then they give up. As opposed to what I think we did at LaunchDarkly, which is I said, just go experiment and report back what works and what doesn't.
00;16;06;19 - 00;16;38;02
Unknown
And for every one total dud, we got two helpful wins, and on net that's positive. And so I do think there are things you can do in legacy or more complex codebases to make it work, both technically and operationally. You just have to give it a go. I completely agree. I think there's a broad expectation misalignment based off of some of the marketing of AI, around, this is just going to magically solve your problems. And it actually does solve a few of them, but there's still setup work that needs to be done.
00;16;38;02 - 00;17;07;13
Unknown
There's still iteration that you need to do to make sure that your infrastructure is in the right place, that you have provided the context to Devin or Cursor that it may need. And if you spend the time to do that, the dividends that you will receive are fantastic. And I think what you're seeing out of all this is essentially that leaders are still underestimating the opportunity with AI and underestimating the risk involved with sticking with a human-only engineering plan.
00;17;07;16 - 00;17;27;17
Unknown
Yeah, I mean, what I think is, people are so worried about, what if AI ships a bug, that they decide not shipping anything is better. And that is just such a backwards way, from a business perspective, to look at development. We've all shipped bugs with humans. Like, yeah, we shipped bugs with humans, and we shipped them quite slowly.
00;17;27;17 - 00;17;55;25
Unknown
Yes. And so I just think people really underestimate the opportunity cost of moving slower than they could. And I also think leaders really underestimate how irrelevant they will become if they do not know how to do this in large organizations. You know, as I said, with the kind of engineer that is leading our AI initiatives at LaunchDarkly,
00;17;55;25 - 00;18;25;24
Unknown
I was like, I gave you the career gift, at LaunchDarkly and beyond. Congratulations. In 2025, you led the transformation of an engineering organization from one that operated in a legacy way to one that's operating in an AI-enabled way. You have all the learnings. You know what works, you know what doesn't. You have the success stories. If you, as a CTO, VP of engineering, engineering manager, staff or principal engineer, are not developing those stories for yourself,
00;18;25;24 - 00;18;44;11
Unknown
I guarantee you, in two years, when you go into interviews, you are not going to be at the top of the list if you say, oh, we just never really worried about that, that was never going to work for us. Or, I did a couple things, but I don't really know this tool super well. You are just not going to have the hard skills to do the job.
00;18;44;11 - 00;19;04;10
Unknown
And I really do think it's a hard skills issue right now. It is a new type of engineering skill you need to develop. Can you say more about that? Like, give me some more depth into what that hard skill looks like. Yeah, I think there's a couple of things. So one, you know, using all the tools available to you.
00;19;04;10 - 00;19;30;28
Unknown
So I think coding is going to go to AI-enabled IDEs, just because it's just better. It's a better way to live. And it's happening, it's already happening. And so, if you do not know how to manage context effectively, prompt, set up rules, access MCPs, all those things; if you have not set up your toolkit for, how do I use this new set of engineering tools
00;19;30;28 - 00;20;13;07
Unknown
well, and to an advanced degree, you're not going to have the hard skills to be a software engineer on a software team in a couple of years, because you will just not know how to use the toolkit. And so I think that's one very specific example. As an engineering leader, if you do not know how to integrate and operationalize the use of coding agents or automations, either in your DevOps or in your engineering operations, if you don't have a sense of how those things have or have not increased overall velocity in your team, if you have not spearheaded initiatives to, as we talked about before, make your codebase better for the entire organization to
00;20;13;07 - 00;20;41;08
Unknown
work with using AI, you're going to be sitting next to and interviewing against people who have done those initiatives, who have figured out those operations. And so, again, as we look at how we evaluate the progression of engineers from, you know, SWE 1 all the way through principal engineer, if we think about what it means to be an engineering manager, director of engineering, VP, CTO, I think we need to add AI fluency into that list.
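For readers unfamiliar with the "rules and MCPs" part of that toolkit: MCP (Model Context Protocol) servers are how many AI IDEs and agents get controlled access to external systems. As a rough sketch, a client-side MCP configuration often looks something like the JSON below; the exact file location and schema vary by tool, and the server name and token placeholder here are illustrative, not drawn from the episode.

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<your token>"
      }
    }
  }
}
```

Setting up a toolkit like this, along with rules files and prompt habits, is the kind of concrete, learnable hard skill being described.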
00;20;41;08 - 00;21;05;01
Unknown
And I think people need to come up with a very specific list of skills that they evaluate for, both in terms of promoting people and hiring new folks. I am already seeing, in some of the hiring discussions I'm in, that people who would have been fantastic candidates two years ago are now not as well rated because they haven't dived into AI feet first.
00;21;05;06 - 00;21;38;18
Unknown
Yeah, and I expect that to happen even more so over the next couple of years, and not just in technical roles, in marketing roles and sales roles too. If you are not embracing this technological revolution, you are at risk of being left behind. And therefore, I think what you're doing at LaunchDarkly, building this culture, cultivating this culture where the risk-reward of velocity is seen as a net positive and where AI is embraced and experimented with, is so important.
00;21;38;20 - 00;22;02;14
Unknown
So, you know, you mentioned appointing an AI czar and helping them to transform the organization. What other steps have you taken to really create a culture where everyone within the R&D teams is enabled to take this on? Yeah, we do a couple of things. I think the first thing is very tactical, which is you have to get finance and security out of the way. And by out of the way,
00;22;02;14 - 00;22;19;18
Unknown
I don't mean you don't go through finance and security. I mean, you have a very simple framework for evaluating tools and getting budget for them. And yeah, I established very early on, we're going to spend some money on AI. It's going to be totally net positive. I know it in my soul. And we just got to figure it out.
00;22;19;18 - 00;22;42;27
Unknown
And then infosec, I need, like, a fast-turn evaluation. And I need you to be cool. Like, be cool, be risk aware. Do not put our customers at risk. Do not break any of our contracts or compliance codes. But, like, otherwise, you got to be cool. Like, we got to be able to try stuff. And so I think having those two teams deeply aligned to this being something we're going to do is great.
00;22;42;27 - 00;23;04;18
Unknown
Luckily, we had no friction there. And in fact, the finance teams are always excited, because they see the potential efficiencies gained with these kinds of tools. So that's the first part. Then, I really believe in this building-in-public culture when you're trying to adopt AI. So we created this Slack channel. It's called Project Building with AI. It's got, like, 200 people in it.
00;23;04;18 - 00;23;31;04
Unknown
And just, every time you do something with AI, when it works, when it doesn't, related to work, not related to work, dump it in the channel so people can see: hey, I did this PR with Cursor and it totally blew me away, it built all my tests for me, I was super happy, this is great. To, you know, public chats with Devin or another agent where it, like, really ate it on this one.
00;23;31;06 - 00;23;47;07
Unknown
And everybody's, like, yelling at the agent to go to sleep. And it's very, very funny. So we just put it all in public. And the benefit of putting it all in public is, one, you normalize it. You say, this is not something we hide or are ashamed of, or that is wrong or is not allowed. It's all open in public.
00;23;47;09 - 00;24;08;11
Unknown
Two, you get this nice, like, learning across the audience. It's the best way to socialize learning across the organization. So I love that public channel. It's my favorite channel. It's super fun. And then another thing that we do, kind of related to building in public, is we have this, like, AI Friday Power Hour.
00;24;08;13 - 00;24;40;19
Unknown
It's basically like a Twitch stream of internal people using AI tools. So we all get on at ten in the morning on Friday. Everybody's in a good mood, because it's ten in the morning on Friday, and, you know, 2 or 3 people try something live with AI or show an AI workflow that works for them. And so that is also something where you can just kind of, like, look live and watch them, in our own codebase, try to figure things out, explore new tools, evaluate the quality, get some, like, champions out there.
00;24;40;19 - 00;25;02;21
Unknown
And so I found that to be another really effective tactic. I like that a lot. And I have to say, I really enjoyed the various vibe coding live streams that I've had the opportunity to watch. There was one at Microsoft Build a few weeks ago, with Brendan Burns and the team at GitHub, that I really enjoyed, where they're just like, oh, we're just going to build this reminder app for my family that I want to do.
00;25;02;21 - 00;25;28;13
Unknown
Let's just spend two hours and kind of knock it out, you know, a quick vibe code session. And I think seeing others, you know, essentially pair programming, also with an AI-enabled approach, is so valuable to give the context of how others are doing it, and to build in public and share these learnings.
00;25;28;13 - 00;25;55;19
Unknown
But I know it can be hard. There's often hesitancy. People are nervous about not being as good at something initially, or not wanting to show things off in public. Are you finding more hesitancy with maybe senior-level engineers who are like, oh, I've been doing this for years and I know what I'm doing? Or are junior-level engineers nervous about not excelling? Where do things sit? You know, it's hard.
00;25;55;19 - 00;26;18;25
Unknown
It's hard. My heart wants to say, like, you know, you get old and curmudgeonly, you get stuck in your ways, and that's a little true for me, I'm paranoid. And, you know, the more senior you are in your career, ultimately, the more it becomes your problem if something goes wrong. Of course my directors of engineering are, you know, a little bit less risk tolerant for this, because, you know, who gets paged in the middle of the night?
00;26;18;25 - 00;26;42;26
Unknown
Who gets yelled at when we have a sev zero? Our directors of engineering. Like, of course, because accountability rolls up. And so I think they're appropriately, not skeptical, but just, you know, risk-adjusted for some of this stuff. That being said, you know, I do think across the board there does still exist AI hesitancy, for a couple of reasons.
00;26;42;26 - 00;27;02;23
Unknown
One, as I said, you're asking people to learn a new hard skill, and people just do not have time to learn anything new. Like, okay, I could spend two hours spinning up this agent and installing a new IDE and, like, blah, blah, blah, or I could just, like, knock out this PR. Which would I rather spend my time on?
00;27;02;23 - 00;27;27;02
Unknown
And so, like all initiatives, you have to have the time to carve it out. Two, AI is not one-shot, 100% totally accurate all the time. And so of course you are going to get these instances where you work with AI and you get bad outcomes. And that, to me, is expected. It's the cost of the game.
00;27;27;05 - 00;27;49;23
Unknown
I think it nets positive, but that can be a real detriment to adoption. And then the last piece that I found really interesting is stylistic, right? There are both individual coding styles as well as organization-wide best practices and styles that teams have just gotten used to. This is how we write tests. This is how we document things.
00;27;49;23 - 00;28;11;05
Unknown
This is how we do our front end. And when an AI says, I can do it, I can do the same thing, I'm just going to do it differently than you, like, people get frustrated. So I think there's a lot of reasons for folks to be skeptical. And then I do think leaders have this really challenging line to walk, which is, look, we have to get more efficient.
00;28;11;05 - 00;28;25;07
Unknown
We have to get more efficient because the market is getting more competitive, and, sucks to be us, but that just means we have to do more with less. And I think that has been the case for several years. Nobody is saying, like, I have way more headcount than I used to, and everybody's saying, like, just hire to solve your problems.
00;28;25;07 - 00;28;54;15
Unknown
No one is saying that. And so I do think there's this reality that there is an efficiency program to some of this. And when you say that aloud, people say, "you're replacing my job with AI, and I don't like that." And so I do think it's very complex. You have to have a very healthy culture in order to put your arms around this and make people feel like it's part of developing both their personal career as well as the value of the company, which benefits them from a financial perspective.
00;28;54;18 - 00;29;08;05
Unknown
And so I think there are ways to get over the AI hesitancy, but you have to be really precise about what the hesitancy is, address it head-on, and then name what we call the stinky fish in the room, which is: people are afraid they're going to be replaced with AI.
00;29;08;05 - 00;29;20;22
Unknown
People are afraid that they're just going to be asked to grind out more work and more hours and more features, more and more with less and less. If you can say those things and address them head-on, then hopefully you get over some hesitancy and get back to building.
00;29;20;24 - 00;29;49;03
Unknown
I love it. And I've also heard you describe junior engineers with AI as perhaps a loaded gun. Can you expand on that a bit? Yeah, look, I love them. Give them to me all day. Like, a kind of junior, early-career engineer who's all in on AI, who knows every prompting trick in the book, who has tried every open source coding agent before you've even heard of it?
00;29;49;05 - 00;30;09;28
Unknown
Who is, as I very gently say, too dumb to know better in terms of what they can bite off. Give them to me all day. You need those on your team. You need big early-career energy because, you know, sometimes you get some magic out of that and it keeps the rest of the organization on its toes.
00;30;10;01 - 00;30;40;25
Unknown
And for folks that are early in their career that have those attributes, the advice that I would give to you is: you have been given an incredible opportunity, and it is also wise to know what you don't know. It is just super wise to know what you don't know. And so if you can go in and say, you know, "I was bored last night, so I built an entire MCP for our app. I want to put it on our public repo, but I'm not sure I handled auth right,
00;30;40;25 - 00;31;01;08
Unknown
or is this going to be maintainable for the labs team?" Just knowing what you don't know. If you come to these code reviews, or come with these proposals, without a good sense of where you need to look around the corner, where you need advice, where you can learn, you're just not going to do well.
00;31;01;08 - 00;31;17;26
Unknown
I mean, I've heard plenty of my peers who have hired that sort of cracked AI engineer who, in an interview, says, "I can do this and that," answers the questions well, and you're fine if they use Cursor because it's great, and then you realize they're just shipping a bunch of code they do not understand and don't care to understand.
00;31;17;28 - 00;31;43;25
Unknown
That is about as bad a situation as it gets. And so: love early career, love a good AI-powered YOLO, and know what you don't know, and know how to grow your own skills. You have this great engineering tutor in AI, but you also have great mentors on how to work with the team, how to work in a big codebase, how to solve scale problems, how to solve technical challenges, and I think you should take advantage of it.
00;31;43;28 - 00;32;18;24
Unknown
I really appreciate you bringing this broad perspective across how to enable an AI-first team and how you approached the transformation at LaunchDarkly, but perhaps most interesting for me is how you've been solo building an AI startup on the side, ChatPRD, and you're moving at a velocity that would seem impossible to someone who was trying to do this while also having their main job a couple of years ago.
00;32;18;26 - 00;32;46;09
Unknown
And you've said that everything you think of, you build in a week; you don't really have a product roadmap because, hey, you're shipping. What's that experience been like? I think it is such an important experience. Again, it's probably the source of what makes me, as I said earlier, so paranoid. Like, what I stay up at night and think about is: what if there's a Claire out there that is just ripping in our product space?
00;32;46;09 - 00;33;10;16
Unknown
That makes me paranoid, because as somebody who has built this myself and has a career that spans over two decades (I've done a venture-funded startup myself, I've worked at many startups, I've worked at large organizations), it is different. I raised capital ten years ago to build a product, and I swear on my life I could probably build that product before lunchtime today if I needed to.
00;33;10;16 - 00;33;40;01
Unknown
Like, it's just totally different right now. And if you, as a leader, do not take a minute to really feel how different it is. Not, can I get Cursor adopted by finance and my engineering organization, and can I get my PMs to write PRDs in ChatGPT. Not that. Put your hands on a keyboard and feel how different it is; put your hands on a keyboard and try to rebuild your own product.
00;33;40;01 - 00;34;02;29
Unknown
Until you feel that moment of, "holy moly, it is so different right now," you really are just not going to be prepared for what's coming next. So I think that has been the most valuable thing about ChatPRD. I tell everybody I love it. It's doing exceptionally well, better than I could ever expect, and if it goes to zero, it will have been worth it because I've learned this lesson.
00;34;02;29 - 00;34;30;08
Unknown
So this is my number one piece of advice to people: learning how to build something like this, and what it really takes and what it really doesn't take, is super valuable, even if you remain in larger organizations and bring those learnings to your career in a larger org. Let's extrapolate this experience you've had, and I'd align it to the concern that you say it brings up for you of, hey, what if there is a Claire out there who's rebuilding our product right now?
00;34;30;10 - 00;35;05;20
Unknown
What are the biggest operational disruptions that you believe small AI-native teams will cause for larger incumbent organizations? I think price disruption can be one of them, right? You can offer some large percentage of the feature capability for some small percentage of the cost; that can be one. I think perceived innovation velocity is another one. If you are just perceived as innovating at a lower pace than your competitors, whether or not your competitors are really operating at any scale in the market, it doesn't matter.
00;35;05;22 - 00;35;28;24
Unknown
Optics do have an impact; you're going to be perceived as, you know, a laggard company. I think that's something to really consider. And then talent attraction is another one, which is: for as many AI skeptics as you have in an engineering organization, you have just as many people who want to build, you know, modern, best-in-class engineering skills.
00;35;28;24 - 00;35;59;27
Unknown
And if your organization does not provide those for them, then they're going to go look elsewhere. Yeah, I think you're absolutely right that if you're not enabling folks to have the opportunity to learn, they will either be learning deeply on the side, and maybe not bringing those learnings to work, maybe they will, or they're going to look for a new organization that will enable them, because the best engineers out there right now are fully aware of what is happening, and they are seeing this opportunity and can't afford to let it pass them by, because most of them aren't retiring next year.
00;35;59;27 - 00;36;24;28
Unknown
Most of them have several years left in their career, and they want to continue to be great. And many of them are simply curious or excited, if not both. So what would your advice be, as we wrap up this conversation, to the different categories of folks within their career? Let's say, you know, more junior engineers who are getting started; leaders who are farther along, maybe director-plus level;
00;36;25;01 - 00;37;06;29
Unknown
and then the folks who are at that senior IC to maybe engineering manager, team lead level. How would you advise those different groups to approach this AI movement? Yeah. So for early-in-career, I would say: embrace your natural enthusiasm for the new and share your learnings. I think the best thing that early-in-career folks can bring into an organization is an experimentation mindset, a sort of fearlessness in trying things that maybe will require some work on the back end but at least can get to a prototype version, and then really staying in touch with, like, the new hotness.
00;37;07;06 - 00;37;24;05
Unknown
What's new out there? Share it. We want to know. I think for leaders, and this is the group I really want to speak to: close your eyes and imagine what an engineering organization is really going to look like in five years. What is it really going to look like if you just cast all this forward?
00;37;24;07 - 00;37;41;05
Unknown
What's the shape of it? What are engineering managers going to do? Are we going to have PMs? What tools are you going to have? How is software going to be built? How are you going to attract talent? Cast forward to that five-year future and then start preparing to get your organization there now. I think that's so important.
00;37;41;05 - 00;37;59;24
Unknown
It is. Yes, you have to worry about the day-to-day, but you really need to figure out how this is all going to shake out in a couple of years and get it figured out. And then those kind of senior ICs, I love them, my favorite group. I think you all are going to be the highest impact in this new era.
00;37;59;24 - 00;38;19;20
Unknown
Like, actually, I've said this for a while now: I think this is the era of the super IC. You're going to be able to get so much stuff done. You really have so much impact. You will be able to command a very high salary because you have a combination of experience and breadth of impact, powered by tools.
00;38;19;23 - 00;38;42;25
Unknown
If you just lean in, this is your time. And guess what? Bonus: to make more money and get promoted, you don't have to manage people. What a treat. You don't have to do performance reviews. You don't have to deal with people's complaints. You can just build stuff.
00;38;42;25 - 00;39;00;17
Unknown
And so I do think this is the era of the super IC, especially for senior ICs. I think leaders out there, you've got to figure out a path to pay them more money and give them bigger titles without forcing them to take on teams. And so I would say: embrace that era and figure out what you want the shape of your career to look like.
00;39;00;19 - 00;39;21;29
Unknown
During that time, you mentioned something which I want to drill down on, which is: are we going to have PMs in a few years? And we're already seeing this transition into AI PMs. And obviously it's a bit of a nebulous term so far, but it certainly involves creating MVPs a lot faster; it certainly involves moving a lot faster and changing the approach.
00;39;22;05 - 00;39;42;03
Unknown
How do you see the role of a PM evolving within engineering organizations, or disappearing? I mean, I famously murdered the career of PMs at Lenny's conference last year with my "PM is dead" talk that rattled a bunch of people. Look, I think the role is going to change. I just fundamentally think the role is going to change.
00;39;42;06 - 00;40;05;23
Unknown
I think there are going to be sort of two archetypes of product managers, and I think they're going to start to come from very different practices. I think you're going to have the prototype manager, who is much more of this combo UX engineer PM, who defines product experiences and can get you to a high-fidelity sense of what that product experience looks like and how it needs to technically operate, very quickly.
00;40;05;23 - 00;40;33;27
Unknown
I think that's one archetype. And then, for those that maybe are not that archetype, or in addition to it, you can have these very commercially minded, GM-style PMs who think a lot more about: what market am I selling into, how am I making money, what is the positioning, all those sorts of things. And so I just think this middle ground of, "I'm the keeper of what users want, and, you know, I'm a people person, dammit,"
00;40;33;27 - 00;40;51;13
Unknown
sort of, "I just talk to the engineers because the engineers can't talk to the designers, and the designers don't really want to talk to the executives, and the executives don't really talk to the humans." I just think that piece is not a robust enough job with enough impact when you take these tools into consideration.
00;40;51;13 - 00;41;17;17
Unknown
And so I do think the product manager role is going to shift. I am building a product manager agent with ChatPRD that I think can take a lot of those tasks off people's plates, and then let them focus on things that I think humans are really good at: talking to other humans, figuring out what they want, selling, creative inspiration, unique user experiences, special insights.
00;41;17;17 - 00;41;35;24
Unknown
I just think the more we can clear our minds of the kind of tactical, day-to-day operations stuff, and the more we can focus on depth of creativity, the better our products are going to get. So I think it'll change. I will definitely be wrong for many years, and then suddenly I'll be right. So I look forward to that.
00;41;35;27 - 00;41;57;21
Unknown
I love the confidence, and I highly recommend folks go check out ChatPRD and explore more of Claire's work. Claire, where else should our listeners go to learn more about you and to follow what you're up to? Yeah, I'm on X at Claire Vo, also LinkedIn; that's my name. I'm on TikTok. We're reviving the TikTok. You heard it here first.
00;41;57;23 - 00;42;18;01
Unknown
I have Chief Product Officer on TikTok, so head over to that content, and then tune into the How I AI podcast, where I talk to other people about how they use AI. Fantastic. Well, Claire, it has been a distinct pleasure having you on the show. Thank you for an entertaining and wide-ranging conversation. I'm excited to think through some of your advice and implement it myself.
00;42;18;01 - 00;42;34;20
Unknown
It's been a ton of fun, and we really appreciate having you on. Thanks so much. And for everyone listening, make sure you check out our YouTube to see so much more behind-the-scenes content. You can find it at Run Galileo on YouTube.
00;42;34;27 - 00;42;55;05
Unknown
There are demos, webinars, and many more incredible podcasts with guests like Claire, and we'd love to have you there. Thanks so much for listening, and we'll see you next week.