Honest conversations with the engineering leaders, CTOs, founders, and engineers building real software with real teams. No fluff, no hype — just the messy, human side of getting great products out the door.
Zhenya Rozinskiy - Mirigos (00:06)
Hey Corey, thank you so much for joining me for this podcast recording. We're doing a podcast about what I call the business of software. The name of the podcast is actually Built by Humans, and we concentrate on teams. We concentrate on what it takes to get products out there. We talk about business challenges, the business side of getting apps, software, whatever it is, out there, not so much the technical aspect. My name is Zhenya. I run a company called Mirigos. We're a team augmentation company. Our tagline and whole motto is that we fix what's broken in outsourcing: we concentrate on getting the right people for the team, not just the right skills for the team. And that's the biggest differentiator between us and others that try to do the same thing. Before that, I spent about 30 years in tech, running engineering departments at everything from small startups to Fortune 500 companies, and came to the idea that solutions could be found in a better way. So that's what I'm on now. Turning it over to you: tell us a little bit about yourself.
Cory (01:19)
Yeah, of course. So my name is Corey Heimle. I'm the VP of product and research at a company called Crowdbotics. We're about six years old, and we're really focused on building AI models that are able to deeply and intelligently understand code. So we're not in the business of doing code gen; we're really trying to solve the problem that there are, you know, 800 billion lines of existing COBOL out there running the world. Everyone's retiring, there's no documentation, and we're getting to an inflection point where someone or something needs to be able to intelligently understand why and how these things work. So, like I said, I lead all of our product as well as a bunch of our research. My background is in computer science and applied math. I've been doing AI research for a long time; I started back in 2007 with FedEx on self-driving cars. I was in the dynamic obstacle handling unit, which is a really fancy name for making sure this thing doesn't hit other things that are moving.
Zhenya Rozinskiy - Mirigos (02:20)
Right. Wow, 2007.
That's a while ago. That's pretty cool.
Cory (02:25)
Yeah, it was great. And, you know, to see where the tech has come since then, not just from a pure engineering side but from a market acceptance side: seeing how businesses and business leaders are really starting to see ways it can impact their business beyond where we were then, which was much more the data analytics and predictive side, into actually using the tech to do things. So it's a really cool space, a really cool time. Great to watch it grow over that period.
Zhenya Rozinskiy - Mirigos (02:54)
AI is obviously a big topic, and I'm sure we'll come to it in some of the questions I want to ask. But first I sort of want to talk about Crowdbotics. You guys are thinking about code, right? You're analyzing code, you're using a lot of AI methods. How do you think that changes with time, and what do you think it means to be a developer? Because I think that's a big aspect today: what computer science was even a few years ago, and I'm not talking about 20 or 30 years ago, I'm talking about a couple of years ago, is very different from what it is today.
Cory (03:26)
Yeah, I mean, the developer profile is changing dramatically, right? And I've got beliefs, as do others, on what it'll look like in the future. But the interesting thing that we're seeing here and now is that there's a pretty solid line between developers that love AI and ones that don't. We've got quite a number of very senior engineers that just don't want to touch the stuff. They might think it's useful, but not really applicable to them, because they're dealing with the really hard problems and feel they shouldn't have to use it. And as a lot of studies and research have shown, it's typically the more junior to mid-seniority developers that get a ton out of the AI tools. But what we have seen that is really great is that with tools like Lovable, and a lot of these other ones that enable quick prototyping, people within our org who are just self-starting, going off to create things with software, people who never would have been able to do it before, really are starting to do that. And we've seen some emergent properties that I don't think we were expecting to see. For instance, we've got a customer success team. They deal primarily with customers, making sure they're happy. Before, if there were ever changes or stuff that needed to be done, they would submit a ticket back to engineering. It would go into a two-week sprint plan. We'd have to refine the ticket down, assign it out, and then maybe in four weeks or so, after QA, you'd get something they could show customers. What we're seeing today is our forward-deployed engineers, as well as people in our customer success team who aren't engineers, using these tools in real time, at the edge, to create solutions. And from a product standpoint, I think the initial reaction is that people shouldn't be doing that, right? Like product has this sacred roadmap that no one should be touching. But what we've seen is that this development and quick iteration at the edge is allowing us to test and verify and validate a lot of functionality with customers faster than we ever could before. And then, from the features we see coming in, we're able to prioritize and build a more intelligent roadmap off of that. So that's a roundabout way to answer how the developer profile is changing. The long view is that anyone will be able to use AI, or something else, to solve their acute problems really quickly. And we're really excited to see our teams starting to adopt that, and to have built processes that roll the learnings of this next wave of devs back into traditional workflows: agile sprint planning, engineering teams, product roadmaps, and so on. So it's a really cool time.
Zhenya Rozinskiy - Mirigos (06:03)
Yeah.
I see the same thing with AI adoption. Actually, I would put it in three categories, right? There are people that are absolutely against it for various reasons. Sometimes it's job security, sometimes it's just "I don't like change," whatever it might be. Then there's another extreme, and extreme in a good way, right? They're all in, and they're using it completely in the right way. To me, like, I use AI a lot. I use ChatGPT, I live in ChatGPT, but it's not my go-to for an answer; it's my go-to for brainstorming. It's a copilot, right? It's a conversation. And then there's a third segment, right in between. They think they use AI, which in reality means they use ChatGPT instead of Google search. They'll go ask a question, and then they're surprised they don't like the answer. I've had this with marketing folks, with content generation: "Yeah, I don't use it, because I asked it to generate a post for me and I didn't like it." Well, of course you didn't like it, because you've got to work with it, you've got to navigate it, you converse with it. Eventually, it may take you half an hour to an hour of that conversation, but you finally get something really, really good. But, going to the culture question: among your team members, you've got these various engineers, some using it, some not. There are probably some that are ahead of everybody, like "we want to use it for everything," and some that won't touch it at all. Does that cause any issues? Does that cause any cultural problems, as in "I want to keep things the way they are" versus "I want to change absolutely everything about what we're doing"?
Cory (07:44)
I don't know if we've necessarily seen much culture conflict, but I will tell you one of the interesting things that recently came around that I wasn't expecting. We've got a core team of some really good, seasoned AI engineers and good senior architect-level engineers on a team together. And some of them are using AI code-gen stuff, you know, in the IDE, in Zed or whatever they're running, and they're turning out tons of work and tons of code. And I haven't quite been able to put my finger on it, but there is no animosity within this team, right? So imagine we've got a team of rock stars. Or, if you want to think about it in sports terms, you've got a football team or a basketball team, and half the team decides that steroids are okay, we're going to use these, and now all of a sudden they're outperforming, to some degree,
Zhenya Rozinskiy - Mirigos (08:22)
Yeah.
Cory (08:37)
the rest of the team. What does that do for culture? What does that do to perception, not just from top-level management, but also to dynamics within the team? We haven't seen any animosity or prejudice between the people using AI to produce more and those that aren't. But what I do think is interesting is when something goes wrong. In the sports example, the person that decided to take performance-enhancing drugs gets hurt. Or someone on our team that's using AI to write code turns out a bunch of bad code by accident and checks it in, and it's really quick for the people that aren't using it to point and say, this is why we shouldn't be doing this. It turns an anthill into a molehill, I can't remember what the analogy is. So there is this kind of, and again, I want to be very sensitive here, there's not tension, but I can see how, if this were to happen on some other team, or on some teams that we'll build, it could end up driving a cultural divide, a cultural wedge, between people that want to use it, who are performing more and outputting more, and people that aren't but are standing tall that human-generated output is better. I do think it is something organizations will have to deal with pretty substantially in the future when it comes to software development specifically, right? Like, we run the classical two-week agile sprints, but what we haven't seen is adapting the speed and timing of our processes to go hand in hand with AI performance and speed enhancements as well. So typically we'd go into a sprint backlog with, say, 30 story points; traditionally, without AI, 30 story points would be enough for two weeks. But now that we do have AI, we're not really factoring that into the sprint planning and time estimates. So do we compress the sprints? Do we add more work in? Well, we can't, because some people are using AI and some people aren't, so you can't tip the boat. It's been good, but there's this weird delivery and planning side, and also, to your point, the cultural side, where I think a lot of growth and analysis will be happening in the coming years. And I can see an Accenture or someone coming out soon, like, let us tell you how to do AI, and charging you $17 million.
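The sprint-planning dilemma Cory describes can be made concrete with a toy model. The sketch below is purely hypothetical: the function, the velocity numbers, and the AI speedup multipliers are all invented for illustration, not anything Crowdbotics or Mirigos actually uses. It just shows why a fixed 30-point budget breaks down once only part of a team adopts AI assistants.

```python
# Toy sprint-capacity model for a team where only some developers use AI tools.
# Each developer is (base_velocity, ai_multiplier): base_velocity is their
# historical story points per two-week sprint, ai_multiplier is the observed
# speedup from AI assistants (1.0 means no AI use). All numbers are invented.

def sprint_capacity(devs):
    """Total story points the team can plausibly absorb in one sprint."""
    return sum(base * mult for base, mult in devs)

team = [
    (10, 1.5),  # senior dev using AI assistants, faster on routine work
    (10, 1.0),  # senior dev who doesn't touch AI tools
    (5, 2.0),   # mid-level dev doubling output, per the studies Cory cites
    (5, 1.0),   # mid-level dev not using AI
]

print(sprint_capacity(team))  # 40.0, versus the traditional 30-point budget
```

Under these made-up numbers, the same four people who traditionally planned 30 points could absorb 40, but the gain is unevenly distributed across the team, which is exactly why both "compress the sprint" and "add more work" feel like tipping the boat.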
Zhenya Rozinskiy - Mirigos (10:48)
They absolutely will.
Yes, they absolutely will. All right. As a hiring manager, you have been hiring people, and in that role you also have to predict, right, how they'll perform. Whether you're building a whole team or taking one person and integrating them into an existing team, there's got to be that prediction: this is how it's going to happen. What do you think are some of the biggest myths that founders and heads of engineering have, things they believe that aren't true, about what will and will not succeed?
Cory (11:21)
So, when it comes to hiring, what do founders...
Zhenya Rozinskiy - Mirigos (11:25)
Yeah, so some people have a belief, right? If I hire this type of team, or this type of person, it will succeed. Or if I hire that type of person, it will not succeed. And some of it is true, and some of it is just completely a myth. I'll start with our business: I've had so many people tell me, if I don't hire people in the United States, they will never succeed, I've got to hire everybody stateside. That's a myth. But there are still people that believe that very strongly.
Cory (11:53)
Yeah, I mean, okay, I understand. Yeah, so that's a really good question. Previous to my tenure here, I spent a number of years at a firm called Gigster. We were a network of, when I left, about 80,000 developers, designers, project managers, whatever. And our whole shtick, our whole thing, was building predictive models to determine potential team success when using freelance workers. So the problem set was: you've got a global workforce from all different countries, all different backgrounds. Can you be predictive about how these teams will succeed? Over my tenure there, I assembled, helped run, and closed out hundreds of different teams, literally built hundreds of different teams over that time. And what I found consistent between the teams that were successful and the teams that weren't was that there are two pieces to it. One is just looking at hard skills and thinking that if I just go hire this rock-star machine learning engineer, my AI company is going to go to the moon and we'll solve all the problems. Not the case. What's really more important is how you build the soft-skill dynamics within the team: teams that can manage good conflict resolution amongst themselves, teams that are able to drive strong brainstorming activities. Having a team of maybe B or C players who all match really well from a personality standpoint will typically give you better output than a bunch of potentially technically stronger candidates who fight at the drop of a hat. Now, secondary to that, what I've seen consistently through these hundreds of teams I put together is that you can build the perfect team from a technical standpoint, you can build the perfect team from a soft-skill standpoint, but as a founder, as a CEO, as the person hiring, it's your job to set them up for success. And what I often see people not doing well is defining and driving clear, measurable goals. When it comes to engineering teams, software engineering has for a long time been a bit of a dark art: you don't want to press them too hard, you want to give some degree of creative flexibility, it's impossible to measure performance, et cetera, et cetera. All of that is not true. There is actually a ton of benefit in treating engineering teams the same as you would any other team and providing clear metrics, clear goals, measuring lead metrics, lag metrics, performance.
Zhenya Rozinskiy - Mirigos (13:54)
Yep, absolutely.
Cory (14:11)
And when I say performance, it's to the degree of: are we delivering against the sprint that we planned? It's not "let me put a keylogger on my backend developer and make sure that he's typing for eight hours a day." That's not what we mean by performance. And yeah, that's the big one that I see people get wrong all the time. Just, you know, if I can get this AI engineer that was at, I don't know, SpaceX and came from Stanford, that's going to solve our issue.
Cory (14:37)
Probably, probably not. It might help, but that's not the whole point. Or exactly, it may create more problems. It may come in and butt heads with everyone, and now you've just introduced a toxic culture, and things go from bad to worse pretty quick.
Zhenya Rozinskiy - Mirigos (14:40)
or it may create more problems. Right.
Mm-hmm. Yeah.
Yeah, we actually see this a lot in our work. And there are two types of hiring managers, I would say, and companies. Sometimes the hiring managers dictate the culture; sometimes the culture dictates who the hiring managers are. Some will give us a job description with a bunch of skills: you've got to know this, this, this, and this. And if somebody comes in with four out of five, they won't look at them. They're like, nope, that person doesn't know X, Y, and Z. And we always try to explain: if they learned those four, and there are probably 20 other things they've learned that you just don't care about, they can probably learn the fifth one, if they're smart, if they're good, if they're the right person. At the same time, they'll gladly jump to talk to somebody who's got all five but isn't the right person, isn't the right profile: they're a startup and this person is coming from large enterprises, or vice versa. Now, we have some larger clients, and we've learned that somebody who has spent their entire life in startups is probably not going to fit, because they're not used to the processes, they're not used to the procedures, they're not used to just how things work. They want "just give me stuff and I'll go do it." And one of the things we apply in our methodology is that we match people, we don't match resumes. And we do actually use AI. We built some of our own agents. They all run on the ChatGPT platform, but they're all custom agents that evaluate resumes, evaluate profiles, even evaluate the transcript of an interview, right, a technical screen interview. And it's pretty good. But we've also tried experimenting, not in production, not seriously doing this, with having AI completely evaluate a resume. And one thing I can tell you: we're nowhere close to being able to do that reliably. It still takes people to understand people. And I'm glad, right? I'm really glad that that's the case. So we tried it not so much because we wanted to do it; we wanted to prove that it doesn't work. And, yeah, okay, it doesn't work, great, while everybody else was claiming, yeah, we have an AI machine that will match you the perfect candidate.
Cory (16:50)
Yeah. I mean, you're so right. Because the big fallacy in the AI space today is that marketing departments have painted AI as this super-intelligent, can-do-anything type of tool, when in reality they're just prediction machines. They're just predictors, similar to your phone predicting the next word when you're typing a text message; they're doing that at scale. So to try to introduce them into...
Zhenya Rozinskiy - Mirigos (17:08)
Hmm?
Cory (17:17)
And we'll use hiring as an example, but any type of activity that requires that human connection, or requires making decisions that lead to human-to-human interaction, I think is a terrible place for AI. It's a place where it will have a tough time, because, again, it's just predicting based on the data it was trained on, which is the open web, which is also a pretty wild-west place when you think about it. At the end of the day, to think that you can just stick it in and let it determine whether this person is going to be a good cultural fit or a good human fit is
Zhenya Rozinskiy - Mirigos (17:45)
Yeah.
Cory (17:53)
typically not a great way to do it. So that's great, that's awesome that you guys ran a test and proved it doesn't work. I would have assumed it was true, but here's some empirical evidence, which is what we like.
Zhenya Rozinskiy - Mirigos (18:01)
Yeah, absolutely. So normally, at some point, I ask people about technology and how technology will change our lives. And I think this entire conversation was about that, so that's not the question. But I do want to ask something. One thing that I'm concerned about, and I'm sure a lot of people are the same way, and you mentioned this at some point: AI is great at helping with, and at some point replacing, very junior positions, very junior roles. For example, we had a client, not through us, we just knew about this, that had a summer internship program where they would hire kids out of college, you know, or those that are still in college. And their whole play was: the interns do some basic stuff that nobody else wants to do, and we really don't care about it. It's our investment; they invest time, they invest money into this program. But the idea is that it's their selection pool for future hires. Well, this year they eliminated it. And the reason they eliminated it is that they said, well, everything that nobody else wants to do, we now do with AI. So AI now does the basic entry-level stuff that an intern would normally do. And they've also significantly decreased junior hires for the same reason. And I asked them a question which the guy, the CTO, just couldn't answer; I don't think they'd thought about it long term. I said, okay, great, so you're not hiring junior people. That means you're not backfilling your organization. What are you going to do in three to five years when your senior people leave for any reason, right? They find another job, they retire, they just don't want to do this anymore, whatever the reason might be. And you don't have anybody you grew in house who knows your system. So now, yes, you can hire senior people, but they're all going to be from outside. And he didn't have an answer for that. So my question: how do you predict, or how do you see, engineering teams and how they'll be assembled, say, five years from now? Not in a year, but in five years, when the progression of AI is so rapid. Six months ago it was a toy.
Cory (19:58)
Mm-hmm.
Zhenya Rozinskiy - Mirigos (20:11)
And today we're really using it in our day-in, day-out lives.
Cory (20:15)
Mm-hmm. Yeah, it's a really good question. You know, the way that I would think about that, the way that I am thinking about that, is this.
Typically, one of the hardest pieces of hiring or bringing new engineers into an existing team is their ramp time: the time to understand the code base, to understand what is going on. Sometimes ramping an engineer on a large enterprise code base might take six to eight months. You know, call it a little less for some easy math: let's say it takes them four months to ramp in and get up to speed. Then you need to give them at least four sprints, about two months, to really start determining whether they're worth their weight, whether you want them as part of the org. That's half a year you've lost. And so typically the motion for large enterprise organizations is that you do this grooming-up, so that you reduce the onboarding time and are able to gauge and understand performance early on, so you know whether a long-term investment in an engineer makes sense. And I think a lot of that will be disrupted by the new technology we have today. We're already starting to see that, within a few minutes, a large language model is able to contextually understand a code base and help a developer get up to speed, rather than it taking months. Instead of taking maybe four months, they can do it in two. So you've just cut that ramp time down by 50%, which means you've also reduced your risk
of bringing in external developers by 50% as well. So in your example, as that CTO, I would say that primarily our strategy for the intern and hiring program was to understand who we wanted to make long-term investments in, so that they'd be in a good, steady state to contribute to our code base over time. But I believe that in the future, with things like AI copilots and AI coding assistants, as well as leveraging these tools to capture a lot of the ephemeral decision-making and intent that goes into code design, we're going to be able to roll in new talent much more cost-effectively, with a lot lower risk. And a lot of my views on this come from being a big proponent of the whole flexible workforce, the workforce of the future, having these elastic teams where engineers are able to drop in and drop out and move around as they please. I am a big proponent of that, and I've seen it done successfully, right? When you set it up for success, having things well documented, having good context stores, logging decisions about why things were made, it does work. You know, no one ever did that in the past, because it's a ton of overhead, and developers hate writing documentation, they hate taking call notes. But now you do have, I would say, a semi-smart or semi-automated way to do that with this great, fascinating new tool that we have called large language models. So yeah, I think the hiring profiles are going to change pretty drastically. I mean, we already see it a little bit today. I think Microsoft cut a ton of their engineering staff, I know Google has cut down, and I think Meta has as well. And I think part of that is this strong belief that they'll get that productivity back from their existing teams with AI coding assistants. But I also believe there's some nugget of belief that adding those engineers back in the future is going to be a lot less painful than it was in the past. That's the business decision. It's very unfortunate from a future-workforce standpoint, not having that opportunity to get exposed at an early age, with a good place to fail, learn, and make mistakes in an internship program. Yeah, I think for the future developers and engineers out there, it'll be a different experience coming in than it was for you or me.
Zhenya Rozinskiy - Mirigos (23:48)
For sure.
For sure. All right, Corey. Well, thank you so much. It was very interesting. Thank you. Thanks for your time. Appreciate it. And I'm sure we'll talk again.
Cory (23:57)
Of course. Thank you so much.