Making artificial intelligence practical, productive & accessible to everyone. Practical AI is a show in which technology professionals, business people, students, enthusiasts, and expert guests engage in lively discussions about Artificial Intelligence and related topics (Machine Learning, Deep Learning, Neural Networks, GANs, MLOps, AIOps, LLMs & more).
The focus is on productive implementations and real-world scenarios that are accessible to everyone. If you want to keep up with the latest advances in AI, while keeping one foot in the real world, then this is the show for you!
Welcome to Practical AI, the podcast that makes artificial intelligence practical, productive, and accessible to all. If you like this show, you will love The Changelog. It's news on Mondays, deep technical interviews on Wednesdays, and on Fridays, an awesome talk show for your weekend enjoyment. Find us by searching for The Changelog wherever you get your podcasts. Thanks to our partners at fly.io.
Jerod:Launch your AI apps in five minutes or less. Learn how at fly.io.
Daniel:Welcome to another episode of the Practical AI podcast. This is Daniel Whitenack. I'm CEO at Prediction Guard, and I'm joined as always by my cohost, Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris?
Chris:Doing great, Daniel. How's it going today?
Daniel:It's going great. I was just commenting before we hopped on that I'm feeling the emotional boost of seeing the sun again after a long Midwest winter. So I'm feeling good today and excited to chat about all things AI, code assistants, and development, because we have with us Kyle Daigle, who is COO at GitHub. Welcome, Kyle.
Kyle:Thank you so much. It's so great to be here.
Daniel:Yeah. Yeah. It's awesome to have you on. Even in your comments about how you like to think about the practical side of AI, this is your place. So I already feel a kindred spirit.
Kyle:I feel very much at home already.
Daniel:Yeah. Yeah. Well, speaking of which, you're of course really at the center of a lot of what's going on with code assistants, with GitHub Copilot, but I'm sure you're also seeing a ton of things out there. I'm wondering if you could take a 10,000 foot view, for those that maybe aren't following all of the things happening with AI code assistants and development. As of now, and it's March 2025 as you're listening to this, what's the state of AI code assistants, and how are people generally using them right now?
Kyle:Yeah. I mean, it's so interesting to see how far I feel like we've come in such a short period of time. Right? It was only a couple of years ago when ChatGPT came out, GitHub Copilot came out. And back then, the novelty was sort of like, it wasn't gonna disappoint you.
Kyle:Right? For GitHub Copilot, you know, you would type some lines and it would respond with, you know, a line, two lines, a method, etcetera. It was gonna complete your code. Very similar to, instead of asking Google a question, I'm gonna ask ChatGPT, and I can keep asking questions. I think what really locked in this enormous transformation was finding a user experience that was simple, straightforward, and didn't need much explanation.
Kyle:Right? Like, I'm a dev. I'm writing code, and it's just working there versus, you know, needing to figure out how to use a tool, figure out how it works in my workflow, and kind of go through hours of onboarding. Fast forward a couple of years, right, not only have the models materially gotten so much better, but we found more and more ways to kind of have that similar joyful expected user experience with code assistance. So it's not just really about writing the code.
Kyle:In some ways, right, it's not about that at all right now. I think that's the bleeding edge of what we're experiencing with code assistants, where it's much, much more about sitting down with a couple of dev friends and saying, hey, I have this idea for an app. But instead of pitching it to your friends, now you're pitching it to your IDE. And that code assistant is gonna jump in and help you get that next step done.
Kyle:So when I look back over this wave and how it went from that sort of, you know, cool, but in retrospect a little bit simplistic behavior of, wow, it really knows what I wanna write next, into the next level of what it's always been like to be a developer, which is I have this idea and now I have to explain it to someone else. We keep finding ways to augment, improve, and speed up what a dev does every single day. And we're at a point now where I think we're seriously starting to blur the edges of, like, what is a developer? I don't think we're there all the way, to be very clear, but a year ago we were talking about that and it was like, sure. And now it's getting closer and closer to saying, you know, what is that distinct need?
Kyle:And that's only really been in a year. And, you know, about two and a half, three years from the start of this journey. And so the code assistant category has always been so interesting to me because it's kinda matching how we work. It's finding ways to augment and improve how we work, not trying to teach us to do something completely different. Which, I think, when we zoom maybe from 10,000 feet to 40,000 feet and look at AI, the best tools are the ones that are just helping us do work we're already doing. The tools that aren't the best, or that are having more difficulty finding traction, in my opinion, tend to make the human contort to get the most power out of the AI tool.
Kyle:And so because we're devs, we're just kinda iterating on what we know. And that's been the power of code assistants and their growth, you know, over the last year or so, I think.
Chris:I'm curious. That's a great point you're making there about the changing developer experience, and it's changing so incredibly rapidly. I mean, month by month there are changes in what it means to be a developer now. And so I know, and I'm sure I'm speaking for a lot of people, I keep reinventing parts of my own workflow as I'm doing stuff, because new tools become available and what I am doing or not doing is changing constantly. It's both amazingly wonderful, given where we've been over the years, but it's also quite tumultuous.
Chris:And if I stop and lean back a little bit and have a cup of coffee and think about it, I kind of go, maybe it's a little bit scary how good it's getting and where that's going. Since you've talked explicitly about the developer experience and the user experience of code assistants, and how rapidly they're moving ahead: what kind of thoughts do you have, and don't even go out a long way, just the next few months in the short term, about how we can adjust ourselves to this ever evolving state, even before we get into the specifics of the tools themselves?
Kyle:Yeah. Yeah. I mean, you know, I think, what we've seen at GitHub by rolling out these tools is, like, we'll talk to customers or I'll just talk to devs or open source maintainers, etcetera. And they can kind of fall on this continuum. Right?
Kyle:This continuum of, I absolutely love every AI tool. I'm gonna use every single one, and I'm gonna try every single one. And then you have the folks who are like, I'm never touching these things ever. They're terrible and they're gonna destroy software. And then there's all the folks in the middle.
Kyle:And so I think the thing that I tend to tell folks is, just like in our careers, we've all had a moment where a new piece of technology comes in. And I feel like for some reason, in at least 50% of developers' minds, it's like, oh, that's just a silly thing, or that's just a toy, or whatever. So I'll just say for myself personally: Rubyist by nature, JavaScript took over, and I'm like, ugh, JavaScript. Well, Ruby is, you know, blah blah blah. And, like, over time you grow and you realize, oh, I should really understand that and try it out.
Kyle:And it may not become my new go-to tool, but it would not help me, or honestly the industry or my peers, for me just to be like, ugh, I'm never gonna touch JavaScript. So I think that experimentation you were talking about, Chris, is the important thing. I see a lot of devs try out a new tool or a new feature or a new library or a new model and then drop back to whatever their floor is, whatever they're most comfortable with, the model they know, etcetera. And I think that is the minimum, because the change is gonna happen just like it's always happened, whether serverless or languages or databases. You pick it.
Kyle:Right? And if you just don't experiment, my fear personally would be that you do start to get left behind, because you don't know how to reach out to the new tool that is actually excellent and actually helpful. And you're kinda stuck behind the eight ball learning something that you could have been learning as you go. I will say, in the next few months even, I do expect way more AI functionality to come outside the editor. Because if you're developing software as part of a team or a company, not as a solo dev or a smaller startup, but a bigger group, we all know writing code is an important part of the job, but it's not all of your day.
Kyle:Right? You're reviewing code. You're building out decision records or architecture diagrams, or you're debating how to roll this out. You're operating a live site, so on and so forth. I think as AI comes into those spaces to fill in the gaps more and more, again, like, you're gonna wanna have those skills from, you know, figuring out the right way to word things when the AI can't just figure it out or the LLM can't just figure it out on its own.
Kyle:Or, again, like every developer, know how the system is working inherently so you can best benefit from it. So as long as you're trying these things out, even if you drop back to your baseline, I think you get set up for more productivity, and I think just more joy when the AI can take more of those mundane tasks away from you. Again, I think over the next couple months, not even the next year.
Daniel:For those devs, some that have jumped right in and figured out their workflow, and maybe the devs out there that are experimenting with the tools: everyone's kinda got their muscle memory of how they develop, the things that they like to use. What are the new muscles that need to be developed for AI assisted coding?
Daniel:Like, the most important ones that you've seen over very many use cases.
Kyle:I mean, I think there's kinda two major ones. Every developer has, like you said, come up with the kind of practices and principles for you personally. Right? We've all worked in systems that have linters and CI and everything that stops you from making mistakes. But there's also just a bunch of things where it's like, I like to work in this order.
Kyle:It helps my brain process what's going on. You know what I mean? And so I think on a tactical level, stating those rules, those prompt instructions, whatever (depending on which tool you're using, there's a different name for it). But I do think that just the act of sitting down and writing out, well, how do I work on this project? Even if you work as part of a company, how do I care about it?
Kyle:I always wanted to find a schema for the back end API before I implement the front end, and then I go back to the back end or whatever the thing is for you. Writing that down and then letting the tool use that, I think, is a dual benefit, which kinda gets me to my second point. The big skill that everyone is, I think, trying to work out is it used to be called, like, prompt engineering, and I honestly think it's just describing a problem. We use so much shorthand and sort of we skip over the details like the hilarious meme of, you know, what the product manager said or what the engineer did, what the designer did. But that is exactly what we have to do with these tools every day.
Kyle:We go build an app that x-y-z's, and suddenly it comes back and it makes no sense. And you go, oh, this stupid thing doesn't work, you know? And yes, sometimes it just doesn't work. But realistically, sitting down and saying, well, what are the must-dos of this app? You know?
Kyle:How do I want it to work? What do I want the flow to be? Whatever those things are. Being able to clearly communicate, particularly in a written form, is, like, crucial in this new era. And I think it's been a skill that in some ways we've kinda let fall down.
Kyle:Like, you know, when I think back to, the era in which I was a much more active dev, you know, I think there was just so much written communication, whether that be blog posts or GitHub's always been remote. And so for us, it was usually like a GitHub issue or Campfire back in the good old days, Slack these days. Just just, you know, writing down what you mean. That's a skill to bring to saying what I want this app to do. And I think that's why when you're on Twitter or X or wherever and you're looking at, you know, wow, how did this example get one shot?
Kyle:It's like ask for the instruction. That instruction was certainly not build a multiplayer game that allows me to fly airplanes. Like, that was not it. You know? It was much more.
Kyle:But with all the practice that came from describing problems socially, describing problems for your LLM, being able to do that regularly, and I really think it's mainly describing problems as the models have gotten so much better. There's a little less like, how do I make it work for each model than there used to be? That's a skill that's gonna serve you both in those tools and with your colleagues, with your manager, with your open source, friends and maintainers just cohesively if you can do it really well.
Chris:I'm curious about that. Just as a two second follow-up: that describing-a-problem skill that you've been addressing, along with the communication skills that support it, should we think of those as developer skills now? And, you know, maybe that is a muscle that we have that we should start exercising as well.
Kyle:Yeah. I think it's something that the best teams, the best companies have considered. And I think we've kinda let a little bit of the 10x developer meme take over and make communication not as big of a deal. There's no major application, site, or app that serves hundreds of millions or tens of millions of people where being able to communicate what's happening or what the problems are isn't core to the job of being a developer. And if we just play it out over time: if AI and LLMs are gonna continue to write more and more of the code, even if it never hits all of the code, whatever that ultimately means, all that's left is collaboration.
Kyle:All that's left is collaborating with your peers, with LLMs, with agents, with designers, with your boss, with your client, whatever that is. And so suddenly, the fact that you can write an app incredibly well, succinctly, well factored and tested, whatever, that's great. That's a great skill too. But the human factor will be: I can look you in the eye.
Kyle:I can read what you're writing, intuit what you're saying, what you're looking for, and describe that in such a way that I can, you know, benefit from all of these tools. It's gonna be incredibly necessary as those, you know, more rote or highly automated tasks, can be done by, you know, AI tools.
Sponsor:Well, friends, today's ever changing AI landscape means your data demands more than the narrow applications and single model solutions that most companies offer. Domo's AI and data products platform is a more robust, all in one solution for your data that's not just ambitious, it's practical and adaptable, so your business can meet new challenges with ease. With Domo, you and your team can channel AI and data into innovative uses that deliver measurable impact. Their all in one platform brings you trustworthy AI results without having to overhaul your entire data infrastructure; secure AI agents that connect, prepare, and automate your workflows, helping you and your team gain insights, receive alerts, and act with ease through guided apps tailored to your role; and the flexibility to choose which AI models you wanna use. Domo goes beyond productivity. It's designed to transform your processes, helping you make smarter and faster decisions that drive real growth, all powered by Domo's trust, flexibility, and their years of expertise in data and AI innovation.
Sponsor:Data is hard. Domo is easy. Make smarter decisions and unlock your data's full potential with Domo. Learn more today at ai.domo.com. Again, that's ai.domo.com.
Daniel:Well, Kyle, one of the things that I was thinking about the other day: there's a generation of developers growing up not having any other experience than this sort of AI assisted experience, both on the educational, debugging, IDE side, but also, of course, using interesting tools, whether it be vibe coding tools or other things. I was listening to the a16z podcast, and, I think it was them, I forget where, but somewhere they mentioned a survey of the latest cohort of Y Combinator.
Daniel:That cohort of companies, and they were saying, like, 95% of the code is AI generated. What kind of impacts are on your mind in terms of this generation of coders, for whom this is what coding is? What does that mean both for organizations that are hiring developers out of that environment, and for the new opportunities that people who maybe wouldn't have broken into developing cool projects now have?
Kyle:Yeah. I mean, you know, I look back on how I personally got started coding, and it was because I wanted to build a video game. And I feel like that's not very unique, but it's one of those things where, like, I enjoyed playing video games, but it's still cool. Exactly. And I wanted to go build a video game.
Kyle:So back in the day, I went to, probably, Barnes and Noble and bought, you know, a C++ book, because you had to learn C++ if you wanted to write a video game. And that thing was, I don't know, probably 650 pages. You know? That was an enormous book. And so that is a huge immediate barrier to entry to learning, because you're like, the reason I came here was to solve a problem.
Kyle:And if I just do 650 pages of how C++ works, I'll eventually get to build a text based video game, probably. You know what I mean? And at GitHub, with our teams in GitHub Education, I get to work with the team there on, well, how do we approach learning in this era in a way where we can bring that problem up front, which is essentially what vibe coding is. Right? I want something in the world.
Kyle:I wanna go build it. I think the piece that is necessary to continue to learn is that problem solving piece. And I just wanna make it accessible to you so you can bring a problem, something you want to go learn. But in the process of getting you to your destination, we can just expose you to the ideas around why this application works this way or why there's two files, one for the front end and one for the back end or whatever. So you're kinda learning as you go, but still focused on ultimately, you know, solving that problem that you're going after.
Kyle:So I don't think it's a bad thing that, you know, these startups or folks online or even me on the weekend, I'm writing an app that is just for me. It's gonna have a user of one in perpetuity. I just want it to get written. You know? I want it to just work.
Kyle:But if we can help folks learn as they go, I think we'll actually create more, you know, craftspeople in a way similar to, like, I always describe, you know, changing out a light switch in my house. Like, if you own a home, we've all probably replaced a plug or a switch, but there's no way we're gonna go into the circuit breakers on our own. We'll probably fry ourselves. So we call in an electrician to come do that. But I'm not an electrician.
Kyle:I just know how to go change the light switches, and that's what I need in order to solve my problems. That's what I think learning coding in the AI era is gonna be, is that you can continue to start from this place of, well, I just want something, and that's fine. I think that's great, and it makes the idea more accessible. I wanna be able to get you to that, you know, journey person stage of, oh, okay. I know how this works.
Kyle:I understand variables. New technology came out. Oh, I wanna try to play with that, etcetera. But it's possible that at some scale and speed, we're still gonna rely on professional software developers in perpetuity, running these apps, building these apps, etcetera. The real thing that's interesting to me about that stat, and I was talking to some teammates about it, is I really think there's a huge opportunity in operating the apps, and I'm a little dumbfounded that that hasn't been something that's been tackled yet.
Kyle:I mean, at GitHub, right, we kind of focus on, like, you've got to production and, okay, great. And then you use Sentry and PlanetScale and Azure and so on and so forth to run it. But I really think that, in all of our probable life experiences as developers, the thing that bothers you is you get paged. You get an email. There's an error, and you're like, crap.
Kyle:What is this thing? That is another place where I feel like, as vibe coding continues, once you run an app and you have thousands or tens of thousands or hundreds of thousands of users, I'm not on the team of, oh, well, that's when you gotta bring in the serious people, no, rewrite it the right way. I really think there's still space to just: okay.
Kyle:Well, an error came in. The AI saw what it was. It resolved it. It wrote a test. The test passed.
Kyle:It deployed it to a canary or to a small version, and you just get a text message that's like, we fixed it. That, I feel like, is the next step of this era of learning how to code, writing, and deploying these apps, versus deploying them and going, uh-oh, now I need a real pro to come in and help me out.
Chris:That makes so much sense. And are you actually seeing anyone out there, kind of early people, doing some of this? Is this in the wild? Because we tend to think of AI in terms of writing the code. Operating the app makes perfect sense. Who's doing it?
Kyle:I think the issue here is it will require us all to work together. I mean, when I joined GitHub, oh my, nearly twelve years ago now, I joined to work on the ecosystem, on APIs and webhooks and how you connect everything with GitHub. And that's really where my passion lies. It's in, you know, the hub part, like, how do we get everything connected. And so I look at how quickly the industry has gotten so excited about MCP and being able to connect tools together.
Kyle:I'm really hoping this hype wave drives into something valuable, which will be if I can bring the context of my error tracker, my database, my two cloud services, my email provider, etcetera, etcetera, all together, then I believe it becomes possible for tools to work together to solve these problems. Unfortunately, right now, each tool is attempting to solve the problem that it can see, and I do not think that's terribly valuable. Right? As an end consumer, I don't wanna use three AI tools to solve an error in production. I want one.
Kyle:You know? I want one tool to do that, or at least I want them in some future magical state where agents all actually work together, and blah blah blah, then eventually that could also happen. But I have yet to see a tool that is tackling this, I think because of the interdependency problem that a tool like that would have in this current, very quick moving AI tooling state.
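The wiring Kyle describes is exactly MCP's pitch: one client configuration pointing at several servers, each exposing one tool's context (errors, database, email) to the same model. A hypothetical configuration in the common `mcpServers` JSON shape; the server names and package commands here are invented for illustration, not real published servers:

```json
{
  "mcpServers": {
    "error-tracker": {
      "command": "npx",
      "args": ["-y", "example-error-tracker-mcp"]
    },
    "database": {
      "command": "npx",
      "args": ["-y", "example-postgres-mcp", "--read-only"]
    },
    "email-provider": {
      "command": "npx",
      "args": ["-y", "example-email-mcp"]
    }
  }
}
```

With all three registered, a single assistant can correlate a stack trace against recent schema changes and draft the customer email, rather than three separate AI tools each seeing only its own slice.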
Daniel:Yeah. Yeah. I think it's somewhat connected to my concern that the ease with which all of this can get built is great, but the burden on the debugging side is potentially growing. And then I guess it's more around decision support, in terms of making good decisions based on these overwhelming pieces of information, because you built just so much stuff and you might not have visibility and intuition around it.
Daniel:What is your thought, because as more code is AI generated, there's potentially not a good intuition even on how things are interconnected. Like, oh, this function exists? Right? I didn't know that this function existed. I've never heard this function name.
Daniel:I have no context there. So what's needed from a tool standpoint to really get the proper context around that kind of decision support, or whatever you wanna call it, for developers in the tools that they're working in?
Kyle:I think, you know, for most of the modern history of software development, most folks are working in a relatively high level language. Right? A lot of abstraction, ultimately; most of us aren't working in C or even lower than that. I think that's needed in order to help us understand our code bases, or our multiple code bases and multiple systems. Right? Like at GitHub, there's no world in which, as a developer who works on webhooks, I'm gonna understand how the Git systems ultimately work. And so for me, the piece that I'm trying to figure out is how can we get more of that higher level abstraction of how the code base is working available to me?
Kyle:And it probably needs to be in a way that, as a human, I can understand how that works, more so than this class, this file, this whatever. I don't really need to understand that. I need to know that the webhook system is having an issue, or this other piece isn't working, or there's a bug over here where we process images. And then I can click down and dive in a little bit more. Because usually, when you have a bug, even if you do understand the system, your goal is to figure out what to ignore.
Kyle:You know? You're like, okay, well, it's not any of this stuff, it's gotta be over here. And I do think that, similar to humans being good at describing a problem to the LLM, the LLM has to help us abstract up to the level I would draw on a whiteboard, and then let me double click in and understand more deeply what's ultimately going on.
Daniel:Yeah. That's a great point. It reminds me of the peak microservices days, when everything expanded. I was at a small company at the time, and I don't know how many microservices we had. And, you know, we had alerting set up.
Daniel:Right? But then the alert would go off and, you know, everything was dependent on everything else. So all the alerts would go off. It was either none of the alerts go off or all of the alerts go off. Then you're like, well, I give up.
Daniel:I like, where do I even hop in here? Yeah. It seems like a seems like a big, big opportunity. I guess in terms of the you know? And I wanna talk about, Copilot, specifically here in a second.
Daniel:But just in terms of the IDE specifically, and at a more general level: how do you see the IDE, and obviously people are trying various things, with what Copilot's doing and Cursor and Windsurf and all of these things, OpenHands and all of that. How do you see that interface morphing over time? Do you see it still being recognizable in a year and a half or two years, or being something completely foreign, maybe, to certain people?
Kyle:I'm hoping that, honestly in the next six months, a startup, just because of the nature of how these things move, can show us a future state that is in some ways backwards compatible. What I mean by that is, like, GitHub has had Workspace. We kind of demoed Spark. All of these are kind of the code stepping into the background to show me the prompts, the thinking, and a preview of what ultimately is being built. But right now, in IDEs, all the ones you've mentioned, and generally all of them that aren't the sort of idea-to-app tools, like Lovable, Bolt, v0, etcetera, they all are still staying code forward.
Kyle:And I think that's necessary, you know, in order to attract an audience right now. Otherwise, you kinda get pushed aside as, like, a fun toy, not really a tool that I'm gonna use as a professional dev. I do think in the future, though, I'm working with the app, the, you know, web app, the actual iOS app or whatever, every time I'm writing code. Like, today I'm writing code, I'm writing a test, and then I'm gonna go and touch the app.
Kyle:That last step is usually where I figure out if I'm right or not. And when something's wrong, why do I have to keep bouncing back and forth between the result, the thing I'm trying to actually build, and the code? And so there's a couple of tools out there now, right, that are kinda showing me the preview, and as I adapt it, the code is changing. And it gets at maybe not the most, but one of the most interesting problems to me in this AI era, which is, like, the magic mirror problem: how do I continuously change a representation and have the code or the text or the README or the spec match what I'm doing in the representation?
Kyle:So, yes, moving pixels is pretty easy. Right? I'm gonna go, oh, I changed this position or whatever. But what if I ask it to do something completely different? Right?
Kyle:How do I make sure that the code always matches that? And I think there's a couple of really interesting, like, attempts at that. But if and when models, tech, specs, etcetera, get better there, then I think IDEs will broadly be, you know, the prompts, the preview, the thinking so I can kinda correct and adapt. And then probably some way for me to, you know, click on a part of the app and not go make it blue, which is the demo ware that we all see, but instead be, well, no. No.
Kyle:No. I want this to be like a dynamic view that shows me this whole other, you know, basically another controller, another view, another app or whatever. And it'll code it right there and show it to me. Then I think we'll be even faster than we think we are kinda like right now, because instead we're going and manipulating by prompting. You know?
Kyle:Listen, okay. Well, now I'm gonna convince you, AI, to go do this thing. But it feels like we're still a couple clicks away, because there's some actual hard problems to solve to let you go back and forth very, very easily, because most companies are still, like, working in code ultimately via CI, build systems, deploy, etcetera. So we wanna make sure that everything matches up in the code base, not just in the app or the visual representation of what we're trying to build.
Chris:So, you know, as we've been talking about code assistants and where things are going and stuff, I wanna get more specific for a moment, because we got you here.
Kyle:Sure.
Chris:Talk a bit about GitHub Copilot specifically. And maybe as a starting point, kinda talk a little bit about, you know, what the current state of GitHub Copilot is, kind of how the user experience is now, and, as a start, what tomorrow and the day after is going to look like, and how you see that affecting, you know, IDEs, adoption of the technology, the whole thing going forward, and kind of start a path into the future from here on that particular item.
Kyle:Yeah. Yeah. For sure. I mean, you know, I feel like most folks are familiar with Copilot 1.0, we'll call it. Right?
Kyle:Like, everyone's like, okay. So it does code completions and cool. And, you know, in the last six months or so, we went from the yeah. It does code completions to, you know, now you can choose to use a variety of models usually within a day, if not the same day, of them coming out. There's chat, you know, the ability to ask these questions.
Kyle:Now there's agent mode available in VS Code Insiders, which allows you to have that experience of describing a problem, watching it do the work, asking it to do something else, working across multiple files, with the context of your entire repository, not just the file that's open, and making these sort of much broader, you know, changes to your application, still in the IDE. As part of sort of the overall Copilot family, we continue to do these explorations like Workspace and Spark, where we're sort of going, like we were just talking about, what does it mean for me to plan out what I wanna build and then let Copilot as an agent go and figure out all the steps that need to be taken across multiple files, multiple repos, to ultimately kind of build that app? So the goal, instead of just saying give me some lines of code or give me a whole, you know, method, is now starting with, well, what problem are you trying to solve? You know? Most of our devs are working in, you know, major open source projects or big companies, or they're starting to learn, etcetera.
Kyle:And so we wanna be able to let folks come from a problem. That could be a prompt in chat. That could be a GitHub issue. That could be a pull request that's already open, and you think that there's a piece of it that's missing. We want you to be able to just state what you're looking for, you know, and then let Copilot kinda take it from there.
Kyle:So we kinda shared a little bit of a, you know, a preview of that path forward, where, you know, we've all gotten bugs and we put them in our issue tracker, and it's, like, not interesting. It's gonna take a fair bit of time to solve, you know, or to resolve. And kinda reposing the question, like, why not just assign that to Copilot and let it work just like a dev would work? You know, trying it out, running the tests, the tests failed, commenting on what it thinks it got wrong, continuing to go, and then asking for a human review. That's something that, you know, again, we're trying to model after that experience of anyone on your team, versus treating it like this magical tool that's, you know, always gonna get something perfectly right. Instead, just like you would, you know, explain with another dev friend, you can go in and help Copilot understand, or just go, yep.
Kyle:That's totally right. Just change these two things, and Copilot will do it, and ultimately deploy. So when we're sort of looking at the code creation process, which generally happens in IDEs, I think that's a big part of it. The part that's, in some ways, like, more exciting for me as a dev is all the other pieces of being a dev, like I kinda said. You know, like, when I'm reviewing code, I'm a human being, and so, like, I may not remember the exact, like, method signature of something, but this doesn't seem like the best way. And so to be able to work with Copilot in those moments, or to let Copilot kinda just tell me, yo, Kyle, this isn't quite right based on, you know, how I know you work, and so it can show it to me and just let me accept the change.
Kyle:Or in Actions, in CI, why not let it fix the failures that come through, or let me define my Actions workflow just by talking to AI versus having to go and build it myself? And so, you know, the real kind of magic, I think, of Copilot over the next year is how can we find moments both in creating code, but also in reviewing it, building it, testing it, deploying it, and let Copilot, probably in a much more agentic fashion, you know, having a multitude of Copilot agents that can work together and use the context not just of your code, or all the code in your organization, but also the tools that you use. If Copilot can reach out and get the information from them using MCP or a Copilot extension, then suddenly it can take over the tasks that you probably didn't wanna do in the first place, to be honest, you know, less so those sort of interesting, novel, I'm-building-my-business-around-this tasks. It'll help you do all those things. But at the very least, let's let it take away the kinda rote pain work that I think, you know, every dev kinda has in their backlog, but it's been sitting there for the last, you know, two years, three years, or however long it's been now.
Kyle:And so Copilot's, you know, really, really trying to allow you to just go from problem to app, or, you know, problem to fix, via these new experiences in the IDE, and VS Code in particular. But also now in more IDEs; like we announced, you know, Xcode now has chat. A bunch of other editors also continue to have chat. So if you're in those environments, you can still use, you know, the power of Copilot. And then on GitHub.com, you'll see all those new experiences coming in: code review, being able to use an agent to, you know, build an actual solution for you from an issue, and kinda fix the other almost 80% of dev time inside the SDLC process that devs are working in, versus only focusing on that editor workflow.
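The agent picture described here, a coding assistant that reaches out to named external tools (CI, an issue tracker) for context before acting, can be pictured with a toy dispatch loop. To be clear, this is a sketch, not the real Model Context Protocol wire format or the Copilot extensions API; the tool names, the request shape, and the sample data are all invented for illustration.

```python
# Toy tool-dispatch loop in the spirit of an agent gathering
# context from external tools. NOT the real MCP protocol or the
# Copilot extensions API; names and shapes here are invented.

def failing_tests(repo):
    """Pretend CI tool: report which tests failed for a repo."""
    return {"repo": repo, "failed": ["test_login"]}

def open_issues(repo):
    """Pretend issue-tracker tool: list open issues for a repo."""
    return {"repo": repo, "issues": ["login 500s on empty password"]}

# Registry mapping a tool name to the callable that serves it.
TOOLS = {"ci.failing_tests": failing_tests, "issues.open": open_issues}

def gather_context(repo, tool_names):
    """Call each requested tool and collect its result by name,
    recording an error entry for any tool we don't know about."""
    context = {}
    for name in tool_names:
        tool = TOOLS.get(name)
        if tool is None:
            context[name] = {"error": "unknown tool"}
        else:
            context[name] = tool(repo)
    return context

# An "agent" would assemble this context before proposing a fix.
ctx = gather_context("octo/app", ["ci.failing_tests", "issues.open"])
```

The point of the sketch is only the shape of the interaction: the agent names the tools it wants, each tool returns structured data, and the combined context is what the model reasons over.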
Daniel:How do you think, I realize this is probably a complex question, but I get it posed to me a lot.
Kyle:Sure.
Daniel:So I figure you're probably the best one to answer, or at least have an opinion. But oftentimes I get a lot of questions around this side of, I mean, even in what you just described, kind of, here's an issue, a fix, you know, agents that can do this, especially around, like, the open source community and code generation. How does this kind of influence, you know, licensing and kind of the ecosystem of open source over time, from your perspective?
Kyle:Yeah. I mean, you know, with Copilot and what it's doing ultimately, that code that is being generated, whether that be generated for, you know, your business or for an open source project, we have tools in Copilot where you can basically say, hey, if this matches any public code, don't give me a match. And then it won't. You know, it's not gonna match anything from the public code base that it has access to.
Kyle:And so in general, for folks that are most worried about, you know, well, where is this code coming from? Is it using code and generating code that looks like other public repos that I don't wanna match on? It can do that just by setting a setting. And for some of our SKUs of Copilot, we require that to be on. You know, you have to have that on in order to sort of, you know, protect yourself if there's any concern around, yeah, where is this code coming from?
Kyle:What's the license, etcetera? I think as we continue to move forward more and more, and as we're looking at all the tools, you know, out in the market, as developers I think we can all kind of intuit that there's only so many novel ways to write the same exact thing. And so you'll sometimes hear, or I should say I'll sometimes hear, particularly from open source devs, you know, going like, oh, Copilot won't write this for me. You know? Why won't it give me the answer?
Kyle:And the answer is because that loop that you're trying to build is complex enough that it triggers us to look for a match. And because we have that blocking on, because, you know, you've turned it on, or the business has, it won't give you a return. And so it really depends on the business's preference, or the user's personal preference, on whether they want that public matching to come back to you. But in general, especially as we get into agent mode and we get into, you know, the ability to kinda create close to an entire app, you know, or at least a very complex set of files, you know, Copilot's gonna iterate and iterate and give you something that, again, doesn't match that public set if you have that blocking turned on, but ultimately, you know, try to solve that problem for you.
Kyle:Every other tool has a different set of, you know, obligations like this, or whether it's gonna use the suggestions, etcetera. But I think at the end of the day now, our goal is really to make sure that everyone's empowered to use this tool. They can choose, you know, how they want to use it and what kind of responses and suggestions they want back. And that's why we give Copilot, you know, for free to students and maintainers of very popular open source projects. And we're trying to find more ways to just make sure everyone can have the tool, if they want to use it.
Kyle:Now with Copilot Free, basically everyone can use at least a portion of Copilot, and then kinda decide for themselves what they're most comfortable with as we keep going down this, you know, AI future of coding.
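The match-and-block behavior described above, where a suggestion that overlaps public code enough to "trigger a match" gets suppressed, can be pictured with a toy filter. This is emphatically not Copilot's actual mechanism; the corpus, the whitespace tokenization, the n-gram size, and the overlap threshold below are all illustrative assumptions.

```python
# Toy sketch of a "block suggestions matching public code" filter.
# NOT Copilot's real algorithm -- the corpus, tokenizer, n-gram
# size, and threshold here are illustrative assumptions only.

def ngrams(tokens, n=3):
    """Collect the set of consecutive n-token windows of a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def build_index(public_snippets, n=3):
    """Index every n-gram seen anywhere in the 'public' corpus."""
    index = set()
    for snippet in public_snippets:
        index |= ngrams(snippet.split(), n)
    return index

def allow_suggestion(suggestion, index, n=3, max_overlap=0.5):
    """Allow a suggestion only while the share of its n-grams that
    also appear in the public index stays under max_overlap."""
    grams = ngrams(suggestion.split(), n)
    if not grams:
        return True  # too short to judge; let it through
    overlap = len(grams & index) / len(grams)
    return overlap < max_overlap

# A one-snippet "public corpus", space-separated for the toy tokenizer.
public = ["for i in range ( 10 ) : print ( i )"]
index = build_index(public, n=3)

print(allow_suggestion("for i in range ( 10 ) : print ( i )", index))   # False: blocked
print(allow_suggestion("total = sum ( values ) / len ( values )", index))  # True: novel
```

The takeaway mirrors the conversation: a sufficiently complex, verbatim-looking chunk trips the filter, while a novel formulation passes, and the whole behavior hinges on whether the blocking policy is switched on at all.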
Chris:As we start to wind up here, we often will ask guests kind of, you know, what we refer to as the future question, kind of going forward now. And even though we have covered so much ground, I'm still going to ask you that. I will say that as you look into the future, you know, we've covered everything from AI in terms of productivity with code, to the developer experience, to the GitHub Copilot product itself, and a bunch of tangential stuff. You go wherever you wanna go. Where do you think, as you are kind of finishing up for the day and you get through the crush, and you have a glass of wine, or maybe you're getting in bed for the night and your brain's kind of spinning in open mode, you know, where you're being creative, where does your brain go on where all this is going to go for us, and what kinds of things might be next that we haven't already talked about?
Chris:You know, what would you like to see aspirationally coming down the pike? Take us into your brain for this last question.
Kyle:Yeah. For sure. So, you know, if I were a good corporate citizen, I'd be pitching you on something from GitHub. But that's not the honest answer, and we're all developers in some way, so people will understand. I think true ambient AI that understands me, and has access to my information and what I choose to share, is the thing I'm most interested in coming right now.
Kyle:You know? I think we've seen the power of the LLM, and I don't think we've honestly tapped into the vast majority of it. We're still broadly speaking in chat models, and that's incredibly boring to me. You know? I get it and why it's that way, but, like, I really think the next step is gonna be more about if you have all of my emails, my calendar, all the things that I'm currently sharing, that could be my purchases on Amazon, that could be, you know, access to sort of my doorbell camera and you see what I'm wearing on the way out, etcetera.
Kyle:There's all these experiences where we go to Google and we go, what's the weather today? Or we ask our assistant, like, you know, a tool at the house or whatever. Or more complex, you know, like, what was the last episode I listened to of Practical AI, and what was it about? Because I'm going into a podcast recording, and I wanna remind them that Matt Collier is a friend of mine, and he did a great job with Sidekick, and so on and so forth. That ambient AI, or that ambient intelligence, where we're not, like, invoking an assistant, it's just telling me what I need to know when I need to know it, because it has all that data about me, is what I want. I desperately, desperately want it. And I think there's a couple of, like, really interesting attempts at this. Like, there was Rewind AI, that was a Mac app, and they kind of pivoted into this Limitless tool, which is like a wearable plus all the apps, that has the same idea. There's been a couple of, I won't, like, name them, but memed versions of this thing, and that's not really kinda what I mean.
Kyle:I really mean the ability to finish my thought, because you have all the context that I need, and I didn't have to set up 55 integrations, or IFTTT or Zapier, to move all my data into a single place so that GPT-4.5 can answer it, or whatever. You know what I mean? And I don't think we're that far off. I find it incredibly interesting that, like, iOS and Apple Intelligence have been attempting to come up with what their next move is, but I actually have some hope that they may solve this, because they haven't shipped their solutions, you know, and they kind of publicly are talking about how it may take longer than they thought. The biggest gap to this isn't LLMs.
Kyle:It isn't connecting all the data. It's privacy. I don't want all of this data sitting in an arbitrary startup's cloud, or wherever, you know, to do this. For as powerful as all of our laptops are, there's still limits, you know, about how much it can do, and how much data it has, and what models it can run, etcetera. I think someone that can take all the information, and do it in a way that I'm personally comfortable with from a privacy perspective, both for me and for anyone that is inherently, you know, like, getting data sent from them into this tool, you know, like if I was recording my screen right now, for example; to be able to have all that and actually help my day-to-day life in a real way, you know, reminding me of what's coming up and helping me do those things, without the personification of a "hey, Siri" or "hey, Alexa" situation, just text. That's what I sit up thinking about at night: how to crack the privacy nut, because I think that'll be required for us to do this in a way that is both really powerful, but also, I think, morally correct and, you know, safe for all of us to benefit from, versus accidentally slipping into an even worse dystopia by letting all this information kinda, you know, get out into the wild in a way that we don't want.
Daniel:That's a great way to end it, Kyle. I also have hopes for similar things, and we end on the same wavelength again. Really appreciate you joining.
Kyle:Thank you so much. Thank you so much for having me.
Jerod:Alright. That is our show for this week. If you haven't checked out our changelog newsletter, head to changelog.com/news. There you'll find 29 reasons. Yes.
Jerod:29 reasons why you should subscribe. I'll tell you reason number 17. You might actually start looking forward to Mondays. Sounds like somebody's got a case of the Mondays. 28 more reasons are waiting for you at changelog.com/news.
Jerod:Thanks again to our partners at fly.io, to Breakmaster Cylinder for the beats, and to you for listening. That is all for now, but we'll talk to you again next time.