Cloud Realities

AI is transforming software development—redefining roles, creativity, and community, while challenging developers to embrace ambiguity, orchestrate specialized agents, and stay human through empathy and curiosity. Will AI make developers more creative, or will we forget how the machine really works under the hood?

This week Dave, Esmee, and Rob sit down with Scott Hanselman, VP of Developer Community at Microsoft, for a wildly energetic, deeply human, and brilliantly practical conversation about how AI is reshaping software development, and what that means for creativity, careers, and industries of all kinds.
 
TLDR
00:30 – Scott Hanselman introduced as a special guest from Microsoft Ignite 2025.
02:16 – Scott discusses how AI is fundamentally redesigning all industries.
09:50 – Don’t anthropomorphize AI, I want the computer from Star Trek!
15:30 – Delegation: contrasting the roles of humans and agents.
18:30 – The importance of supporting early career growth and learning.
26:30 – Why specificity matters in AI and coding.
35:30 – Making AI delightful and fun.
45:30 – Always put humans first in AI development.
46:00 – Each morning I think about lunch.
 
Guest
Scott Hanselman: https://www.hanselman.com/
The Hanselminutes Podcast: https://www.hanselman.com/podcasts with over 1,025 episodes!
 
Hosts
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/
Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/
 
Production
Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
Dave Chapman: https://www.linkedin.com/in/chapmandr/
 
Sound
Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
Louis Corbett:  https://www.linkedin.com/in/louis-corbett-087250264/
 
'Cloud Realities' is an original podcast from Capgemini

Creators and Guests

Host
Dave Chapman
Chief Cloud Evangelist with nearly 30 years of global experience in strategic development, transformation, program delivery, and operations, Dave brings a wealth of expertise to the world of cloud innovation. He is also the creator and main host of the Cloud Realities podcast, where the team explores the transformative power of cloud technology.
Host
Esmee van de Giessen
Principal Consultant Enterprise Transformation and Cloud Realities podcast host, Esmee bridges gaps to drive impactful change. With expertise in agile, value delivery, culture, and user adoption, she empowers teams and leaders to ensure technology enhances agility, resilience, and sustainable growth across ecosystems.
Host
Rob Kernahan
VP Chief Architect for Cloud and Cloud Realities podcast host, Rob drives digital transformation by combining deep technical expertise with exceptional client engagement. Passionate about high-performance cultures, he leverages cloud and modern operating models to create low-friction, high-velocity environments that fuel business growth and empower people to thrive.
Producer
Marcel van der Burg
VP Global Marketing and producer of the Cloud Realities podcast, Marcel is a strategic marketing leader with 33+ years of experience. He drives global cloud marketing strategies, leveraging creativity, multi-channel expertise, and problem-solving to deliver impactful business growth in complex environments.

What is Cloud Realities?

Exploring the practical and exciting alternate realities that can be unleashed through cloud-driven transformation and cloud-native living and working.

Each episode, our hosts Dave, Esmee & Rob talk to Cloud leaders and practitioners to understand how previously untapped business value can be released, how to deal with the challenges and risks that come with bold ventures, and how human experience factors into all of this.

They cover Intelligent Industry, Customer Experience, Sustainability, AI, Data and Insight, Cyber, Cost, Leadership, Talent and, of course, Tech.

Together, Dave, Esmee & Rob have over 80 years of cloud and transformation experience and act as our guides through a new reality each week.

Web - https://www.capgemini.com/insights/research-library/cloud-realities-podcast/
Email - cloudrealities@capgemini.com

CR117 Redesigning industries with AI with Scott Hanselman, Microsoft
[00:00:00] You got lots. We can fill an hour like that. Lots to go. Yeah. Hit record.
Welcome to Cloud Realities, an original podcast from Capgemini. And this week, a conversation show about redesigning not only the software industry for AI, but maybe all other industries for AI. I'm Dave Chapman. I'm Esmee van de Giessen and I'm Rob Kernahan. [00:00:30]
And joining us this week to talk about this, and you're gonna hear him in a second, is Scott Hanselman. He's the VP of the developer community at Microsoft. And while we were at Ignite a couple of weeks ago, we took the opportunity to talk to Scott, didn't we? And I think we felt that Scott was somebody we'd been looking to get on the show, so we decided to do a separate recording with him.
And how did it go for you?
I'm so excited. It's, it's so thrilling. His energy, but also his way of, of talking about [00:01:00] what he does in a very understandable way. Not always so technical, which I love.
He's an individual, I think, who has thought very, very deeply Yeah. About, about what AI actually is and means.
And I think he, he resonated with us all quite a lot in that conversation. Yeah. He also makes it very realistic, right? Yeah. Like, not, don't get too, uh, too excited. But still, you also feel his enthusiasm. So yeah, it's a great, great show. I hope our listeners love it as well.
And you know what the good news is, Es? We can [00:01:30] re-listen to it all over again. Yeah.
He, he's, he's coming in a second, but even better than that, Rob made it all the way through that without spilling anything on himself, and he's still wearing clothes. He hasn't had to turn up without clothes on because, uh, if you recall, on the live shows we did a couple of weeks ago, Rob kept spilling stuff on himself.
How you doing, Rob? I'm all right, Dave. Thanks for bringing that up again. Yeah, you're alright. Right? I thought I'd pushed that trauma out of my mind. It's, it's been a couple of weeks since we mentioned it, so I thought I'd better bring it up again. No, it's appreciated. It is absolutely appreciated. Thank you. I appreciate you. Thank you very much, mate. [00:02:00] Yeah, yeah, yeah. Anyway, so look, we're gonna talk to Scott now, so let's welcome him and do a pretty deep dive into the impacts of AI.
Scott, good to see you. How's it going? Scott is the VP of developer community at Microsoft. How you doing? Dude, the darkness persists, but so do I. Yeah. Good man. Wow. Good man. [00:02:30] And what have you been up to? So you joined us at Ignite this time last year, just about. Was it Ignite? It all blurs together. In Chicago.
Was in Chicago. Was it Chicago? It was, it was in Chicago, yeah. Has it been a year? That's crazy. It is. It is mental, that. Well, in some ways it feels like forever ago, but also two minutes ago. It's strange. Yeah, yeah. Time plays tricks on you. So, yeah. Yeah. The days are long, but the years are short.
That is the way, that is the way. So how is, uh, ignite so far this year? Uh, it's too big. Too big. Too big, not too big. There's a lot of [00:03:00] humans. That is true. There's a lot of humans. I think, uh, what is it, 17, 18, 19, 20,000 people. It's a lot. Yeah. Yeah. So yeah. Every once in a while my social battery depletes and I run across the street to Chipotle.
Yeah. Uh, and hide out with a burrito, and then I return when my battery is replenished. That's a good strategy though, isn't it: go, go recharge with a burrito on your own. Well, but the trick is that when you're, when you're on a work trip, you're on an expense account. And when you're on an expense account, you can ask for extra meat and, uh, it's, you know, it's a victimless [00:03:30] crime, and there are no calories if you didn't pay for it as well. So there are in fact no calories on business trips. In fact, you know, that's right. McDonald's at the airport doesn't count. Oh, it's not really food.
That opens up opportunities. Oh yeah. We're actually going for burritos today. There's a, there's a place, it's actually called, uh, El Tia, I think, and it's like, it's a bit of an Instagram sensation here in Chicago. So we're gonna go. It's in Chicago? We're in San Francisco. San Francisco. That is gonna be a problem. The podcast does not go that [00:04:00] far, my friend. Why? Why, no. And the jet lag's bad. Yeah. Yeah. Why did we go on a plane for a burrito, though? Yeah, yeah, yeah. Nobody said the jet lag's bad. Yeah. So apparently it's very good, the, uh, El Tia place. It's quite close to where we are. I take all of my, uh, taco-based, uh, suggestions from the English. Yeah, absolutely. We're well known for it. We're well known. Since Chipotle opened in the UK, we've, we've become experts on it.
Yeah. You don't go for a Nando's? No. Oh no, I do, I do go for a cheeky Nando's. I love a cheeky... What level? Level [00:04:30] of, uh, what level of sauce do you go for? I'm a, I'm a medium, medium-high. Medium. Medium-high. Medium's good. I'm a medium. Medium. I'd like a medium, but I do, I am, I have been known, when my wife is along with me, to go for a lemon and herb. I was gonna say, 'cause herb has an H in it. I understand. Le- lemon herb. Whereabouts? You... I have no idea what you're talking about. You don't know what Nando's is? We're gonna have to edit all of this out. Oh, we'll leave it in. Never mind, this entire preamble of the podcast is all in the bin because Scott has, has let the show die. So, Nando's. Okay, so this is, well, maybe [00:05:00] there's some listeners also that, that don't have a... okay, let's... Chipotle, in computer science terms. Hmm. Chipotle is a projection of what Americans think Mexican food is. So it's not really Mexican food, it's just their projection on it. And then Nando's is a Portuguese, South African perspective on chicken. That's right. Yeah. That is now well known. And peri-peri sauce.
Yeah. The peri, specifically peri sauce, P-E-R-I, peri-peri sauce. So you get, you get [00:05:30] basically a chicken sandwich. Mm-hmm. But then they have all the sauces in different... you can go pick your sort of desired heat level, basically. And it starts at lemon and herb, and it works its way up to, like, hurt-your-face.
Yeah. And, uh, it's very well known. It's not just your face. Yeah. And the, the nearest Nando's from me in Portland, Oregon is probably 900 miles away. You know, a thousand-plus kilometers. I'd have to go to Canada to get a, to get a cheeky Nando's. But when I'm in South Africa, we stay in South Africa for about a month every year, and I go to Nando's regularly. [00:06:00] It's all over the UK. Good stuff. So if you see a Nando's, just run, don't walk. Yeah.
Right. On that note, let's pivot back to Ignite. Um, right. We did, we did a number of themes yesterday, Robert. Let's... We did indeed. Let's just refresh ourselves. We talked about Agent 365, a plethora of new subject-specific agents.
The, uh, the, the kind of higher visibility of security and trust, the scaling and adoption, and this whole kind of [00:06:30] emergence of this term, frontier firms. Yes. Uh, so there's a few other things in there as well. Um, IDC actually did a great report, a global study of how frontier firms are transforming business. And if you go to the Book of News for Ignite, you can download the report. It's pretty good. There's loads of stats, uh-huh, because it tells you where you are, and it goes to your point about adoption lag. And you can see some numbers and some graphs on that. Um, Azure got some updates. So in Foundry there's lots of AI to help. There's some new database announcements around that. [00:07:00] Infra has been updated by embedding Copilot into the Azure command line and things like this. So that's going out as well. Edge got a load of updates that sort of snuck under the radar a bit. Huge. That's a browser, isn't it? Yeah, the browser. Not Edge edge, but the Edge browser. And then, uh, loads of security updates, too many to mention, right across the whole thing, which is going back to that responsibility, control, et cetera. And Windows got a lot of announcements as well about this sort of, um, Copilot being embedded in there for your agentic OS experience.
Very good. You know what I would find [00:07:30] valuable would be to cut through the, the BS and talk about, like, what does it mean to be an agent? Yeah. Terrific. Go for it. You know what I mean? Yeah. I, I, I had a, a young person come to me at an Ask Me Anything that I did over there, uh, just a couple of booths over, and they said, I, I, I still don't know what an agent is.
Oh, okay. And I thought that was a fair and vulnerable thing for them to say. Sure. And I feel like, uh, again, I come at this from a developer's perspective, uh-huh. So like, maybe this is the right [00:08:00] or wrong altitude. Mm-hmm. But I wanna cut through kind of the nonsense. Let's go. So if I were to do some task, like let's say that I had a text file, I mean maybe a CSV, a comma-separated file, and I might, you know, suck that into Excel and do some stuff. We've all messed with CSV files before. If you copy-paste that CSV file into ChatGPT or Copilot, and you say, do this thing, then they do it, and then you copy-paste it out. Right. So you, the clipboard is now your integration. Mm-hmm. But then if you wanted [00:08:30] the LLM, the large language model, to have the ability to see your files, you would give it the ability to, like, read a string and write a string, or read a file and write a file.
Mm-hmm. You would give it tool calling. Yep. And say, all right, you're allowed to do these two things: you can read strings from files and write strings to files. So then it might be able to suck that in. But then you ask yourself, hey, computer scientist, why wouldn't I just write a PowerShell script?
Hmm. Why wouldn't I just write a cron job and a bash script? Right. So the question is, is there ambiguity [00:09:00] in the task? And that's how I am separating whether it's an agent or it's a batch file, whether it's an agent or it's a cron job. Okay? Agents are good at ambiguity. They can make tool calls, meaning that they can call this executable or can talk to Excel.
So if I had some ambiguous task, that's a great opportunity for an agent to kind of deal with that ambiguity. Hmm. But if I had an unambiguous task, it's the same comma-separated file every hour on the hour, I'm not gonna run a [00:09:30] scheduled agent. That's silly. So in your head, when, when there's the debate about what AI is and whether it should be treated... whether it should have its own HR department, or whether it should be treated as like a semi-conscious thing, are you recognizing that as a valid discussion around it? Or in your head, is it a, is it a batch job that could deal with ambiguity?
Well, so first, I don't believe in anthropomorphizing agents. I'm so glad somebody's finally said this. I, you know, I've been on about this for ages. [00:10:00] It's just, it's not a person. Yes, it's not, you know, Anna from HR or, you know, Frankie from the data department. It's not a person. It shouldn't be treated as a person. Agreed. It shouldn't have a face. Um, it is... I want the computer from Star Trek. Yes. Right? Because on Star Trek, when they're solving problems, they're vibing with the computer. They're brainstorming: hey, computer, could you run that test? Hey, computer, pull that data in and project it this way. Show me that graph. You never have the computer go, I'm gonna go ahead and [00:10:30] interrupt you right there. I've solved this. Right. We, we solved one. And actually, I didn't like your tone of voice. Right, exactly. So I've brought in the HR agent to have a chat with you. So that's, that's not the Star Trek future that we were promised.
Yeah, yeah. So I tend to have, like, conversations with it, but I recognize that I'm talking to myself in the mirror. Mm-hmm. Right. It's, it's rubber ducking, right. So it's, no, it's not a person, it's not a face. Where I draw the line is: is it an ambiguous business task, or is it an unambiguous one? So saying something like, [00:11:00] go through the CSV file, it might be malformed, and fix it,
Yeah, yeah, yeah. Is different than do this thing on a scheduled task. Yeah. Because I want it to be reliable. So I might ask an AI to write me a batch file. Mm-hmm. And when I say batch file, I mean, you know, logic app, PowerShell script, whatever. Yeah, yeah, yeah. And then remove the ambiguity. Right.
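[Editor's note: to make the agent-versus-cron distinction concrete, here is a minimal Python sketch of the two shapes Scott describes: an agent loop that hands an ambiguous task to a model along with the only two tools it may call, versus a plain deterministic script for the unambiguous hourly job. The call_llm stub is hypothetical, a stand-in for whatever model client you use, not any specific product's API.]

```python
import csv
import io

def read_file(path: str) -> str:
    """Tool 1: the only read capability we grant the model."""
    with open(path, encoding="utf-8") as f:
        return f.read()

def write_file(path: str, content: str) -> None:
    """Tool 2: the only write capability we grant the model."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(content)

TOOLS = {"read_file": read_file, "write_file": write_file}

def call_llm(task: str, tools: dict) -> str:
    """Hypothetical stand-in: wire up your model client of choice here."""
    raise NotImplementedError

def run_agent(task: str) -> str:
    # Ambiguous task ("the CSV might be malformed, fix it"): let the
    # model decide which of the two allowed tools to call, and when.
    return call_llm(task, tools=TOOLS)

def hourly_batch_job(path: str) -> None:
    # Unambiguous task, same file every hour: no model, no dice rolls,
    # just a deterministic transform you'd hang off a scheduler (cron).
    rows = list(csv.reader(io.StringIO(read_file(path))))
    # ...fixed, known transformation goes here...
    write_file(path, "\n".join(",".join(r) for r in rows))
```

[The ambiguity test is the fork: if the task can be written as hourly_batch_job, schedule the script; only if it can't does it earn an agent.]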
What, what questions would you ask yourself to find out whether it is or is not ambiguous?
I think human judgment is the most [00:11:30] important thing that we have, and the most important thing in the next five or ten years. So I would think about what are the pieces of context that I have, as a human who is running this business, that the AI doesn't and can't have. Hmm. Right. And I, I feel like we see the big picture. Like, no amount of an infinite context window means that it understands all the conversations I've had and all of the personalities and all the things involved in the business.
So I would look at what I know about this business problem that we're trying to solve and say, [00:12:00] okay, um, you know, we're gonna be sucking data in from this, this, uh, supplier. Mm-hmm. Alright, help me, AI, work through that problem. Okay, cool. Now you've helped me work through the problem. All the ambiguous bits have become concrete.
Right, now let's codify this, and then I would make tools, just like read-string and write-string, that are now reliable business tools: Azure Functions. Hmm. You know, and then I would have the agents be, [00:12:30] I would like the agents to be as small as possible so that they are specialized. Hmm. Right. And doing the ambiguous bits and dealing with the fluffy, unclear parts. But the tools then that they call do things reliably. Oh, gotcha. That's the power. But that's starting to touch on the sort of discussion that's going on about where does orchestration go? So you are orchestrating a load of things together.
Yeah. I'd be really interested in your view on the reality of orchestration. I know you said you were doing, like, the vibe coding, but what is, what [00:13:00] is the future of what developers become? Yeah. And how are they gonna orchestrate these agents that you talk about, and how do they deal with that? How should we be thinking about it? 'Cause you've got a, you know, a really good perspective on it. Why does an orchestra have a conductor? Right? Yeah, exactly. Mm-hmm. Right? Like, why is it? Is it tradition? So if I'm the conductor managing this orchestra, what am I there for? I'm there to keep the beat, but I'm also here to set the tone.
Yeah. To set the culture and all that kind of thing. I don't want agents setting the culture. I don't want them setting the tone. [00:13:30] Right. You don't want some rogue violinist to take control. Right. Or podcaster. Uh, but I, but I do like that I'm getting more code written. Yeah. Because the toil... like, I, I like to say that robots should do the work that is dull, dirty, or dangerous.
Mm-hmm. Right? So it's like, hey, bump, bump this version, bring in the latest packages and do a security sweep, and I'll be back in five minutes. Yeah. Send that off to Copilot. I'm gonna do this [00:14:00] fun bit of coding. So I see myself as a delegate. A delegator, right, of four or five agents. But that's about as many as I can keep in my brain, because now I'm multitasking, uh-huh, and I don't want to do more work poorly because of my own inability to, to context switch. Right. So we have to acknowledge also that in, in becoming a manager, like, we've all probably been individual contributors that became managers. Yeah. Mm-hmm. And then suddenly we learned that the real work of a manager is context [00:14:30] switching. Yeah. Right. Do you see the, do you see that skillset being similar? So let's say you are working with a team of devs versus a team of agents.
Yeah. Is the, is the delegation process and the thought process about how you're parceling that out similar?
That's a great question. Let's think about a team that is five humans with dreams and mortgages and feelings, and five agents that are specialized and don't have names or faces. Mm-hmm. Yeah. I would [00:15:00] probably not want to say, you know, hey Robert, you know, I'm really sorry to bother you.
Like, can you do this, this, the CSV file? I need you to parse it. I know it sucks. I mean, that, that is what it's like. Esmee, you, you know, Esmee, I know it's no fun to do the versioning stuff. Would you manage that? Like, I wanna make sure you're doing fun, interesting work. Mm-hmm. I wanna be respectful of your work.
I will give those to the agents without fear of, of judgment or concern, because that work is dull, dirty, and boring and tedious. Right, right, right. So [00:15:30] I would like my team of five humans and five agents to have the agents doing the yucky, ambiguous tedium. And I would like the creativity and the humanity and the interesting business problems with context to be done by my five humans with hopes and dreams. Is that, is that a microcosm in your mind of the AI contribution to sort of broader industry and society? E.g., it's heading in that kind of second-age-of-enlightenment sort of direction? I think that's a little overly positive, [00:16:00] because there is dull, dirty, and dangerous work being done by human beings with hopes and dreams and mortgages every day.
Mm-hmm. And we need to make sure that they have work to do if agents go and do that work. So right now we think about agents in the context of knowledge workers. Right? Right, right. Like, there's just a lot of really yucky work happening right now that's copy-pasting from one Google Sheet to another, or from one M365 SharePoint instance to another.
Yeah. And a lot of that work will go away. The question is, [00:16:30] was that a valid job in the first place? Right. Right. And, and, and just to extend that slightly, there's the conversation, isn't there, about early career. Mm-hmm. And if, if, you know, you go into law or something like that, you spend a great deal of your early career mm-hmm.
effectively crunching information, effectively pulling stuff together, your grunt work, doing research, doing the grunt work that ultimately kind of underpins your experience and skillset. Yeah. So, and, and there's a debate, isn't there, about, like... some of it is naturally [00:17:00] AI-friendly, as we currently understand how agents are gonna function and how copilots function and things like that.
Um, uh, how, how do we square that circle? So how do, how do, how do we educate our next generations to be able to come up through very disrupted industries, to ensure that when they get to the top of their professions, they're still able to function at the level we want them to function?
I think people need to have hard conversations that have less buzzwords and more [00:17:30] humans in the loop.
I think we need to acknowledge that. The reason that I have a 30 plus year career in tech is that I was not kicked out of tech. I was not bullied out of tech. I was given space to grow. I was given interesting work and if an AI is gonna chew up some young person and dehumanize them and make them feel bad, and then they get laid off and then they don't work in tech anymore, how are they gonna have a 30 year career in tech?
Yeah. If we have a person coming out of university who, uh, gets chewed up in three [00:18:00] months and they're like, oh, this sucks. And they're like, I'm gonna go do something else. They're not gonna have a rewarding career in tech. Hmm. So we need to acknowledge that early in career people aren't being given the space to learn because we're spinning so fast.
I think that we need to have more, uh, we need to put more onus on the senior engineers to grow the juniors. Hmm. Like, if I've got five or 10 years left before I retire, I need to be focusing exclusively on growing the young [00:18:30] people, because that 25-year-old is the next VP, the next podcast host. What do you, what do you think, Esmee?
I absolutely agree, and I, I don't know if he emphasized that enough. 'Cause being senior also means helping others grow. Yeah. And hopefully make them better than you ever were.
And, and there's a bit about, when you talked earlier about the toil, like the versioning, I think you, you still have to do that yourself, and it is boring and it is tedious, but if you don't understand how it works underneath, you can't build the knowledge above.
So there is some form of foundation. Well, [00:19:00] maybe an agent could do it, but we maybe push the agent aside and say, no, you have to take this task on. I, I just did this AMA over here, I was mentioning before, and I used this analogy and they laughed, so it must have worked. And I said, you know those medical shows where somebody's choking, and someone says, does anyone have a pen?
Oh, yeah. And they gotta stab somebody in the neck with a pen, a nice tracheostomy. You love that in a drama. And then you get, you know, there's always some senior doctor who's like, who's, who's... Robert, you're a first year. Oh, this is my first day on the job, sir. Hey, have you ever stabbed anybody in the [00:19:30] neck with a pen?
This is your chance. I, and I did think about doing it today, even the other day, but yeah. Yeah. Well that's just, yeah. That's just bitterness. That's just violence. That's just violence. Yeah. So, so that, that, that moment, like someone's dying on the table. Yeah. And we're gonna stab in the neck with a pen.
That's a learning opportunity for this young person. Yes. But what happens now is that a website goes down, or Cloudflare or something: oh, get outta my way, I'll do it myself. Yeah. Right, right. We need to take every single moment of those opportunities
[00:20:00] and grow. Would you say that's one of the biggest cultural patterns that need to change in the developer community? Or is it something else? I think it's just the business community. Hmm. Like, all this talk of multipliers. Mm. If I... if you're gonna make me 20% more efficient, yeah.
Then I should have Fridays off. Hmm. Yeah, that's never gonna happen. Yeah, that's, yeah. You know what I'm saying? Usually not part of the conversation.
So we're talking about 10x developers. Does that mean I just come in at the first of the year and then I'll see you in March? Oh, that's... I'll work Monday till 10:15, and then I'll have the next nine working days off. See [00:20:30] you Monday again. So I should be able to do that work and then spend the other time helping the young people
get those experiences. You've gotta stab somebody in the neck with a pen. Is that the, the mantra out of the back of this? This guy... I, I think that's the name of this episode. Tell me I'm wrong. Tell me I'm wrong. The, the... so I, I think we're philosophically very much in agreement. I wonder, though... I think one of the things I see in the, in the scenario that you are describing is that the space to [00:21:00]
do the teaching, the space to allow people, the space to do the learning, gets taken as an efficiency. Yes. And closed down. Very well said. I like that. That's a great way of putting it. And, and it's organizational willpower. Mm, that's, that's right, that's right, that is required for us to say, hey, you know, Dave, it's okay. You've been here three months. No one's expected to know what's going on. Mm-hmm. Um, a lot of the stuff that makes me senior, for lack of a better way of phrasing it, is I'm old and [00:21:30] I've been there and I have historical context. Yeah. Now we see people on stage doing demos, showing AI generating stuff. I don't see enough demos of them asking AIs questions. Hmm.
I had a young person on my team corrupt their Git repository, and there was a carriage return/line feed issue, and it was a, you know, a Mac versus PC versus Linux kind of thing. Oh, I love, love a bit of char conversion. Come on, come on. Right. Yeah. So, so they... Marcel just got tense at that. They said, they [00:22:00] said, what's a carriage and why is it returning? Right: typewriter. So then I, like... well, first I said, may I? May I tell you? You gotta ask permission. I'm not gonna just launch into a lecture. Yeah. I said, all right, gimme the medium-sized version. So then I started talking about carriages and typewriters and ASCII and, like, the, the last 50 years of character encoding.
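[Editor's note: for anyone who hasn't been bitten by the line-ending issue behind that corrupted repo, it fits in a few lines of Python:]

```python
# Windows ends lines with carriage return + line feed; Unix-likes
# (Linux, modern macOS) use line feed alone.
windows_line = "hello\r\n"  # CR (0x0D) then LF (0x0A)
unix_line = "hello\n"       # LF only

print(windows_line.encode())  # b'hello\r\n'
print(unix_line.encode())     # b'hello\n'

# Git's core.autocrlf setting converts between the two on checkout and
# commit, which is exactly the kind of invisible behavior that bites a
# new engineer once and then never again.
```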
And they thought, how do you know that? I was there. Right. And you're gonna be there for this thing that we're doing right now. Like, when they're 50-plus, they're gonna know which model [00:22:30] to use, and they're gonna know the history of... remember when AI happened back in the, in the early noughties, or whatever we called it, you know? Uh-huh. That, that kind of stuff needs to happen.
We're not giving people space for that. So at Microsoft, we're trying to make those relationships more formal. That's cool. Where senior engineers can teach juniors. One of the things you mentioned, uh, earlier on in the conversation, I just wanted to go a little deeper on, because it is a, it is a conversation that's touched on, as whether it's real or not, or whether it's productive or not, is, is vibe coding. And [00:23:00] I, and you were mentioning just before we came on that you'd spent a bunch of time thinking about it and experiencing it. So first of all, frame up vibe coding for us, for those who might not know what it is.
No, who coined the term? Where did it come from? I for-, forgot. I, I wanna say Alexi. I can't remember the... I forgot. It's Alex-, Andre-... what's the name of the guy that did vibe coding? He came up with the term. Good use of an agent: Andrej Karpathy. That's it, yeah. Yeah. Sorry. Co-founder of OpenAI. I forgot. I keep thinking Alexi. [00:23:30] It's Andrej. So. Vibes are indescribable. It's like, hey man, it's just vibes. Right?
Yeah. It's kind of like the Dude in, uh, in The Big Lebowski film. Yes. Yeah.
Fantastic. So vibe coding into production is not a thing. Mm-hmm. That means if you're vibing and vibe coding, you're just talking to the machine and you don't look at the code, because the code doesn't matter. The, the result matters. And there's a [00:24:00] belief by some
that we will be able to vibe all the way into production. Right? Um, like, if you write C# today, it turns into an intermediate language that no one looks at, gets run through a runtime, and then executes. If you vibe code, you speak prose. Mm-hmm. You speak paragraphs. That English then gets compiled, transformed, mm-hmm, into React, C#, whatever, that you never look at. It then gets run by the JavaScript runtime or the [00:24:30] C# runtime. Right. And then, and if it just works, it works. Right. But the thing that's so funny is that people then started talking about prompt engineering. Well, you should be more specific in your prompt. Really, you should be really specific. Like, you should use certain keywords. Like, maybe we could formalize that and turn that into a programming language. I was gonna say, I can see where we're going. It'd be a bit like code, wouldn't it?
Right. So the idea is that, in any business or any computer problem, specificity matters, right? Um, there's an old programmer's joke that says, uh, I want you to go to the store, I want you to get bread, and if they have eggs, get 12. And then you come back with 12 loaves of bread, right? Mm-hmm. And you say, why did you come back with 12 loaves of bread? Hey, they had eggs. That's what you said.
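[Editor's note: the joke lands because English leaves the scope of "get 12" ambiguous, and code cannot. A literal Python rendering of the two readings:]

```python
def shop_as_intended(they_have_eggs: bool):
    # The speaker's reading: one loaf of bread, a dozen eggs if available.
    bread = 1
    eggs = 12 if they_have_eggs else 0
    return bread, eggs

def shop_as_a_programmer(they_have_eggs: bool):
    # The literal reading: "if they have eggs, get 12" binds to the bread.
    bread = 12 if they_have_eggs else 1
    eggs = 0
    return bread, eggs

print(shop_as_intended(True))      # (1, 12)
print(shop_as_a_programmer(True))  # (12, 0)
```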
Yeah. Right? So even in English, there is a lack of specificity. That would be an example, a callback to ambiguity. Can human judgment or agent judgment deal with that kind of ambiguity? [00:25:30] That's a challenge, right? So if I make something ambiguous when I'm vibing... if I say, write me a calculator app... well, most calculator apps are pretty unambiguous.
Mm-hmm. So I could do what's called a one-shot: make me a calendar app. Okay. Boom. Here's a calendar app... calculator app, rather. And it'll work, probably. It will become a statistical mean of the most average, the most mid calculator app. Right. But if I try to one-shot a company, [00:26:00] every single ambiguous, non-specific thing, they have to roll the dice. Yeah. You didn't mention the logo? Roll the dice. You didn't mention if you wanted it to work on mobile or not? Roll the dice. Mm-hmm. So every single thing you're not specific about... So then people started doing specification-driven development, where they vibe a markdown file, which is the specification, and then they take that, and then they feed that into an LLM, and then they say, maybe that's specific enough.
Right. Right. But nothing is specific [00:26:30] enough until you get to code. Hmm. And ultimately you have to know how the system works underneath, so that when something, some edge case, comes up, yeah, and it doesn't work... uh, I don't know, man. And then do you just have a million monkeys with a million typewriters try to solve that for you, or do you use expertise?
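[Editor's note: specification-driven development, as described here, is a pipeline: a prose spec in a markdown file fed to a model as the context for code generation. A minimal sketch of that shape; the generate function is a hypothetical stand-in for your model client, not a real library call:]

```python
from pathlib import Path

def generate(prompt: str) -> str:
    """Hypothetical model call; substitute your LLM client of choice."""
    raise NotImplementedError

def spec_driven_build(spec_path: str) -> str:
    # Step 1: the "vibed" markdown spec is the only source of truth.
    spec = Path(spec_path).read_text(encoding="utf-8")
    # Step 2: the model turns it into code. Everything the spec doesn't
    # pin down (logo, mobile support) is still a roll of the dice.
    return generate("Implement exactly this specification:\n\n" + spec)
```

[Which is Scott's point: the spec narrows the dice rolls but never eliminates them; only code is fully specific.]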
So another analogy that I like to use: I asked you earlier, before we started rolling, if your son knows how to drive stick shift. Yep. You said that he does. Yes. Can he change his tires? Can he change his own oil? [00:27:00] Uh, tires, maybe; oil, I'm not entirely convinced. We can go there, but he's learning about it. Yeah. So you can agree that there's value, and it changes your relationship with the car. Hmm. Yeah. But here we are in San Francisco, and there's these, these freaking Waymo cars driving around. Mm-hmm.
He's very... Rob is a very big fan of never getting in a Waymo. Really? Not a chance. I know too much. You know, you know. Oh, you, you know far too much, and I value my life.
But, but the point is the relationship with the car. If you built your [00:27:30] car in your uncle's garage and changed the oil and drove it, yeah, yeah, that's a Jedi building their own lightsaber, to mix my metaphors. Yeah. Yes. But with a Waymo, the Waymo breaks down, you get out of the Waymo, if it lets you out, and then you get another Waymo. And we, we have to learn more about what you know about this Waymo situation. I'm just, just saying. The point is, if you get an Uber, a, a human being with hopes and dreams and a mortgage and a car and insurance brings you a car. If the tire falls off that car, is it on the rider or is it on the [00:28:00] driver? Yeah. Well, no, I get outta the car and I say, good luck, go with God.
And then I get another Uber. Mm-hmm. Yeah. Okay. I think that it's important for us to know the abstraction layers. Yeah. Because, while it's delightful and it's amazing that, that people with mobility challenges can now drive and get around with Ubers, that's amazing, like, it's opening things up from an accessibility perspective, it's also problematic.
Yeah. So, if I understand the, the analogy correctly and relate it [00:28:30] back to vibe coding, you, you're talking about levels of abstraction, which give you, uh, a, a sense of being able to interact in a different way, but you've gotta drive that down. There's a sense of power. Yeah. Until there's not. Yes. And, and you need to be able to, uh, go deep. Somebody does go deep. Somebody does. Mm, I don't think everybody does. So you think vibe coding, despite the, maybe the misjudged sense of power, the lack of specificity, has a place, but there needs to be engineers that can go deeper.
I [00:29:00] feel very strongly, while recognizing my generation and where I came from and all those privileges, that while I can hire a guy to mow my lawn, it's important to mow my lawn occasionally. Yeah. Right, right. Uh, I can hire a guy to replace a toilet, but I felt very powerful when I did my own plumbing, mm-hmm, and swapped my toilet out at the house, you know what I mean? Mm-hmm. And, uh, you know, just even putting together IKEA shelves.
Yeah. Yeah. Okay. And your marriage is still in place. Those are, those are examples of not [00:29:30] floating above everything. Yeah, yeah, yeah. I don't think that you can vibe all business. So again, we get back to this idea of toil. Why are we here at Ignite? Why are we making these products? Why do vibe coding and GitHub Copilot exist? For the boring bits, right?
Right. For the tedious stuff. Text boxes over data, yeah, is not computer science. It's a solved problem. Yeah. And we can, we can do really amazing stuff with that. That's what I see all these agentic things for, because now I can do fun stuff. Mm. So, like, I'm [00:30:00] holding... we're on a podcast, right, and no one can see us. That's correct. But I'm holding an Altair, which is cool. Which is an Altair 8800, uh, mini. It's a, a simulator. For those who, who dunno what an Altair looks like, it's basically a, a sort of a, a six-inch by 18-inch, uh, box with switches and lights on the front. Right. Like a, a WOPR that's been shrunk, for the film buffs. It's a, it's a, it's a minicomputer, a microcomputer from 1975. And, um, I've been doing vibe [00:30:30] coding with the Altair. Hmm.
Because I wanna learn about it, but I also don't have a lot of time. Yeah. Yeah. So I can ask it to do things and I can learn assembler and I'm, I'm not just vibing, I'm saying, okay, what did we just do?
Hmm. I have an infinitely patient tutor to explain things to me, right, late at night. You know, it's 2:00 AM and I don't know what's going on, and I've got a code review. Ask, ask the agent. And they're not tired, they're not grumpy, they're not... You're absolutely [00:31:00] right. Right. Yeah. They're very polite. They're infinitely patient.
Well, I think on that note, uh, let, let's bring the conversation to a little bit of a conclusion. And, and we've been asking all our guests this week, because of the, the nature of the real world that we are living in, and you alluded to this a little bit when you were talking about the computer from Star Trek at the beginning, that the, the worlds of science fiction and filmic versions of our industry...
We're almost intersecting some of the sci-fi ideas at this point. I mean, [00:31:30] we haven't got to flying cars yet, but self-driving cars and agents talking back to us. So when, when you think of the sort of fictional representations of our industry, what films stand out to you as either being a high point or a low point of how we've been represented as an industry?
So, so Sandra Bullock and The Net would be a low point. No, that would be a high point. Oh yeah. No way. Double-click on, yes... that's a fantastic Pentagon. That was great. I think also, uh, Hackers, oh no, was also high, a high point. Uh, Johnny Mnemonic. Oh, Johnny Mnemonic, a high point? I don't know what your damage is. Um, I, I, I think, uh, even though the user interface is ridiculous, Minority Report. Hmm. Oh yes. Yeah, yeah, yeah. Felt, felt pre-, felt good at the time, but obviously... But let me, let me pick on a specific one. Yeah, go. That I think is important, because I want, I want you to understand my perspective on what humanistic AI should look like.
Arnold Schwarzenegger wakes up on Mars and he's [00:32:30] got a giant screen, you know, basically a television turned sideways. Alright. And it says, good morning, welcome. You know, uh, you know, I noticed that you urinated earlier and, uh, your protein was elevated. I'm gonna go ahead and call the doctor.
I looked in your fridge, your milk... you're out of milk. We're gonna go and buy you more milk. Like, we have the technology, mm-hmm, where if someone thought that was a good idea, that could be done. Yeah. Yeah. But I think we can all agree that, while it was fun in the eighties, it is the uncanny valley, [00:33:00] I agree, with an unbelievable amount of privacy concerns.
Mm-hmm. Creepiness. The uncanny valley is where you go, oh my God, that's amazing... oh, that's horrible. Oh wow.
There, there's something just so, so deeply invasive about that, 'cause effectively that kind of scenario is almost like notification culture gone mad. Right? It's like, it's the stuff I switch off on my phone. Exactly. 'Cause it drives me insane. So we don't wanna make those. Hmm. So the uncanny valley is when, you know, that's kind of creepy. You [00:33:30] know, uh, someone's like, you know, I wanna put a pin on my shirt that's an AI and it sees everything. Yeah. And, and we vote with our feet and we reject that. Mm-hmm. Yeah.
It's like, they're like the glasses that film and record. But five minutes ago, I very naturally asked Siri to answer a question that I wanted to know. Hmm. And it gave me the answer. Yep. And I can also do things like, what's my blood sugar? One oh four. Hmm. Well, my blood sugar's 104. Right. But, like, that's highly valuable. Yeah. Yeah. And [00:34:00] you know, I could also say, remind me when I get to Fred Meyer to buy milk. Okay, I added it. So now it actually is geofenced, so that when I get to that store, oh, that's good, it will pop up and say... yeah, that's good... get milk. Yeah. Yeah. Or I could say, remind me when I get home.
Like those are little magical moments. Mm. So if I can say, while I'm brushing my teeth, Hey, are there any important emails that I've forgotten? Oh, you've got tickets to whatever. Oh, shoot, I gotta reschedule that. Hmm. That's delightful. But if it became [00:34:30] creepy, I'm gonna reject it.
Feels quite playful, like, I've been hearing you talk now and it's quite... I'm quite playful. Yeah, but it's also, for developers, what do you think? Is that a skill? How do you learn to be more playful?
Because every time you start writing something, you need to go, why? Hmm. Why am I making this thing? Hmm. Am I doing it for the people, but not with the frustration? Like, why, why am I doing... like, do you know? It's... why? Curiosity, curiosity, enthusiasm. Like, uh, I talk about it... uh, this is a small plug: I did a TED Talk. You [00:35:00] can go to hanselman.com and watch my TED Talk. And I talked about, in 1984, when I got my Commodore 64. Oh, it was the best-selling personal computer of all time, I'm aware. Yeah. Yeah. And it's back, commodore.net, 299. Yes, it is. You can buy it at the same price. Um, seriously though, that was like, I can make anything. Hmm. So I'm not looking at AIs and agents as a, here's how I can 10x this and 10x that, and capitalism this, capitalism that. I'm saying, what delightful things can I [00:35:30] do to make my job, my day, my experience, my learning
more fun, more delightful? That's what I wanna make. That's what I think we're trying to make. That's why I talk to GitHub Copilot. I did a 45-minute vibe coding session with no cuts, no edits on my YouTube that you can watch. I did not open a text editor. I did the entire thing with my voice. Did it work? It absolutely worked. Let's go back to the notion of, of fun and AI and where you were going with the filmic reference. So go, go a [00:36:00] bit deeper on how you think AI is represented fictionally at the moment, and whether you think it's getting us somewhere closer to... or it's, it's still kind of informing what we're doing.
So every time an AI is presented in film in a problematic way, it's because it has been anthropomorphized. Yeah. It's 'cause we've pretended that it's a person. Right. But the problem is, if we name them and give them gender and voices and have, have weird pauses, and they breathe, yeah, you [00:36:30] know, then we're gonna start treating it like a person.
Yeah, that's right. Like HAL in 2001, when that popped up, wasn't it? Exactly. But at the same time, I don't want my kids telling Alexa to turn off the lights without saying please first. Right. Yeah.
I think that's one of the most interesting parts. If you interact more with technology but you do not treat it as a human, what it ends up doing in reality is we might forget to say, hello, how are you doing? Exactly. So how do you see that? It's learned habits, isn't it? Gee, well, so, well, is that, is that [00:37:00] though, is that us trying to remember, like, what it is to be just a good human to another human, and therefore not losing that trait because you're speaking to the technology so much?
Or is it, are you doing it so the technology doesn't hate you once it becomes self-aware and starts to hunt you down? What do you think? No, for me it's more... I, I think we're all... we should be more, uh, empathetic towards each other anyway, without the tech, although... but the tech can actually help us do that. [00:37:30] Yeah. So if we ingrain the culture that we would like, we could actually have tech help us do that. But that's different than what you're saying, Scott. Well, I, I am saying... I acknowledge my hypocrisy, because I'm being, I'm being hypocritical here. I am being two-faced. I'm saying don't anthropomorphize it, but also don't be rude. It's not, it's not a person. But I'm still going to say please, and I acknowledge that. But you're saying please to, like... let's say, would you say please to your computer? I'm saying please to myself. Okay. Okay. [00:38:00]
And the, the hypocrisy is: it's not a person, I'm not gonna name it, I'm not gonna call it Jeff and say, hey Jeff, how's it going? It's not my friend. I'm gonna give it tedious work, like parsing CSV files. But for me to remember that I'm a nice person, I am gonna say, hey, please, Copilot, could you do this for me? Yeah, thanks. Good job. I appreciate your work. And you got the standard capitalist response to that, didn't you? So the capitalist thing rolled over and told us to stop saying please and thank you to AI, 'cause it was costing us tens of millions of pounds a day. And it's like, no, I'm gonna say please. I also reject that: [00:38:30] they can cache my please. Yeah, yeah, yeah. Put a little Redis on that. And, and it, it's funny, the way that you've been talking about agents, because I've gone on a, a sort of journey thinking about this myself.
I've got to a very similar point to you, which is, let, let's think about what these things are and the fact that it's effectively code running. Mm-hmm. Um, that, you know, they may have to pay taxes or we might have to pay license fees for them if we decide to [00:39:00] invent the world in which we've got anthropomorphized agents.
And, and in my head, and, uh, I'd value your thoughts on this, in my head, things like proper hybrid organizations become interesting not when you've automated sections of process using agents in the way you describe. It becomes true when those agents become conscious. And I wonder, this whole thing of, like, do we... are we anywhere near on a road to that? He lost me at 'when they become [00:39:30] conscious.'
I don't think next token prediction, yeah, gets us consciousness. Yeah, exactly. Yeah. I agree. I agree. Right? Like, uh, I do this demo, uh, where I say, it's a beautiful day, let's go to the... mm. And what's the correct answer, Robert? What's the... It's a beautiful day, let's go to the park. Park, park, park. See, see. Go to the beach. Go to Chipotle. Right. So two people here said park, one said beach. What's the right answer? Well, those are the right answers for you, and that's the right answer for her. [00:40:00] Yeah. Context. It's not a fact machine. Right? It's a next token predictor. Yeah, exactly. So then the question is: what I'm about to say, is that the statistically most likely next token for me to say? Am I just a statistical model of the last 50 years of Hanselman?
Right. Maybe this is a model, and somebody trained it on my podcast and a thousand episodes of my podcast. Yeah. And now I'm just saying the obvious next thing in, in, in one of an infinite number of, of simulations. Humanity is more complicated than next token prediction. Mm, mm. So, no, I don't think they're gonna [00:40:30] be conscious.
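[Editor's note: Scott's "beautiful day" demo is the mechanism in miniature: the model ranks candidate next tokens given the context, and park versus beach versus Chipotle is just whose history it was trained on. A toy version with invented counts standing in for a real model's learned weights:]

```python
from collections import Counter

# Invented usage counts for "It's a beautiful day. Let's go to the ___".
# A real LLM derives this ranking from learned weights, not a lookup table.
history = Counter({"park": 7, "beach": 2, "chipotle": 1})
total = sum(history.values())

distribution = {token: count / total for token, count in history.items()}
print(distribution)                             # {'park': 0.7, 'beach': 0.2, ...}
print(max(distribution, key=distribution.get))  # 'park', for these counts

# Train on a beach-lover's history and "beach" wins instead: there is no
# fact of the matter, only the statistically likely next continuation.
```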
I think that they, they're no more conscious than your aunt's parrot. Right, right, right. Which is weird when it speaks whatever language it speaks. Yeah. But, um, we should still be nice to it. We should have empathy towards it and towards others. And so the technical path that we're on, even though we passed the Turing test some time ago... the technical path we're on is not a path to, you know, kind of AGI, in your mind? I don't think so, but I also don't have a PhD in that [00:41:00] stuff. Right, right. And if anybody here does, I would encourage them to... no, there's a, there's a big debate about, if it's perceptively intelligent, and it's a box, and I ask it a question, it gives me back the right answer, and I can query it to a level where I, I can't defeat it... but there's a horse on TikTok that can count. Mm-hmm.
Well, yeah, but no, it's not far before it becomes perceptibly unintelligent. Whereas this thing about what you perceive and what is actually going on in the black box is a big debate that's going on: if I, if I ask it and it comes back and I believe it, I've perceived it to be intelligent. But [00:41:30] ELIZA... that runs on ELIZA. If you've ever... you're familiar with ELIZA, the original AI that was done as a psychologist? No. Go. Right? It was not the first... the very first... it was a woman, actually. Yeah. Yeah. Right. So ELIZA was a... tell me about... it was a therapist. Yeah. It was a therapist. Right. Right. Tell me about your mother. Right. It would, it would just ask open-ended, ambiguous questions. You know what fascinates me, that it's a woman? We've been talking about it as well, 'cause I've been to, uh, Women as Technology, mm-hmm, which is an, uh, a, a complete layout in a [00:42:00] museum where you see how women influenced technology and the other way around.
Yep. And it's fascinating that they started to put feminine elements in it. Yeah. Why is it a woman? Well, I appreciate that you call that out, 'cause I alluded to it briefly, but I think it's worth digging a little deeper. For, for a moment there, for a moment in technology time, we had Siri, Cortana, Alexa... we had all these women that we were shouting at in our homes to turn off the lights.
And that's [00:42:30] not okay. Sounds like home, right? Yeah, exactly. And that's not okay. It's not okay. I agree. Yeah. And simply saying please is one thing. And even though ChatGPT is a brand that got lucky... no, no marketing expert got in a room and said, you know, we're gonna call it ChatGPT. Yes, chat. How about Chat Generative Pre-trained Transformer? No: ChatGPT, right?
Which in French sounds like: cat, I farted. Right, right. Chat, j'ai pété. Um, [00:43:00] but, uh, true story: they just came up with that name, they put it out there, and it just became like Kleenex, or, like, it's just a word that means, you know, Xerox. Right. It's like they, they took a brand and it became a verb. Like, I Google with Bing. Yeah, right. I'm just Googling, right? I'm searching the web. And that degendered things. And then suddenly we... now we have Copilot and Gemini and ChatGPT, and I think that, without us consciously removing these yelling-at-women things, now we have ungendered [00:43:30] voices and ungendered agents. And I think that's a good thing. I think so. Yeah.
It is interesting, isn't it? Because we, we've talked on the, on the show to, uh, one of the, one of the guys that was in the room when things like Clippy and stuff like that were, were first introduced. And of course the concept at the time, which wasn't driven by a piece of tech that just happened to be named a, a certain thing... they were coming at it from a, hey, what would it be like if we invented a virtual assistant?
It's [00:44:00] almost like that age, maybe... we're past that age, do you think? Or do you think we're still getting pulled back into trying to create these, you know, kind of human-like assistants?
I don't think that we are far enough away from being cave people. Mm. Our evolution has been largely intellectual over the last couple of thousand years. Mm. But the cave people are still inside us. Right. We're just two paychecks away from chaos [00:44:30] and anarchy right now. Right, right. So we need to be really, like, clear about how, like, civilization is a couple of thousand years old, and we are just now domesticating animals and treating them in a certain way.
We don't yet have a word or a thing for the house, right, right, that you talk to and you say, hey, turn the, turn the, the temperature down, or turn on the TV. Like, we're still learning. Is that a pet? Is it a partner? Is it a friend? Mm, mm. And now we're seeing, you [00:45:00] know, people having potentially problematic relationships with them. Right? Right. That is showcasing challenges that they have with their humanity. Mm. That's not gonna be solved with technology. So I think there's a lot of space for sociologists, psychologists, non-technical people to help us explore this, this space. The UI for AI is something where a bunch of guys in the Bay Area are not gonna tell us the right way to do that. They are not trained for that. Yeah. Nor am I. It's almost evolutionary. [00:45:30]
It's... the Overton window is shifting, and it may not be shifting in the correct, in the correct ways. Right. But I am gonna keep pushing for tools that put the human in the loop, put the human first.
And every time I make something, I think to myself, did this make someone's life better? Yeah, that's my line in the sand.
Well, [00:46:00] on that note. We end every episode of this podcast by asking our guests what they're excited about doing next. And that could be something in your professional life, or it could be something in your personal life. So Scott, what are you excited about doing next?
What am I excited about doing next? This is gonna sound awful, because each morning I think about lunch, and how I'm gonna get to lunch, and the meetings that stand between me and lunch. That's entirely valid. You, you must enjoy lunch. I'm gonna, [00:46:30] I'm gonna go with: you're a lunch guy. I'm just saying lunch is very representative. It's like, you know how Wednesday is, like, hump day? The lunch is, the lunch is the middle part of the day.
So once you make it to lunch, it's all downhill. Uh-huh. So you, you walk uphill to lunch. I get very excited about whatever sandwich I'm having that day. Yeah. And then it's all downhill from there. I discovered at lunch the other day the snickerdoodle, which apparently is a thing in San Francisco. Oh, snickerdoodles are an American classic. Wonderful thing. You've been hiding it from us Europeans for so long. I don't understand why you've done this. So you, you guys are all having those, uh, those, uh, those, those [00:47:00] dusty Scottish butter... scotch, uh, uh, what are those little things that you have? What are those? What's the Scottish butter...? Uh, oh, uh, shortbread.
Short... shortbread. Yeah. Yeah, yeah. Short, short. You're just, like... you, you pretend to like those. I think there's a dry... I think there's a better, I think there's a better parallel to a snickerdoodle than a shortbread. It'd be more like a... sounds like a Jaffa cake. It'd be more of an apple thing, like pastries with apple.
And what are those things called, with a, with a cross-cut pastry on the top? Whatever, whatever that is. Robert's just gonna dump it on his pants, though.
It's true. You've remembered well. [00:47:30] Good, good, good throwback. Good throwback. Look, Scott, man, always, always a pleasure to talk to you. Have a great time at the rest of, uh, rest of Ignite, and hopefully we, we might see you next year. Absolutely. I guess it's a tradition now. Yeah, absolutely. We're locked in. Thank you all.
If you would like to discuss any of the issues on this week's show and how they might impact you and your business, please get in touch with us at cloudrealities@capgemini.com. We're all on LinkedIn and Bluesky. We'd love to hear from you, so feel free to connect, and send in any questions for the show to tackle.
And of course, please [00:48:00] rate and subscribe to our podcast. It really helps us improve the show. A huge thanks to Scott, our sound editing wizards Ben and Louis, our producer Marcel, and of course to all our listeners. See you in another reality next [00:48:30] week.