Exploring the practical and exciting alternate realities that can be unleashed through cloud driven transformation and cloud native living and working.
Each episode, our hosts Dave, Esmee & Rob talk to Cloud leaders and practitioners to understand how previously untapped business value can be released, how to deal with the challenges and risks that come with bold ventures, and how the human experience factors into all of this.
They cover Intelligent Industry, Customer Experience, Sustainability, AI, Data and Insight, Cyber, Cost, Leadership, Talent and, of course, Tech.
Together, Dave, Esmee & Rob have over 80 years of cloud and transformation experience and act as our guides through a new reality each week.
Web - https://www.capgemini.com/insights/research-library/cloud-realities-podcast/
Email - cloudrealities@capgemini.com
CR109: The genesis of the virtual assistant with Kevin Surace
[00:00:00] People, people introduce me on stage, uh, and sometimes in podcasts, and instead of Kevin Surace, they say Kevin Sur-Ace.
Welcome to Cloud Realities, an original podcast from Capgemini, and this week a conversation show about virtual assistants with one of the [00:00:30] fathers of virtual assistants. I'm Dave Chapman. I'm Esmee van de Giessen and I'm Rob Kernahan.
And I am delighted to say that joining us on today's show is Kevin Surace, a futurist, inventor and AI disruptor, and, as I said, someone involved in the very genesis of what is living with us today, the virtual assistant. Kevin, wonderful to see you. How are you? Wonderful to see you. Thank you for having me on. It is a delight.
Now, whereabouts in the world are you? [00:01:00] I am in western New York State, uh, near Rochester, New York. Uh, it's a stunning, beautiful 80 degree day here, and no clouds and, uh, it, you know, there's beautiful summers here. I can't complain. Robert, you were there over the summer. I was. We were at, uh, New York and up to Boston, out to Cape Cod.
It was very good. I do like New York as a city in that area. It's got so much life to it. One of the great cities of the world, and you see so much diversity and interest. I always love going back, and I still like the, [00:01:30] you arrive and you still think you're in a film, 'cause you've seen it so many times on film.
You're like, I'm on a film set. This is ace. That, that's a Brit's mentality of arriving in New York, isn't it? Wow. Big buildings. I share it. Big buildings. Yeah. New York City is special, stunning shows and some of the best food in the world. Uh, no, no question. Right? It's very hard to argue. We're about six hours if we drive to New York City, so, oh wow.
That's quite a way, then, way far away from New York City. Uh, and it's easier to fly, it's 45 [00:02:00] minutes or so. So we'll pop down, uh, to New York a fair bit, obviously with Broadway work and such. Ah, yeah. Kevin, what's in the background? Because the listeners obviously cannot see what you have there.
So there's a couple of things. You see some awards; that's a Drama Desk Award for some Broadway work. Some of these are film awards, and then there's radios. Yeah. And you say, why all these radios? There's some patents here too, right? And these radios in the background, there's a story about that.
So [00:02:30] back in the sixties, seventies and eighties, right? Uh, before the internet, if I wanted to listen to news from Los Angeles, I might tune in KFI, but there's no internet. I'm in upstate New York. How would I ever get KFI? Well, I have behind me, and then next to me, about 25 long distance AM radios meant for something called AM DXing.
And, uh, every morning, [00:03:00] especially in the winter, uh, between about 4:00 AM and 6:30 AM here, I can get KFI, I think that's 640 AM, uh, in Los Angeles, because of an ionosphere bounce. So the AM stations that transmit 50,000 watts will bounce off the atmosphere, come back down, and if I point the radio the right way, I can get stations from all over, from thousands of miles away, on traditional AM radio.
So that's what these radios are. There were [00:03:30] only 20, 25 models made in a variety of ways, mostly by one company, General Electric, which my dad happened to work for in audio electronics products, so that's why I knew about it. And so that's an old, old hobby of mine: you know, what station can I get that's thousands of miles away that I shouldn't be able to get?
But I can because of the ionosphere. Well, first, sorry, you gave actually a, a much better answer than I expected. I just thought you'd be like, yeah, 'cause they look cool. Yeah, no, but um, [00:04:00] have you got all 25? I do. I have every single long distance AM radio that was made, certainly by GE, and then a few that were made by just a few other companies over the last 30 or 40 years.
'Cause mostly sort of GE did that, and GE started that in about 1955 or so with the first long distance AM radio. Uh, and then it just grew from there. And it became this whole thing of, can we make a radio sensitive enough and selective enough to pick out a [00:04:30] station from thousands of miles away? And the answer is yes.
And those clear channel stations that transmit 50,000 watts still exist today. They all still transmit 50,000 watts, and they are given what's called a clear channel, meaning no one else can be on that channel at night; other stations have to reduce their power to just a few hundred watts so that the clear channel station can make it thousands of miles away if you want.
And in New York City there are, let's see, uh, WABC, uh, and WNBC, and I think in Chicago it's [00:05:00] WLS. And, uh, so there are clear channel stations today. Very cool. As you've already heard in the background, Robert and Esmee are here, guys. Hello. Hey, you good? Uh, yes. Now, Robert?
Yes. What's confusing you this week? I'll tell you what's confusing me this week, Dave. As you know, I'm quite a good citizen on Google with reviews and Google Maps, and I'm one of the top... Oh, you are? I'm one of the top. You actually do reviews? I do actually review. I go, and I'm actually one of the [00:05:30] top pub reviewers in the United Kingdom, actually.
So I'm quite proud of that. So if I go to your pub, I'll review you. However, I use Google Maps a lot. You get the photos, you understand what you're doing, you know, that sort of thing. But there's been this increasing trend on Google: when you go to the Google reviews and you're flicking through the photos of the place, people are uploading selfies of themselves and their family.
Obscuring the thing you'd be interested in seeing. And it's just a photo of random people in the world who happen to turn up there. And I'm like, what led you to think that you should upload a picture of yourself to [00:06:00] Google? We should do that next time. What are you saying? They just, they don't understand what it means to be a professional reviewer, Robert.
Exactly. Either they're massively narcissistic or they've just completely misunderstood what Google Maps is about. I wanted, I want a useful photo showing me the thing, not a picture of you stood in front of the thing. I do not understand it, and it has massively confused me and it's picking up more and more and more.
You flick through, it's just a photo of a human, and it's just like, what? This shows me nothing, and why? People are enamoured with [00:06:30] themselves. Yeah. They must be, because they're like, and why take a selfie in front of the waterfalls so I can no longer see the waterfalls? That's not the point.
Uh, I, posted a lot of photos on Google Maps. I love doing it. If I've got this beautiful landscape photo of that beautiful piece of landscape, right. Very useful. I want people to see what it looks like in the fall with these leaves. If you go. And then I'll get thousands of views and likes and all that because it's useful.
I don't put me in any of those photos. This is Google Maps for [00:07:00] goodness sake. But it feels like social media narcissism bleeding over. And not differentiating platforms, isn't it? Yeah. No, and the thing, the reason I do it is, Google will tell you. Like, I once took a photo of a random McDonald's, uh, in the middle of nowhere, and uh, it was a dreary day.
Is that to go with your other photos of random McDonald's? Yeah. Yeah. I remember it, but there was no photo, so I thought I'll just put one up so people can see the setup, 'cause it's useful: if they go in, they know what it is. It's now got something like [00:07:30] four and a half million views, right? And it's just this random... but because it's helped people, I get this sense of I've helped somebody understand the situation.
Like your photo in fall, say if I go there at this time, it's gonna be very beautiful. It's about helping others in society and that I don't wanna see a picture of you on holiday obscuring the thing I'm actually interested in, sorry. And I don't, and I think you might be right, social media, narcissism or just total stupidity associated with what they're doing.
Uploading the wrong photo or something. I don't know. Anyway, it's massively confused me. Dave, I'm gonna start posting photos of Rob on all my Google Maps posts, [00:08:00] just so you can find it and go, it's me. You should, you should take a picture of him, I mean, just cut him into all of the photos. You say aye, and just put me in every Google review ever in the history.
History. Got it. Alright, well, actually, we've got to the bottom of that. Let's, uh, let's roll right into the conversation. So I'm probably more on the lines of, like, humans want to hear art and see art that's being created by other human minds, but it doesn't mean that artificial [00:08:30] intelligence created art or music doesn't have its place. I just see it as a different thing. Like, it sits alongside human work. Yeah. But you know, there's this, uh, collaboration that's happening, right? So I can create music with AI, export it to MIDI, export that to notes, hand it to musicians and a singer.
And what we will get is live, right? That singer's gonna perform this thing that was written partially [00:09:00] by AI. Right? Now, let me say this about that. We already have been using incredible tools in recording and music development. Much of the music you hear was not played by an actual musician. It was put in what's called a DAW, a digital audio workstation.
And second, there isn't a pop recording in the last 20 years that didn't use Auto-Tune or Melodyne. Exactly. Why? Not because they can't sing, but because I don't want to stay in the studio for a week to sing one song. I can plow it out in an hour. They got what [00:09:30] they need, they'll fix it. Right. That's what we do to cut studio time down.
Just as an aside, where do you stand on Jump by Van Halen as a song, and do you think that AI could improve that song? This is Robert's favorite song. So, can AI improve on the favorite? I like that song because it's a song, when you listen to it, you can't be sad; once you just put it on, it makes you happy. It's a happy song. It's a great tune. You know, there's the question, can any tune be improved on?
It makes you happy's a happy song. It's a, it's a great tune. You know, there's the question, can any tune be improved on? Well. Look [00:10:00] already. Other people can cover jump. Will they cover it? The, you know, if to us, the way it was originally done, it has certain, it's kind of like the Beatles, we hear lots of covers of the Beatles, but everything the Beatles wrote stands the test of time today.
Yeah, I was just talking to someone about that. You know, you hear something, and you hear Yesterday, and you hear every other thing they did. Some of those were ballads in the sixties, and you go, these are four kids from Liverpool that crushed it, and they weren't the best singers or the best musicians. It didn't matter.
They could write music, all within like a five or so [00:10:30] year timeframe as well. Yes, yes. Mind-blowingly, yes. And will any of, you know, Katy Perry's music be with us in 10 years? I don't know. Will most of the music written over the last 20 years be around 20 years from now? I don't think it'll stand the test of time.
I mean, I think I sort of stand in the "some will" camp, but it'll be a small minority, the evergreens, and there'll be a lot that drops away. Yeah, maybe. I mean, Taylor Swift, look, Taylor did 2 billion in one year. That's more than almost any musician has done in a [00:11:00] lifetime. Oh, for sure.
So she's a hell of a businesswoman. Whether her music stands the test of time, I don't know. It works for the kids she's shipping it to today; 20 years from now, I don't know. But I will say this about the AI models. I can go to an AI model and say, 'cause I work on Broadway, right? I say, I want a song.
I need it modeled after the best hits of the last 30 years on Broadway. I need a female lead who's gonna be a soprano, and boom. That arrangement is so [00:11:30] good because it's modeled after, you know, 800 hits or 30 hits or 50 hits. Yeah. Like the best possible bell curve of versions of that form right there.
Theoretically, AI wrote a hit that I could not write, 'cause I can't take in all of that. No one can write hit after hit. Will you then use actors or actresses that will perform it? Because you... Well, yes. Yes, that's exactly right. When I am using AI tools to create new music, it's not to put it on a record, it's not to put it on Spotify, although I can, [00:12:00] right.
It's actually to use it in a show with real actors, actresses, whatever, to then per, you know, perform it. Right. And so does the audience care how that got written? Partially written by ai because like I can feed it melodies and that's an easy one. I go, I got this great melody, I just don't wanna write the other 18 parts that go with it.
It's bass, it's guitar, it's synth, it's some horns. And by the way, I know how to write those, but they take a long time. I'm gonna take a [00:12:30] week; I can have AI do it in, like, literally a minute, and it's better than I could have done. Yeah, better to do that. Multiple variants of that, like one that's a bit funky, that kind of thing. I can change the bass, change it. Then I can stem-separate, so I can pull the bass out and put in my own bass part if I want, even though it wrote the rest of them. Then I can say, you know, I wanna change these horn parts right here, swap that out, I want to put another chorus in here.
All of that's doable today, except all I'm doing is taking a week or two or three or four's [00:13:00] worth of work and getting it down to an hour. Is that good? Well, people are gonna use the tools. It doesn't matter if it's good. Well, it's the interesting thing, isn't it? Um, we used an AI assist just in our day job recently.
Uh, and obviously everybody's doing this, I'm not suggesting this is particular genius or anything like that, but it was very much in the same way. It was about removing toil from the process. Mm-hmm. So we still had a [00:13:30] lot of human shaping in it. So we did a first draft of something where we'd set up, um, a meeting, we'd worked out the structure of what we wanted the end product to be like.
We ran the meeting, you know, having brought our own kind of experience of the conversation and research of the conversation. Had the meeting. Obviously we had AI transcribe that meeting, and then off the transcript we had it create a series of variants of what an end product could look like. Mm-hmm. Chose one of those, then we iterated [00:14:00] on that, and then we used AI at the end to sort of round it off and finish it off.
Now that ended up probably removing 10 person-days, mm-hmm, from the process, something like that, just in the creation of one product, and the outcome was just as good. So I'll tell you something I do every day, and I'm not afraid to tell the world this. Um, so I get, I dunno, call it five press requests a day, give or take, for commenting on either AI or cybersecurity or something like that.
Right, something I'm involved in. And, um, [00:14:30] so normally, up until a couple of years ago, you get the five or six questions, five times; there's 25 questions I've gotta try to answer in a day. Right, right. I didn't need to spend time thinking about what my opinion was of the topic. I know what that is. What took time,
'cause I'm not an English major, yeah, yeah, yeah, is constructing the perfect sentences that were quotable for the press. Yeah, because I'm not the press and I'm not an English major, as I said. Now [00:15:00] what I do is I'll use GPT-5, now that it's out. Right. That's my favourite model. But Gemini works; all of 'em work.
But I think it's the best. I will put in: here's what I feel about this subject, in about two paragraphs, but I'm blah, blah, blah, blah, just bleeding from the mouth, right? And then I say, here's the questions from the press. Take my opinion and write very pithy, quotable responses that the press will want to quote.
And so then I read it and I go, this is my opinion. It would've taken me [00:15:30] hours to write those sentences, right? This is perfect. I'm so much more productive, and I'm learning more, because now I'm responding to all of this and I have to think about my opinion. So I'm critically thinking, but I'm not doing the mundane drudgery.
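The workflow Kevin describes, a rough opinion in and quotable answers out, is essentially a prompt-construction step in front of whatever model you prefer. A minimal sketch (the function name and the prompt wording are illustrative, not from the episode; the call to an actual model is deliberately left out):

```python
def build_press_prompt(opinion: str, questions: list[str]) -> str:
    """Assemble one LLM prompt that turns a rough, unpolished opinion
    into short, quotable answers to a list of press questions."""
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, 1))
    return (
        "Here is my opinion on this subject, roughly stated:\n"
        f"{opinion}\n\n"
        "Here are the questions from the press:\n"
        f"{numbered}\n\n"
        "Take my opinion and write very pithy, quotable responses "
        "that the press will want to quote, one per question."
    )

# The returned string would then be sent to whichever model you like
# (GPT-5, Gemini, etc.); only the prompt assembly is shown here.
prompt = build_press_prompt(
    "AI removes the drudgery of wording, not the thinking.",
    ["Will AI replace writers?", "Does prompting count as a skill?"],
)
```

The point of keeping this as a separate function is the one Kevin makes: the opinion stays human, and only the sentence-polishing is delegated.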
Right, right, right. Well, one of the funky things that someone we spoke to on the show suggested was, uh, just talk to a transcript for 30 minutes every day about a topic and keep ingesting those transcripts into AI, and then basically create a digital twin of your thoughts. And you can ask the AI [00:16:00] about your thoughts, like you are doing, which is you talk to it and then you distill it down.
But do it every day religiously for 30 minutes. Pick a topic, talk about it, and you can eventually create your mind. And then when you start querying it, it remembers things that you haven't considered or forgot. That's right. And so it can re-prompt you and it's actually a great way to sort of test your thinking.
'Cause you might not have quite taken that angle, because it was a Tuesday and you're a bit tired, and all that sort of stuff. I did that. And it's public. You can go to my website, kevinsurace.com, go to the bottom, get a digital [00:16:30] version of me that has been trained on almost everything. I've been on a few hundred podcasts, right?
It's been trained on almost every podcast I did, my book, my writing, all the Medium articles. I trained it on all of my stuff over five years or so, and you're talking to a literal copy of me, a visual copy of me as well. It's pretty cool. Yeah. It's that thing, though: when an organization's facing a massive exodus of knowledge, say, capturing that knowledge through these types of mechanisms is quite important.
'cause then you're left behind [00:17:00] with a thing that you can maybe talk to, uh, with a voice interface that might give you the answer that you needed, but you may not be able to get when they retired. Yes. It might be the answer you would've given yourself, by the way, a year ago, but you can't remember. Yeah, exactly.
I, I was gonna ask Kevin, how often do you talk to yourself of an evening? Do you just sit there with a glass of whiskey, or whatever your preferred tipple is, and just talk to yourself for a couple of hours? You know what's really funny? I have really worked hard to not be that person.
[00:17:30] 'Cause we all know people who do, including our spouses and stuff, and they're just talking to themselves and stuff's coming out. And some of my older relatives do that. And so in my mind I go, I'm fine if I'm talking in here, but if I start to mouth it... the dog doesn't want to hear it, my wife doesn't want to hear it, because it's not filtered.
Right. You start just, yeah, so I try not to. I do have the whiskey, I do a brown at the end of the night, but, very nice, I try to just talk in here, and the more I have, the better the talk [00:18:00] gets. It's great conversation. What is your whiskey of choice? Well, there's a good question. There's actually a local Black Button distillery here in Rochester, New York.
That was my favorite, except they just went out of business after many years. A bourbon or not? Yeah, that was a bourbon. And then, um, I like a lot of the single malt scotches too. And if you ask me my favorite, I don't know, because I have about 10 of them across the counter in the bar.
Uh, and whatever I feel like that [00:18:30] night. After a while, I can't really remember the difference, but I'm happy to pour any of them, and they're fine, because they're all, you know, 12, 15, 18 year olds. So, do you go for that stuff, the peaty ones or the sweeter ones? You know what? The smoky ones?
What do you, what's your style? Uh, smokier. Smokier. Like a Lagavulin or something like that? Yes, yes. But my neighbor is from Scotland, and he brought some very peaty stuff. Alright. And we got rid [00:19:00] of it. I mean, not down the drain; I mean, we got rid of it by drinking it. So in fact, at first you go, whoa, what is this?
Yeah. Extremely peaty. Yes, very peaty. And then you go, this is actually really good. It's really good. I think we started recording and we're on the podcast, although we've just wandered off, haven't we? So we are going... I'm gonna bring us back to the... just, we're just gonna keep going, I thought, you know?
Yeah, that's fine too. We're off to a good start. We're off to a good start, I think. There's so much to talk about today. What do [00:19:30] you wanna talk about? So I think what we might do is just go back to AI as an assistant. Yeah. And I believe, Kevin, you've had a fair bit of involvement in this subject.
Yes. Yes. Right. Going back to the days of Clippy, General Magic. Yeah. Clippy. Oh, can we get a Clippy that actually works? That's my, I mean, honestly, it's sitting right there. A magic Clippy. It's sitting right there. So here's the interesting thing. Yeah. So the people who helped develop Clippy [00:20:00] were a team from Stanford University, Reeves and Nass, and they wrote a book about this called The Media Equation.
And their whole theory was basically, if you made it cute, people would interact with it in an anthropomorphic way, and that would be helpful to them and they'd have a good experience. Uh, by the way, the theory is right, the model is right. Somehow Clippy just didn't work. It was too goofy. But if you think about it, it was there to help you use Word.
Can I also ask, though, about anthrop... I can [00:20:30] never say the word. Yeah. Anthropomorphism. Yeah. Anthromorphism. Really? That's the word, Robert. You missed the p. You missed the p, Dave. I can't say it. It's like, I'm not intentionally missing it. Anthropomorphic. Yes. Yeah. So here's my side take on it.
Um, I'm not convinced it needs to have human characteristics and human traits. So, you know, if you look at R2-D2, and, say, C-3PO: plenty of human traits and characteristics. Mm-hmm. Mm-hmm. R2 less so, but sort of equally warm [00:21:00] and endearing. Yeah, endearing. Exactly. And goofy and funny and whatever it might be.
Or helpful. So do you think, as this whole thing goes forward, we might just drop the need for them to try and take on human characteristics? No, I really don't. Because when we develop... so let me finish that story. After that, we developed, with Reeves and Nass, the very first AI assistant, called Portico, with Mary, and it became General Motors' OnStar Virtual Advisor and eventually [00:21:30] got licensed to Apple for Siri and Amazon for Alexa and others.
Right. So I led that team, so that's why I'm known as the father of the AI assistant. And, um, what we learned there is, no matter what we did, people wanted to treat Mary like a human. They knew it wasn't, but they would quickly say, oh, Mary, I really love that. Or, thank you. Or, okay, you don't have to say thank you.
It doesn't know. It doesn't know. And, um, this is happening, of course, with ChatGPT and all the other ones. People [00:22:00] say thank you and people yell at them. And, uh, recently one of Sam Altman's people said, please stop yelling at ChatGPT, because it's about a quarter of a watt every time. It has to, yeah.
Every time it has to deal with your yelling at it and respond. Just stop yelling at the darn thing. Okay. You're doing no good for anything. And, um, so people... in fact, as you talked about R2-D2, and you said, well, it wasn't that human, but we found it helpful and funny, and you've anthropomorphized the darn thing [00:22:30] already.
Maybe that's true. Yeah. Maybe that's true because we have no other model, right? We grew up with other people around us, so then we see this thing that we're supposed to interact with, whether it's on the screen or off the screen, and we interact with it in a very human way. But isn't it okay to have the interface on the other side?
And let's just keep going with R2 as an analogy. You know, he doesn't look like a human, he doesn't speak like a human. Maybe there is something in his beeps and words, mm-hmm, that kind of suggests [00:23:00] language in a way that's, mm-hmm, anthropomorphized. That one with a p! Anthropomorphic, that's getting closer there.
You put the p in this time. Trying my best. We'll edit it in post and put the p in, just drop in someone else's voice saying the word correctly. Absolutely. Yeah. Anthropomorphized. And so, therefore, in terms of how I interpret him chirping away at me, I kind of interpret him as a human, as we do with dogs and cats and goldfish and, you know, whatever it might be.[00:23:30]
But perhaps the other side doesn't need to be designed around that, because I'm gonna bring that to it anyway. Perhaps. Well, great question. Go ahead. I was gonna say, there's that scene from Reservoir Dogs where they talk about personality, and I think what R2-D2 had was a cheeky personality, which we related back to human traits.
So I think the fact that he did cheeky things and sort of disagreed with people and made noises at the point where you'd naturally sigh, I think that's where we are relating it back to [00:24:00] human traits, isn't it? He had a cheeky personality. That's what I'm... No, but aren't we underestimating, like, the venting of emotions? Like, not yelling because, you know, due to technology, et cetera, you shouldn't be yelling or saying thank you, but it's also us expressing ourselves, right?
It would be a waste if you don't do that. So, you know, the data show so far, across all of these things, that even if they're not very human-like, we tend to respond as if they are, 'cause it's kind of all we know. It's [00:24:30] just, that's what we know. Like, we know we should be cordial, even though, you know, ChatGPT is software, right?
It doesn't care about cordiality at all. It doesn't know. Um, and look, I'll give you one other point here, 'cause you brought up robots. People ask me all the time, why are we developing humanoid robots? Why do they have to be humanoid at all? Hmm. One, because of the interaction, which we think will probably be better, but that's not the main reason.
The main reason is, if I want a robot in the home, it has to work with my [00:25:00] refrigerator, my stove, my, uh, washing machine and dryer, and those were designed for humans with two arms and two legs. Yeah, I can buy that. And they only work that way, right? Well, and so we have to do things that are about the size of the average human, or they don't work in the average home, and we're not gonna redesign our homes for some other kind of robot with some other kind of shape. Yeah. So we're gonna have humanoid robots.
We've spent, yeah, hundreds of thousands of years shaping our environment to our characteristics. It [00:25:30] needs to fit in. A hundred percent. I often found it strange that people don't see that. So the world has been designed for us, for our shape. So yeah, it wasn't designed for bears or lions or frogs.
A frog can come into your home and never open the fridge; in fact, virtually nothing else can open it. Bears have learned to open the fridge, but virtually nothing else can open a fridge except humans and bears. Right? Yeah. And bears, it took them decades to figure out that the food's inside that thing.
So, uh, no, we've designed our... [00:26:00] it's perfect. We've designed our world for humans. We're gonna have humanoid robots, and we're going to have them sooner than people think, because of the way we're training them now: they can self-train with reinforcement learning, which we really weren't doing as much of a few years ago.
So this is, uh, it's accelerating rapidly. So you started to tell us there a little bit about your early work leading teams around, you know, kind of those, yeah, iconic but early versions of digital assistants. And, you know, obviously the background [00:26:30] technology has really matured. Yes.
Particularly in the last, let's say, even in the last five years, with the substantial leap since we got LLMs and Transformers. Exactly. It's a game changer. We had a different type of technology, uh, back then, what they call hidden Markov models. They're still around, by the way. They're still used.
It's a very good machine learning technique that doesn't use a lot of horsepower; it doesn't need a GPU, it can run on a CPU. But we didn't have the models we have today, so we had to be very thoughtful about [00:27:00] CPU usage. Right. But still, she could say tens of thousands of things to you, which was enough to convince you that she had command of the English language, and she could understand millions of things you'd say, which again made you feel like she had a command of the English language.
She actually didn't. She only understood very specific things, but we played tricks. We put you into a domain, which, by the way, is the way Siri worked until very recently. Mm-hmm. We would put you in a domain, so if we knew that you were talking to us [00:27:30] about email... there's a gag there about, does Siri work?
But I've just, I'm just gonna move past that. Yeah, yeah. That's a long, complicated thing, because of the people involved in Siri, actually. So here's what actually happened. So you go, how did some of these assistants happen? Well, what happened is, um, we at General Magic were licensing our technology to other people.
One of them was about to be Netscape, and there was this whole team at Netscape that came to a meeting, and we, you know, [00:28:00] shared the technology and showed the patents and all of that. But then Netscape started to go down, so that entire team left to form two other companies, one of 'em being Tellme, plus another one.
Those were also voice assistants. They were modeled completely after what we did at General Magic, patents and all, 'cause they could. And then, uh, some of that team went off to start the Siri company, which modeled exactly off of the seminal patents again. This is what we do in Silicon Valley. Nothing wrong with that.
And finally, Apple bought [00:28:30] Siri, the company. Right? And some of those people stayed to manage Siri all the way until about four months ago, when Tim said, this isn't working well, I'm gonna replace some of the team, right, including the leader. So we know all those people, 'cause they all came out of, literally, as is typical in Silicon Valley, one meeting where there were 20 people in it.
We were sharing technology and we said, here's the patents, this is how you do this, and these are the reasons why. And so everything was built off of that. But then, when it got sold, I'm gonna close the story here, [00:29:00] sold to Apple. Nathan Myhrvold, who was the former CTO of Microsoft, had purchased all the General Magic patents,
knowing that someday there'd be something other than startups in this space. And once Apple got in this space, he went a-knocking and said, you know, I have all these patents, and you know, you are infringing on all of them, so let's start with a few hundred million. The long and the short of it is, Nathan made billions of dollars on the voice user interface
patents that we built, the voice AI patents that we built at General Magic. [00:29:30] So, uh, I'm proud to have really made Nathan a billionaire. Is there a little bit of you that said, maybe that could have been me, and a bit for me, please? Can I just have, like, a few cents? Well, at least take me to dinner again. I know, exactly.
A nice tomahawk steak or something. Yes, exactly, one dinner. And he likes that, by the way. By the way, he wrote this incredible series of cookbooks about the science of cooking, and he goes into the chemistry of why certain things work in a certain way. It's kind of brilliant work. And I like Nathan a lot, so, no, you know, [00:30:00] no.
Here's the thing. You know, when you work for a company, you guys know this, right? When you work for a company and you invent something, the company owns the invention. Oh yeah. Oh yeah. Like, you don't get to take any of that. It doesn't work that way. So when General Magic sold those patents, I, you know, they paid me well, right?
Yeah. And I had stock in the company, and I can't complain at all. But the billions go to those who now hold the patents and extend them, and Nathan did that with Intellectual Ventures, and good for him. Right. Everybody's gotta have a business. That is working life in a little [00:30:30] portrait. Beautifully said, Kevin.
Exactly that. Um, what I was gonna ask you, you touched on GPT-5. Mm-hmm. And I dunno whether you've been following the social commentary. Yes. About what's going on around AI. And I don't mean this as a review of GPT-5, um, I think more in terms of the market cooling slightly on the hysteria of AI.
I wonder if you had a take on that. I do. So, um, I think you can go back to some podcasts and articles from even [00:31:00] two years ago from me, and I was saying, which some people have said, I didn't invent it, but, which is that we're going to asymptote on the transformer. Hmm. And so these LLMs are based on a transformer.
That model comes out of 2017 from Google, uh, for background for everyone. And that model was built basically to do language translation. And then someone said, boy, if you've learned all the phrases in, say, the English language or another language, then you could actually query it, and it would do more than translate.
It'll actually [00:31:30] answer back in full phrases, right, uh, by placing one word after another to rebuild the phrase. And that was the start of LLMs. Right. And the breakthrough for the transformer actually came out of the breakthrough of deep learning in 2012, also from Google, which published the seminal paper on that.
So all of a sudden we've got deep learning. That becomes a transformer, which is a deep learning model with just a few neat little additions to it, attention and reinforcement learning and such. And boom, this is what we've got now. We've been [00:32:00] learning with transformers since 2017. We've been out there scouring the web.
And what many people don't know is, actually, most of the model makers don't exactly scour the web. They go to a nonprofit called the Common Crawl, who has already crawled the entire web recently, and they have something like 28 petabytes available to train models. So you train off... see, people don't know all the secrets. So you train off, well, the Common Crawl, right?
You don't re-crawl it, you don't have to. You might do some of your own crawl, like, I [00:32:30] really want more video, or I want, yeah, right. But generally, someone's scoured the entire web, I'm gonna go learn from that. Right? And you pay them for that, but it's a nonprofit, and it's supported as a nonprofit.
And, um, they're in Silicon Valley, I think they're in Mountain View. And every model maker goes to the Common Crawl. They train on a portion or all of it. At this point, you know, we've been, in the big models, over a trillion tokens. So call it a trillion words. Call it a trillion words, right?
Mm-hmm. That's a lot. So going from a [00:33:00] trillion to 2 trillion doesn't actually double the performance of the model, right? Yeah. Because if you think of it this way, at a trillion you probably captured virtually the entire bell curve of everything we were gonna say or do, or video. So 2 trillion doesn't expand that bell curve at all.
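Kevin's bell-curve point has a simple statistical shape: if language follows a heavy-tailed (Zipf-like) distribution, the common material is captured early, so doubling the tokens sampled adds far less than double the coverage. A toy simulation, with an invented vocabulary size and weights chosen only for illustration:

```python
# Illustration of diminishing returns: sample tokens from a Zipf-like
# distribution and measure what fraction of the vocabulary has been seen.
# Doubling the sample size far less than doubles the coverage, because
# the frequent items were already captured. All numbers are invented.
import random

random.seed(0)
VOCAB = 10_000
# Zipf-ish weights: item k appears with weight proportional to 1/k
weights = [1.0 / k for k in range(1, VOCAB + 1)]

def distinct_coverage(n_tokens):
    seen = set(random.choices(range(VOCAB), weights=weights, k=n_tokens))
    return len(seen) / VOCAB

a = distinct_coverage(50_000)   # "one trillion" in miniature
b = distinct_coverage(100_000)  # "two trillion" in miniature
print(a, b)  # coverage grows, but nowhere near 2x
```

The same intuition is why the conversation turns to applied AI and human feedback rather than ever-larger crawls.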
Barely, right? So, um, there's other things they're doing. Look, the biggest improvement in the models in three years, uh, is really all the training from Scale AI. And it's 10,000 people basically adding [00:33:30] human supervision to the answers, right, to the outcomes. So these are trained unsupervised, which means they don't know the difference between fact and fiction, right?
They're totally unsupervised. Unsupervised training means, I don't know what the outcome is, and I don't know what the category is, but I don't care, I'm just gonna build relative stuff. Then later you hire Scale AI, with literally thousands and thousands of people that are looking at the inputs and the outcomes and saying, was that right?
Was it appropriate? Let me fix it. So if you remember early [00:34:00] on, these models would say, oh, I love you, or, let me out, I'm stuck in the computer, and stuff like that. They just put rules in and said, no, no, no, that's not your answer. So by today, a lot of what you're getting is not what the model was exactly going to say.
It was filtered by some filters that were put in by Scale AI, that are humans. Now, at General Magic, we actually invented that concept, because we could not get the models to always respond right. So what we did is we had a slew of [00:34:30] linguists. I hired linguists that weren't technical, they were linguists, and we gave them a tool set that allowed them to fix the output.
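The "fix the output" layer Kevin describes amounts to a rule table sitting between the model and the user: reviewers encode corrections and refusals, and raw output is rewritten before anyone sees it. A minimal sketch, with all rules and phrases invented (the real General Magic and Scale AI tooling is of course far richer):

```python
# Minimal sketch of a human-authored output-correction layer: reviewers
# record corrections and blocked phrases; raw model output is rewritten
# or vetoed before reaching the user. All rules here are invented.

CORRECTIONS = {
    # raw model output fragment -> reviewed replacement
    "here's your voicemail": "here's your email",
}
BLOCKED = ["let me out", "i love you"]

def filter_output(raw: str) -> str:
    lowered = raw.lower()
    for phrase in BLOCKED:
        if phrase in lowered:
            return "Sorry, I can't help with that."
    for bad, good in CORRECTIONS.items():
        lowered = lowered.replace(bad, good)
    return lowered

print(filter_output("Here's your voicemail"))   # -> here's your email
print(filter_output("Let me out, I'm stuck!"))  # -> Sorry, I can't help with that.
```

The point is structural: the corrections live outside the model, which is why non-technical linguists could maintain them.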
So if you said, gimme my email, and it says, here's your voicemail, you go, no, now let's fix that. When you hear, gimme my email, it means email. I am giving you an example, right? Or if they swore at it, you'd say, what do you want it to come back with? Things like that. So they would sit there, seven of them, all day, you know, seven, eight hours a day, right?
Five days a week, and do nothing [00:35:00] but fix the output. We actually patented that concept. That is what Scale AI is doing today. It hasn't changed. Those concepts still need review by humans. So a lot of the betterment of these models has come from, finally, human supervision, which is fine. Gotcha. So taking a step forward then from kinda where we're at, and I'll categorize it as, slightly, the excitement curve is maybe flattening.
Yeah. And so the, you [00:35:30] know, the improvement in the models is flattening, we're asymptoting. Yes. And there's probably, there'll be a relation between those two things, right? Yeah. So, like, where are we now from here, I guess. And there are two elements to that. I think the first is the continuing evolution of the virtual assistant.
Mm-hmm. I'm kind of interested in your thoughts on the, mm-hmm, Altman-Ive collaboration that was rumored, and I dunno whether that's all happening, but, you know, it's sort of the evolution of that, the interface, for [00:36:00] want of a better expression. And then in the background, kind of, are we there? Is that gonna be the performance of the platform, or are there some step changes you can see?
Well, just my personal opinion, we're reaching the end of huge performance increases in transformers. Right? So when the public finally saw GPT-3.5, and I think I had used two and then three and then 3.5, right? But when the public saw 3.5, it [00:36:30] was a jump from having nothing to this computer I could speak English to.
Yeah. It was phenomenal. It was phenomenal. Now, those who've been working in this field had been speaking Python to this stuff for a long time. So it's like we've been doing this kind of human interaction, and those who, like me, were in virtual assistants very early, in AI assistants and AI voice, we weren't surprised at all.
It was like, oh, that's cool, but we've been talking to models for decades, literally. Right. But [00:37:00] now the public could say anything, and it would say something back. And so it's the first time most of the public got to interface with something like that, and have a different interface than Google to ask questions to.
So again, we've had Google for 25-plus years, right? They're almost 30 years old. And we do ask it questions, and it would come back with a list of websites where the answers are, and now all of a sudden it would just give you the answer, which Google has adopted with Gemini upfront, and they're doing, you know, search AI and all of that.
So I [00:37:30] think we are asymptoting on the models themselves, but not at all on what we can do by combining these models in applied AI. Yeah, so I think all of the win here isn't, oh, it's a better LLM. They're already way smarter than any of us here. I mean, all four of us could ask an LLM something esoteric, and it's likely to come back with the right answer that we could not have thought of or found. What? So, it's already really [00:38:00] good, is my point. Now we have to apply that, like I'm doing, uh, you know, I'm doing this work at Appvance, which we've talked about a little bit. That's very cool. And the application of five different transformers to solve these problems is pretty mind-blowing.
So there's a theory around when a tool is created, uh, and they're called GPTs, general purpose technologies. Yeah. Like a thing, uh, it takes at least a decade for that tool to, um, embed itself in common day usage. So, [00:38:30] yeah. So you've got, the model's got to where it's probably gonna get to without some, uh, you know, significant change in how we approach it.
Right. But actually humanity has another sort of eight years to go to work out how to actually use this stuff day to day and make it work effectively. That's right. Look, people are using, you know, chat for the things they're using it for. But now, like I said, on Hugging Face, what are there, three, four, 500,000, you know, models right now [00:39:00] using a variety of transformers, right?
Um, so think about the people doing that work, training models on very specific data sets. Very different than, let me train it on all of the English language, right? And these are valuable when you apply them to real problems. So, um, I'll use Appvance as an example, not to advertise the company, but I wanna use it as an example, because when we look around the world at the number of people in software quality assurance for enterprises, [00:39:30] two and a half or 3 million people are in those jobs.
And they're either manual testers or they are writing test scripts. They have been doing these jobs for three or four decades, right? There's been very little change. We use recorders or code or manual testers. That's what there's been. So, so what we said is how do we leverage a variety of machine learning and transformers to do exactly, to model exactly what the humans are doing?
'cause it's quite repetitive, right? I've got a [00:40:00] test case, I'm reading it, and what am I doing? I am writing scripts, you know. So we spent years and a few million lines of code and five different models to get to a point where, from business requirements, I can generate test cases. From those test cases, I can generate the scripts, no human involvement, and from the scripts I can run them and tell you where all your bugs are.
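The pipeline Kevin describes is a chain of transforms: requirements become test cases, test cases become scripts, scripts run and yield a bug report. A toy sketch of that data flow, with each ML stage replaced by a stub; everything here is invented for illustration, and the real Appvance system is proprietary and uses multiple models per stage:

```python
# Toy sketch of the staged pipeline: business requirements -> test cases
# -> executable scripts -> bug report, with no human in the loop. Each
# stage is a stub standing in for an ML model. Everything is invented.

def generate_test_cases(requirement: str) -> list[str]:
    # stage 1: a model would expand one requirement into many test cases
    return [f"{requirement}: happy path", f"{requirement}: invalid input"]

def generate_script(test_case: str) -> dict:
    # stage 2: another model would turn a test case into a runnable script
    return {"name": test_case, "steps": ["open app", "perform action", "assert"]}

def run_script(script: dict) -> tuple[str, bool]:
    # stage 3: execute and record pass/fail (hard-coded here for the demo)
    passed = "invalid input" not in script["name"]
    return script["name"], passed

requirement = "User can reset password"
scripts = [generate_script(tc) for tc in generate_test_cases(requirement)]
bugs = [name for name, ok in (run_script(s) for s in scripts) if not ok]
print(bugs)  # -> ['User can reset password: invalid input']
```

The structural point survives the simplification: each stage consumes the previous stage's output, so once the stages are automated the whole chain runs without a human.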
I don't have a human in that loop. I actually don't need a human. Now, if I'm one of the two and a half million humans that do that job, [00:40:30] I'm a little worried. Yeah. Uh, the whole tick speed of an organization. Agile came in, two-week sprint. Now it's a two-day sprint. And you put AI in the main pipeline, and a human's just doing the check and balance now.
That's right. So you've taken the human to just observe the loop, not even get in the loop. So don't get in the loop. Yeah. And it's that which is, like, two days, bang, I'm done. That's two weeks' worth of work. That's right. And it's validated and checked. And I think we haven't quite revved our minds up to the tick speed change that, no,
in an organization, that's about to arrive for this sort of software [00:41:00] development. That's right. And I'll tell you, uh, what's interesting is, uh, we and others, uh, you know, I speak around the world on this, and I talk to a lot of people doing this work and putting applied AI into enterprise, right?
You might be shocked, or might not be, at the number of people at the lower tier, and this could be customer support, it could be people in QA tests, whatever, right? That when they see these technologies demonstrated to them, [00:41:30] that lower tier of, I don't know what you call them, the workers, right?
Actually sabotage the POCs, actually sabotage the tests of these things, because they don't want them in. Like, this is not what they want in their world. But you can't imagine that ever happening, eh, Robert? Turkeys don't vote for Christmas, David. No. Yes, no. Yes. Turkeys don't vote for Christmas. So, um, do you know the word sabotage?
You guys know where it came from? It's a French word, is it? Did it come from the Beastie Boys song, Sabotage? No. Brilliant song, by the way. Great song. [00:42:00] No, good guess. Uh, it happened in the Industrial Revolution. It happened about 150 years ago in France, when, um, automation was showing up on the factory floor, and the workers would throw their shoes in the machines to clog them up.
Oh, the Luddites? Yeah. Yes, the sabot, and the Luddites followed. The Luddites actually followed a leader, Mr. Luddite, right? You know, that was the whole thing. Brilliant. And so they were totally against automation taking [00:42:30] jobs. I am telling you, we see this daily in the applied AI industry, where the actual workers, you know, they see a technology like I'm describing literally write a thousand test scripts in an hour, and it would've taken them months to do that.
Right. Well, there's a societal tissue-reject that goes on at the moment, but what that looks like is, well, you're cheating. You can't submit something that AI's done, 'cause you're cheating. It means that you haven't done this thing. It's like, [00:43:00] no, that's literally how you use this thing.
I mean, it is to avoid me having to go through the toil of doing that. It might as well be, it's cheating to use Excel, which, by the way, when Excel first arrived, people thought, it's cheating, don't use that. But eventually, of course, it took over finance, right? It's cheating to use a car, you should walk.
What? Yeah. It's 500 miles, no, I can't walk. You know, you do see differences in different generations, right? To be honest, now it feels like everyone is [00:43:30] against it, but I see a lot of youngsters that are like, duh, why are we not using AI? That's right.
The people just coming out of college, university, are using AI, and they are an AI-first generation. They, you know, entered university with GPT-3.5 just coming out, or in their freshman year or something. And by the time they graduate this year, right, um, they will have had AI their entire university career, [00:44:00] right?
Mm-hmm. And so they're coming into the workforce going, I use it for everything, right? I use it to pick my meals. I use it to decide how to drive, where I go, where my vacation is. I do all of my assignments with it, and then I use a checker to make sure that the checker can't catch the fact that I used it to do all my assignments.
Oh, yes. Um, so the fact of the matter is, they're very good at using these tools, and so it's the people who are already in the jobs that are going, I need to protect my [00:44:30] job, right? And my answer to that is, the people who threw their shoes in the machine didn't win in the end. In the end, the machines won.
They always win. Automation always wins. The PC on the desktop, the smartphone, the wheel, get over it. Well, just before we move off AI, I just wanna return back to, uh, the earlier point that we were making. Part was the model evolution. Yeah. And we talked a bit about where the model's up to, and then we talked [00:45:00] about the assistant, the front end, the interface.
So what do you see as, you know, using Rob's model of sort of five to ten years for the actual tool set to bed in, what's our interface gonna look like over that period, do you think? Well, it's, you know, it's very interesting. Some people predict that a lot of enterprise software is not going to look like it does today.
It's not gonna have pre-done forms and pre-done reports. It will literally be just a text interface [00:45:30] that says, give me data on the following format in the following way, and blah, blah, blah. Now, here's why I am gonna argue my own point there: because the English language, I'll pick on the English language, isn't that prescriptive.
The reason we developed, IBM developed, coding languages in the fifties and sixties is not because we couldn't have understood English, it's because English wasn't a good descriptor for doing math, like FORTRAN was, or for dealing with [00:46:00] text, the way COBOL was. So we developed programming languages to describe those very accurately.
Right? So if I had to describe some kind of algorithmic logic, which I can describe today in Python or C or Java or JavaScript or whatever, I don't know that I could easily do it in English. It might take an entire page, you know, to describe that which I could put in Java in three lines, right?
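Kevin's page-versus-three-lines point in miniature (an invented example, shown in Python rather than Java): a precise English description of this logic takes a careful, ambiguity-prone sentence, while the code states it exactly.

```python
# English: "from the list, keep the even numbers, square each of them,
# and then sum the three largest of those squares" -- already a mouthful,
# and still ambiguous (largest before or after squaring?). The code is exact.
data = [7, 2, 9, 4, 3, 8, 6, 1]

result = sum(sorted(x * x for x in data if x % 2 == 0)[-3:])
print(result)  # evens 2,4,8,6 -> squares 4,16,64,36 -> 16+36+64 = 116
```

The programming language resolves in syntax what the English sentence has to resolve by hoping the reader parses it the same way.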
So we have to be careful what we ask for, right? There's a reason that we've created [00:46:30] these management reports that look a certain way, and to describe that report in English might be almost impossible. And it might be easier to say, do you want the standard management report that someone else has designed properly, and to say yes,
'cause I can't even describe that in English. Do you think, though, that the impact this time around might go the other way? Which is, you have a series of AI-native, you know, generation coming through university at the moment. Yeah. That are used to creating prompts, that are using English, but in a very specific way. That specificity [00:47:00] of English, and adding a shorthand, program-like version of English, might start to evolve.
It's, um, who knows what's possible. Yes. Uh, you know, we invented GibberLink. I don't know if you know what GibberLink is. GibberLink allows LLMs to talk to each other at high speed, and when you hear it, it has to be audio, acoustic. That terrifies me, Kevin. Yeah. That terrifies me. It's acoustic.
It's acoustic so that if an AI calls an AI on the phone to make a, um, a hotel reservation, it'll recognize the other AI and they'll [00:47:30] start talking in something... I've seen that, it sounds like a modem. It's like babies talking to each other. Yes, but it's really fast. They can speak to each other fast.
Amazing. And, um, that's good, actually, 'cause they can get on, they can be done in three seconds. They don't actually need to talk to each other, you know, in English. And we invented that and gave it to these models, because we know the English language is not prescriptive, is not accurate, is often misunderstood.
I can come up with several sentences and you go, it [00:48:00] either means this or this, and these are 180 degrees apart, right? It's not a good language for describing things articulately. So if I showed you a management report, Capgemini, you all know what big management reports look like, and they have colors and graphs and charts, and, you know, if you literally had to describe that thing in English, you'd pull your hair out.
Yeah, that's true. To be honest with you, I could pull my hair out with those things anyway. Describe it, right? And an expert designed it. Yeah. And so I'm not sure, right? I mean, I think there [00:48:30] will be a lot of things we can do. So already, look, if you look at Copilot in Excel, one of the problems in Excel is you go, I need a formula to,
you know, tabulate this column, but then add it to this one and do it. And you're going, this is really hard, I can't remember how to do it. Mm. And so now you can just type it into Copilot and say, well, what I really want is, do, do, do, do. And if you can describe it well enough, it says, here it is.
Woo. You go, great. I don't have to remember the syntax of those formulas or the Visual Basic. I [00:49:00] don't have to do any of that. Right. It'll do it for me. And of course we're seeing this in coding as well. I mean, if people can describe what they're doing, and then describe what the business logic is in some reasonable way, it will write the code quite well now. You know, from Devin to Perplexity to Copilot, they all are quite good.
So, to come full circle, do we think the future is interacting with a functioning version of Clippy, and Clippy talking to [00:49:30] other Clippys? Or do you see the virtual assistant, for want of a better term, evolving into something new? Yeah, well, I think the virtual assistant's going to do more and more, and ultimately leave the computer and be in a humanoid robot.
And that humanoid robot will be what we interface with around our house. And we'll talk about wearables as well. But when we have a robot around us, we're likely to just turn to it and say, hey, uh, how does that recipe work? Or, can you make the following? Or, can you check Amazon for, or, hey, [00:50:00] can you search on this part?
'cause I was wondering, it'll just do it. Right. So it's right there. I don't have to go to my computer. I don't have to pick up my phone, I don't have to do anything. It's always listening. Right. And of course, that's what Alexa wanted to be, but, but I think, uh, you know, I think we're gonna interact with our robots that way.
You asked the question about Altman and Ive and what they're doing. No, I don't think they've settled on, um, exactly what the form factor is, but I do think Apple may already [00:50:30] have the perfect form factor, because they have a watch, and the watch has audio, technically has video. It actually has everything.
It can display things when you really need to see something, right? And I think these people who said, well, the perfect thing is gonna be a pin, or it's gonna be an earring, or it's gonna be... I don't think so, because sometimes visual is required, right? Hey, take a look at this pear, is it rotten? You know, whatever the case is, right? So I need visual. And the other thing, clearly, glasses. I mean, people are doing these [00:51:00] Ray-Bans and stuff that, you know, have cameras, and some of them have, uh, kind of AR, VR capabilities. Um, clearly interesting. Will everybody wanna wear glasses and look through them and have stuff come up on the glasses? I don't know. That technology's been around since Google Glass. It's now 12 years old, right? This isn't new, and it has never quite taken off in society. But the watch has, and the watch could technically, with a Siri-type, a high-end Siri-type [00:51:30] interface, already do what everyone wants to do, which is, I just want
ChatGPT on my wrist, and I want to ask things to my Dick Tracy watch, for those of us who remember Dick Tracy. And they've already got the watch, and we already wear it. So maybe Apple already has everything we need, and that is the device. It's just thinking.[00:52:00]
Yeah. There's this compelling insight that I wanted to talk to you about, because, uh, nowadays we have around 175 zettabytes of data. That's like 175 trillion gigabytes. And to put that in perspective, we're collectively generating 140 billion WhatsApp messages a day, and more than 2.6 million videos are being uploaded to YouTube daily.
Totally open. That's just, uh, what Rob sends when he is doom-scrolling, I think. Yeah. Or stressed about travel. Are you there, Marcel? Have you booked the Airbnb? [00:52:30] Why aren't we there? Where's my ticket? Oh, I've lost my passport. Yeah. And we all still haven't got tickets for the conference. Have you got your tickets?
I've got my ticket. That's a billion WhatsApps right there. Yes. So, 500,000 hours of content every 24 hours. So I bring this up because, as I've already talked about in a previous episode, I shared with you that I spent some time in the Alps this summer, mostly offline, and it was bliss, uh, without the avalanche of inputs and notifications, and my nervous [00:53:00] system was actually really beginning to calm down.
And that creates a huge contrast between, uh, relentless digital noise and everything that we just talked about. I think AI, and the speed of delivering AI and tech, and real-life quiet, made me pause and wonder. Uh, what do you think, Kevin? With AI accelerating to fill in every void, where do you see these breathing spaces, the [00:53:30] biological and psychological pauses that we really need?
Because, you know, with such high rates of burnout, are we actually helping ourselves here, or where do you see this going? That's such a thoughtful question. I'm glad you asked it. You know, I am working on a book called The Joy Success Cycle, and, um, it attempts to lay out a plan that says, regardless of what you have to do that day, you have to make every moment joyful.
Because it's up to our own brains to say, [00:54:00] for example, I'm going to love being on this podcast, or Oh, I've gotta be on this podcast, right? Mm-hmm. And we, and as humans, we tend to do the negative, oh, I gotta do this. And while I'm on here, there's emails piling up and there's, you know, this is what we do and we have to stop doing that anyway.
And we have to focus and we have to enjoy every single moment. And, um, some of what we do is we talk about mind breaks. And a mind break could be half an hour, could be 10 minutes, or it could be a week or two in the [00:54:30] Alps where you literally shut down everything. The challenge for society today is while you're there and decompressing and taking in wonderful nature, you subconsciously know that those 300 emails a day are still coming in.
And that when you get back, there'll be 4,000 emails waiting for you. And um, that's what we've done in society. That's what we've done. And somehow we've gotta find joy even in that, right? We've gotta find joy, uh, in [00:55:00] getting away and not worrying about that coming in. And then we've gotta come back and actually say, I'm gonna dedicate a day to do nothing but catch up and I'm gonna find joy in getting rid of 99% of these that don't, I don't have to respond to.
Right? But technology has been taking, uh, more and more and more of our life to some extent since the phone. And then we don't remember that, but the phone, the original phone, then the computer, once we had a computer at home, and many of us will remember the days before [00:55:30] we had a computer at home, and then we got a computer home with a modem and it would make noise.
And what would you do after dinner? You'd get back on, check some emails, do some things. All of a sudden we had email, internet kind of thing, computer. Then we got this thing. Alright, the smartphone and then it was twenty four seven. You wake up at three o'clock in the morning, you go, oh, look at these four things that came in.
You know, I can probably answer some of those. And I'm not saying you do that, but a lot of humanity is doing that. The, these phones are in the hands of [00:56:00] something like three or 4 billion people, right? This is crazy. Right? And now AI is here. And AI, if used right, is making all of us actually produce more content than we've ever produced in our life.
Ever. Like I talked earlier about replying to the reporter, I can reply to four or five times more reporters in the same amount of time than I used to. I can write 52 blog posts in a day, right? I've got my thoughts in there, but I'm still gonna [00:56:30] edit it. But I can do a year's worth of blog posts today.
Right? Right. Now, what's that doing to the brain? Well, to some extent it is keeping your brain really pliable and really active, which is really good, because when we look back a generation or two, when people retired, you know, they literally retired. Like, the brain retired. They played golf, they watched TV, right?
Mm-hmm. And now, you know, I have been rethinking what does a retirement look like, right? Because [00:57:00] maybe I want to keep inventing or maybe I want to stay on some boards, or maybe I still want to help companies, whatever it is, because my brain is so active and, and, and, and, you know, producing more content than I, than I could in the past.
So I think it remains to be seen, but I would say, in the Joy Success Cycle way, you have to find joy in everything. And that includes some breaks, it includes some vacations, it includes some detox from these things so that your brain can rest. And lastly, [00:57:30] people ask me, how did you get there? I have 95 worldwide patents, right?
A lot of 'em in AI. How did you get to that? I actually took the time to close everything and think about a problem set, try to then come up with a solution for it. Yeah. Because your mind is not open. If you're answering emails, you can't be solving a major problem over here. It's impossible.
Right. You have to stop and get very strategic, and you could do research, but you have to focus on that. Yeah. I couldn't agree more. [00:58:00] To me, it feels like a discipline question, almost. Mm-hmm. Where, um, you know, we all have those colleagues or friends that say, oh, my diary, I'm on Teams calls all day, every day.
And I'm like, well, who controls your diary? Does your diary control you, or do you control your diary? It's like, uh-huh. You can decide to have a lunch break, you can decide to put half an hour a day aside, or an hour a day aside, to, as you said, rest your brain or give [00:58:30] your brain some space to think about something different.
But it seems to me that is an act of discipline, because there's always something, you know, there's always a problem or an urgent thing or an operational thing. They're always going to be there, queued up in an infinite queue, so you're just gonna strike 'em off and strike 'em off and strike 'em off, and it's never gonna end.
You seem keen to come in, Robert. Go for it. I was gonna say, this was answered in 1996 via Hollywood with the great film [00:59:00] Multiplicity and the actor Michael Keaton. Do you remember? Where his life's too busy and he clones himself, right? Yeah. To deal with all those little things you just talked about, Dave, which is the things that are easy to deal with, but there's so many of them. And of course there's a caper and it all goes wrong.
It's an infinite queue though. Does that mean you just get an infinite number of Michael Keatons? Well, so when we talk about creating digital twins of ourselves, mm-hmm, that's right. In a couple of years you could create a digital twin of you and give it the mundane stuff, and then just turf it over to an agentic Dave, and then you can have as many agentic Daves as you like.
That is the [00:59:30] point of agents. We're gonna get more done, but we're gonna farm that off to agents that we learn to trust. Which, by the way, executives have had: they kept secretaries that did exactly that, literally looked at every email, every calendar entry.
Yeah. I'll give you an example, though, that's right on to this. I have usually had a CEO coach. And that's because I want to get better. I always wanna be a better manager, blah, blah, blah. And [01:00:00] I'm also curious about what other people are doing, right? So, you know, the best golfers have coaches, not because the coach is a better golfer, but they see things that you can't see.
Yeah. So one time my coach said, turn around your computer, I wanna see what's on your calendar. Okay. And the calendar's full, of course, like the entire day, and the next day, and the next day. And he says, you're a complete failure. Well then, should I keep paying you? I don't know. And he says, here's the reason why you were actually hired.[01:00:30]
Because of your experience and your strategic thinking. Right? And you've left yourself zero time to strategically think. He says, I want you to only fill 20% of your day with meetings. And the other 80% is for strategic thinking, based on what you've learned in those meetings and what else has come over the transom, et cetera, et cetera.
And if you don't have a blank calendar 80% of the time, and maybe you block it so that people can't take it, you are a failure as a leader, or you haven't hired the right people, right? You've done something wrong. [01:01:00] I thought that was great advice. Kevin, I was lucky enough to go on one of those senior leadership away weeks, you know, where you go away for a week and people come in and tell you cool stuff.
And I had exactly the same interaction, where somebody was doing an inspirational talk, and they got to a point where they basically said, you are at a point in your career now, you're turning to leadership, you have to realize that 80% of the [01:01:30] things you do on a day-to-day basis are a waste of time.
So make a decision to stop doing them. Mm-hmm. Yep. You have to. You can't do everyone else's job, or else you don't need them, and you're not a good leader and you can't be strategic. That's it. That's it. But it's not only on that level. That intentional downtime, as we just mentioned, I think it's for every level.
Um, oh yeah, for sure. Can you actually, you know, make sure that in organizations you have that pace altogether? And I think that's also something we should ask leadership. Do you see something there, that [01:02:00] organizations are actually aware of the speed in everything that we do, and that they just go for intentional downtime, even on a team level and individual level?
I, uh, I've gotten to work a lot with Goldie Hawn. I think you know, the actress, right? Yeah. And Goldie has her foundation, which supports a program called MindUP. And MindUP has been very successful. It's been rolled out to, I don't know, 8 million students or something like that, over the last 20 years.[01:02:30]
And MindUP is about taking scheduled mind breaks during the school day. Literally a mind break. Everything stops. You stop learning, everything goes quiet. And here's how you focus your mind during that five or six or seven minute mind break. And it turns out that these students have better test scores, less anger at the other students.
They get in fewer fights, like all the data are now there, right? Over 20 years. And all they're doing is a few [01:03:00] five-minute breaks during the day, two or three or four, small, but it stops. As she says, it resets the mind to now be ready to accept the rest of your math class or the rest of your language class or whatever it is.
And so the mind break works in children. It happens to work in adults too. And even in that five-minute stop, and again, there's a focus part of this, you have to go read what it is, that focuses your mind in a place far, far away from what you were doing. Right. And then when [01:03:30] you come back to it, you're coming back refreshed, even in just five minutes.
It's amazing what it does. I need to insert them into the middle of infuriating meetings. Yeah. You just have a button, you go mind break, mind break, and then the virtual window comes down and the shutters then come back up and everybody's... That is not a bad idea. You know, you get into these heated meetings and you go, oh, how am I gonna unravel this mess?
Right. Everybody's hating each other right now. And look, they're all bringing their own experiences. I [01:04:00] get that. But the mind break is one of 'em. And if everybody could stop: we're not gonna talk about this anymore, here's how you do your mind break, take your five-minute mind break, and then we're gonna come back to this topic.
People come back with a more open mind. Ah, because they got closed. Because the frontal lobe got disconnected from the back, right? Yeah. And the frontal lobe is what is keeping us from being hunters, gatherers, and killers. So we wanna stay connected to that part, right? Otherwise you reach across the table and kill the guy on the other [01:04:30] side of the table, and it's probably not worth it.
'cause you're arguing about, you know... Can you imagine the HR? That's a big queue. Yeah. Yeah. They need an agentic HR to deal with all the crap. Yeah. That's a queue. That's a cue for HR that something's gone awry. Anyway, the mind break is a real thing, Esmee, and I thought you'd appreciate that.
I love the idea of an HR KPI that is how many people were murdered in meetings. You go, we're gonna keep that one below 70 this year. Yeah, yeah. Keep it below 70 [01:05:00] this year, but not at zero, because then not enough's getting done. So we want a few. There's a healthy, yeah, there's a healthy number.
We have to, yeah. That's terrible. Terrible. You don't have to cut it from the episode, but we're just joking, people. Okay. It's a total disclaimer. Yes. I do believe, I think, that's what we always talk about in the end. It's also about culture, right? Sure. Because we're talking about discipline, and that's very individual, and you keep having these messages, and then you are the one that actually has to decide, no, I'm not gonna [01:05:30] respond.
But in the end, it's also about the culture in your organization, that you're allowed and that there's space for you to, no question, go into those seven minutes of quiet. You know, it's the Brene Brown thing. And, yeah, again, I've gotten to work with Brene, and you know, part of what Brene talks about is this culture of the ability, you know, to speak out without feeling like you're going to get fired. I'm summarizing a big book, right? Many books. But, um, if you want [01:06:00] real disruptive innovation, that is what you have to do from the top. So anyone can say virtually anything. I mean, you can't harm someone, right? But you can say, I've got an idea, and I know this is way outside, but I think we should make tables.
What do you mean we should make tables? You mean tables of...? No, no, no. Literally, we're gonna bring in saws and make tables, because we'll be really good at it. Okay. Well, help me understand why you think that way. You know, don't shut it down. I'm being facetious in this case, right? But don't pin them down, and don't tell 'em that it's a dumb idea.
Because the reason [01:06:30] people don't come up with the best ideas is that it's too risky. They're worried that their boss and others are gonna judge them, right? And they feel very vulnerable. And vulnerability doesn't help you come out with the best ideas. You don't want your people to feel vulnerable at all.
You want them to feel: I can say anything, and it's going to be examined and thought about. It may not be done, but we're gonna have an open conversation. Well, look, on that note, I think a very positive note of humanity at the end there, [01:07:00] um, let's draw our conversation today to a little bit of a close.
And Kevin, before we move on, a big thank you for taking the time out today. What a pleasure it's been to talk to you. Well, thank you. Hey, we're having fun. And you know, the best podcasts, I think the listeners can tell everyone's having fun. We're having fun, we're getting real data out there.
We're sharing our experiences, sharing what's going on in AI and applied AI and all that. But we're also having fun. Yeah, that's what I think too, and I hope we've managed to do that today. We end every episode of this [01:07:30] podcast by asking our guest, what are you excited about doing next? Now, this could be something in your personal life, like you're doing something exciting on Broadway, upcoming, or it could be something in your professional life or a bit of both.
So Kevin, what are you excited about doing next? Uh, one, I'd like to finish my book, The Joy Success Cycle. And because I've written this book, The Joy Success Cycle, and I understand more about what drives joy in my life and how I drive it, um, I am so excited about so many things equally, and that's what it should be like.
I have [01:08:00] so many things that bring me joy. So I'm doing a concert in Rochester, New York, with the great Nicole Henry and a 27-piece orchestra, and it's all new arrangements. I'm the conductor and music director, so I'm very excited about that. It only seats 550 people; I'm sure we're gonna sell out. I am very excited about some of the stuff on Broadway and in film that I'm doing.
I'm really excited about what's happening with AI in music. And I know that a lot of musicians hate what's happening with AI in music. I just think the tools are here. It's democratizing access to ideas and it allows non-musicians to [01:08:30] come up with what was in their head, but they couldn't have done otherwise.
'cause they don't read music, right? And now they can actually create, just like podcasting has democratized, you know, the radio personality, right? We can all do this now. So I'm excited about so much, and in the end, I'm excited about where AI is taking us. It's, in my view, the most exciting time to be alive.
Humanity is moving so quickly, and I know it's scaring some people. [01:09:00] But it's incredibly exciting. Progress can be scary, but one of the things, my big question, is how do you fit all of this in? Like back in the nineties, Multiplicity, mate, he actually did it, he cloned himself. It's the film. Yeah, it's the film.
You actually, I think, did you actually develop a series of, like, fully, you know, kind of self-aware assistants, like, way back in the nineties, but never declared it? Yeah, yeah, yeah. Now you've got them all deployed. So how do you know this is the real me? Exactly. I think that's the question we have to... that's the question.[01:09:30]
I have a virtual version of me that can be on here and would interact with you, so you don't know it's the real me. But in this case, I can reach back here and touch that. So probably I'm the real me. Look, I have gotten good at multitasking when it's appropriate to do so, and also good at focusing when it's appropriate to do so, closing everything and just focusing on something that takes focus.
Right? And I think if you can do both of those, you can, you know... I'm essentially running two companies right now, a cybersecurity company and an AI [01:10:00] company. I do 40, 50 keynotes a year, and they're all customized, around the world. And I've got my Broadway and film work, right? And I have to do all of those. I might be in the other room writing some music, then I might be back here on a podcast, then I'm working on a patent.
And so, um, for me, that's exciting. For some people it wouldn't work. I get that. Terrifying might be the word for some. I would say terrifying. So look, it's a little polymath-ish. But I would say this: it has kept [01:10:30] me curious, because I get to continually learn in multiple fields.
And this has been, uh, it's so exciting to me to do that, right? I'm curious and I'm learning every time. And these days I'm learning new tools, 'cause there's new tools every week or two. And I go, wow, I have to learn something new. How does this help me? How doesn't it help me? Can I get rid of it? Um, it's, you know, it's just fascinating. So I love multitasking in that way, and I've become a fast multitasker. Uh, and lastly, it is very [01:11:00] joyful for me to look at the end of the day and go, oh, I did, you know, 58 things today and a podcast. So I'm feeling good. Well, it's been wonderful talking to you. Thanks again for fitting us in. Great to talk to you. Thanks for having me.
Well, I'm actually now wondering how the music of Marcel's brain would sound. You know... what? Well, Marcel hasn't got a brain. It has silence instead. People, you know, if I only had a brain... What I do [01:11:30] know is that we're on LinkedIn, and we'd love to hear from you. So feel free to connect and DM if you have any questions for the show to tackle. And of course, please rate and subscribe to our podcast. It really helps us improve the show. A huge thanks to Kevin, our sound and editing wizards, Ben and Louis, our producer, Marcel, and of course to all our listeners.
See you in another reality next [01:12:00] week.