Exploring the practical and exciting alternate realities that can be unleashed through cloud-driven transformation and cloud-native living and working.
Each episode, our hosts Dave, Esmee & Rob talk to cloud leaders and practitioners to understand how previously untapped business value can be released, how to deal with the challenges and risks that come with bold ventures, and how human experience factors into all of this.
They cover Intelligent Industry, Customer Experience, Sustainability, AI, Data and Insight, Cyber, Cost, Leadership, Talent and, of course, Tech.
Together, Dave, Esmee & Rob have over 80 years of cloud and transformation experience and act as our guides through a new reality each week.
Web - https://www.capgemini.com/insights/research-library/cloud-realities-podcast/
Email - Podcasts.cor@capgemini.com
CR95 Anders Indset: The real world - where are we with AGI and Quantum?
[00:00:00] I'm in Northern Berlin at the moment, and, uh, he's just taking a break. He's going back to the club in a couple of minutes, probably. You know, they got all that. He has to come out of the dark room for this. He should, he should be talking about his consciousness right now.
Welcome to Cloud Realities, an original podcast from Capgemini, and this week, a conversation show about the gap between humanity and AI. I'm Dave Chapman, I'm Esmee van de Giessen, and I'm Rob Kernahan, and I am delighted to say that joining us for today's show... and let's call it an Easter special, shall we? Should we call it an Easter special? Easter special? Are there any eggs involved, Dave? I'm only happy about it, you know, the whole theme of Easter. Take all that away; I'm just happy with the chocolate.
Yeah. Do you do an egg hunt, Rob, or do you deliver? Oh, I don't have that much energy, Dave. Easy. [00:01:00] I might watch others do it, but, you know, I think the days are past where I get excited about an egg hunt. I just like it to be... you did say excited. Oh, excited. I missed out there, didn't I? Es, do you like an Easter egg hunt, or do you like your chocolate delivered to you? No, I just like, yeah, the results. The chocolate. Lots of chocolate. Yeah.
I'm old enough and mature enough to know the result is the experience I'm actually seeking.
Just skip to the end. Yeah, skip to the end part. That's the bit I'm really interested in.
Well, look, bottom line up front first, that's what I'm after. To talk about elevated topics like this, I'm really, really glad to say that we have Anders Indset with us today. He's returning to the show. Very delighted about it.
One of our most talked-about episodes was when Anders previously joined us to talk about, uh, the simulation hypothesis. I'm delighted to say he's joined us again today. Anders is a philosopher and author of books like Ex Machina: The God Experiment and, most recently, and I think just about to come out, The Singularity Paradox: Bridging the Gap Between Humanity and AI.
[00:02:00] Anders, lovely to see you. How are you today?
Thank you so much. Great to be with you. Wonderful to be back on your show.
Now, do you enjoy the egg hunt or just the pure consumption of the egg?
Well, have the cake and eat it too. We have to figure out how to, to kind of sort of have all the eggs, all the time, right? No, it's a wonderful experience. The kids love it. Uh, I've spent many years digging those Easter eggs down into the snow of Norway, hiding them beautifully at five o'clock in the morning at the cabin. So I love it. Great experience. Beautiful.
I mean, you lost me there at five o'clock in the morning. Outside in the dark, digging holes in the snow and burying anything... actually, you lost me at that.
The thing with that, based on that ritual: you have to get up before them. You have to figure out how to get out, you have to hide it, you have to make all the maps of where to find it, and when they get up, and if they wake up early and excited, you have to be up earlier. So it was a big part of [00:03:00] my, my dad chores, to do that. And, uh, ah, yeah, the cabin has always been a special place in our culture, in my family. So that's always been a big part, with huge Easter eggs dug into some kind of, uh, snow holes or whatever. So, yeah.
Did you ever get, uh, Easter eggs that were never found, and then the snow melts, and then there's these really depressed-looking eggs just sat on the road or something?
I wouldn't have survived that part of the week at the cabin if my daughters hadn't, uh, found the eggs. And, uh, literally, they carry a lot of chocolate, so I was interested in finding 'em myself, right? So, yeah. Yeah, yeah. Well, who doesn't like chocolate anyway? So, Robert, what's confusing you this week?
So, I was reviewing some of our episodes, putting a security playlist together, Dave, on Spotify. Very nice. You can check it out. And I was re-listening to a few of the parts... you know the AI episodes we did, where the AI led the show for a bit? Yeah, yeah. It played a clip, and it was about hackers starting to create saleable products for you to be able to go and hack.
So they were creating a business model behind hacking, [00:04:00] and they had calculations about who to attack and all that sort of stuff. That got me thinking there: they're building assets and products, basically. So they're getting into that commercialized world of, uh, hacking. And I wondered, do they actually have to tackle legacy?
Right? Mm-hmm. And do they have to maintain this now so that they can keep selling it, or do they just throw it away and start again? And how are they actually operating? And are all the fallacies that hurt you when you have to deal with it affecting them? Or is it something different? 'Cause it came from chaos, like people on the keyboard banging away, and over the last 10, 15 years we've seen it commercialized.
Yeah. And I'm wondering if all the things that come with commercialization have kicked in or not. So, broadly, I was confused about it. Are they actually having meetings about, oh no, we need to patch, upgrade that, and put that new version in there, and all this sort of stuff? I mean, very mundane calls about this type of thing.
I mean, I'm gonna say, I'm gonna play the yes card on that, because if you're in a situation where you are, you know, creating a product, irrespective of whether that product's a dark web [00:05:00] product or, you know, normal internet, then you've got, you know, cost of development. Yeah, yeah.
You've got, you know, cost of release, cost of maintenance, all of that sort of stuff. So you might be less concerned about, say, you know, operational stability in a traditional sense, but you might be concerned about, like, you know, the efficacy of the actual virus working, or, you know, whatever it might be.
Yeah. So you want to make sure the customer satisfaction is there: you were able to hack most effectively with my product, I'm the best on the dark web, so you wanna buy from me. And are you actually doing... I wonder if you do, um, user research around it, and go out and get feedback from the users, and you have a product backlog that develops things?
I dunno, but it just struck me: they certainly do. They've created a commercial model around it, so all the reality of having that type of world must have kicked in. And I wonder if that's a very frustrating thing, 'cause they're probably not in it for that. But, uh, yeah, who knows?
Well, on that note, [00:06:00] Es, what do you think about that?
Yeah, I'm a bit puzzled. I think it's the same: you see a lot of people and generations go back to, like, phones without, you know, the capability of going on the internet. So maybe it's just the fun of hacking it. Maybe it's a niche market, to be honest, but there still will be a group, I think, that, you know, will continue to use this and do this.
But isn't this a temporary, uh, problem, though? I mean, I think, when you put algorithms to optimize, they will do this. And, uh, you know, I'm looking at post-quantum-safe cryptography. Uh, the only path forward is to figure out how to tackle these, uh, challenges by applying mechanisms where humans have incentives not to do it.
I mean, there is a very banal, very, very simple thought around this: if the developers of technologies became responsible for their products in terms of insurance, you would have insane insurance policies, and therefore it would be in their interest [00:07:00] to tackle these challenges. And on the other hand, you know, uh, if the incentive is for, um, people to have insurance to, um, tackle this, you get kind of the dynamism of, um, creating products that have some kind of regulation built in.
Because the scenario, Rob, that you talk about... it's a problem right now, but once the AI enhances, it's just, uh, something that we will not even figure out. So we need to figure out some kind of control mechanism to avoid this. Is it on-chain, um, validation at some point? Is it some kind of quantum state of security? Uh, is it some kind of regulation?
We have to figure this out, because I think your confusion, uh, if you think it through, and I'm just playing with it, is gonna skyrocket into some kind of very, very powerful, uh, force.
Yeah.
Anders, can you come back every week and help us out? Yeah, it'd be nice.
That was quite useful. Someone actually answered the question. [00:08:00] Or maybe just run therapy sessions, so that I can come with the confusions and you help me walk through them, and I go, you solved it again. That's a new podcast right there, you know.
Alright then, let's get on with the show for today. For the benefit of the listeners that kind of haven't engaged with your work, particularly Ex Machina and The Singularity Paradox, maybe just set it out a little bit for us. And if there's a through line between the books, you know, clue us in a bit.
Yeah, sure, Dave. Um, I mean, my background obviously comes from business, and, uh, my fascination with technology has been the driving force forever. I see the economy kind of, sort of, as the operating system of society. So when I started to play with this... I wrote a book back in 2019, and I'm in the midst of rewriting this book now, titled The Quantum Economy.
Hmm.
Uh, and that was exactly that topic of playing with quantum and the economy... you know, what do you mean? Uh, and, um, I saw that massive [00:09:00] transition: the old economy and the new economy were both paradigms where we tamed technology to our advantage. And then we started to hand over the authority to algorithms and, uh, the merchants of AI.
And now the transition to quantum, basically AI on steroids, is basically letting go of that authority, and it's a completely new way of looking at how to build our operating system. Um, so I was very interested in that, and it wasn't until I challenged my quantum physicist colleagues at one of these, um... mm-hmm...
quintessential bars in Vienna. Mm. A night out, uh, 12 of them. Some hanging on to the Copenhagen interpretation, uh, having the observer, and others, um, leaning more towards the many-worlds interpretation. So you have 12 employees of one company working together to build stuff, but with fundamentally different understandings of what quantum physics is [00:10:00] telling us.
So it's a wonderful experience to play with.
Although the idea of 12 people in the same company with completely different ideas sounds like the team on any modern digital transformation program, doesn't it, really? I mean, let's be honest with that. It's the difference of views that...
I don't wanna go there, because it all starts with the term digital transformation.
You know, what the hell are we transforming into? There's no common denominator as to what this transformation is about, you know? Yeah. But it's good for making money. But let's get back to that. Um, and yeah, so I challenged them, and I said, you know, can you prove to me that we are not living in a simulation?
Which seems like, you know, a task that could be achieved through quantum physics or math. And then Florian Neukart, my now co-author of these two latest books and some papers, mm-hmm, uh, calls me up after three, four months and says, Anders, I haven't slept in four months. You know, I just can't [00:11:00] figure this out.
So again, let's play with it. So we published a paper, um, 'Are We Living in a Simulation?', where we look at the simulation hypothesis of Nick Bostrom, and we take that on. And that's the Ex Machina book. And, uh, that is basically where we look at how this transitions from, um, I would say, a theological creationism, mankind and the creator, into a humane creationism, where we are basically building a super
digital entity, uh, call it a god, where we want to have bliss, immortality, and divinity out of the machine. And we are handing that over. And how is that journey?
That's what I feel like every day when I'm using PowerPoint, Anders.
Yeah. Well, um, Dave, Dave, you've used a few words there that are incorrect.
Uh, you can't use PowerPoint. You have to get others to use it for you. Let's be honest about it. That's my preferred route, Rob.
I'm not gonna lie how many times I've called up going, how do you make that happen in [00:12:00] PowerPoint? How do you make it bold again? And it's good with the name Power in it, though; you feel like you've got something there. It signifies something, I think.
Anyway, anyway, back to it. Yeah, and we can dig into the book. But, um, yeah, your question was more along the lines of: is there a kind of, sort of, evolution of the thinking here, and are they related? Uh, very much so.
Uh, because at the end of the day, it's about, you know, our perception of reality and consciousness, and what are we, and what are we up to, and what will remain of us, so to speak. So we published that book because we saw that the paper wasn't enough. We liked it and we played with it.
So we made it into a book. And while doing so, we were writing on the now new book, The Singularity Paradox, [00:13:00] bridging the gap between humanity and AI, where we look at, you know... we are trying to build these machines, robots, AIs, as a machine, as an external entity. Uh, and we think that this might have some very, very, I would even say existential, risks, however you want to define AGI and the depth of these technologies.
Uh, we don't see that as a very smart decision, because we don't get the consequences. And we see that today, almost every week, there is a breakthrough in new ways of looking at it and progressing, but we don't really get what's going on at the foundational level. So we said,
okay, maybe the plausible path then is not to merge, or to create it externally, but to take evolution into our own hands and hack biology and chemistry, and create what we have, um, coined artificial human intelligence, where the entrance point, or better, the starting point, is the Mensch, the human being, and we build from [00:14:00] there.
So, I mean, I'm not gonna romanticize, but I guess it's a good feeling to say that there is something that makes us human that is beyond software and code and programming in a machinery.
Yeah. And if that is something that we want to keep, we should keep that as the base criteria, the foundational level, and then we can evolve from that.
So, uh, our argument is basically saying that we will figure out, you know, the 'chemical scum', as Stephen Hawking called it... you know, our body. Uh, and, um, we will hack chemistry and biology. And from there on, if you wanna remain, um, a Mensch, a human being, uh, an entity that has a self-perception and has the potentiality to ask a question that no one has answered, um, quote unquote, there is potential for progress, and someone to experience that progress.
I think that's a wonderful world to be in. And therefore we [00:15:00] started this journey of The Singularity Paradox, and that was the book that we now, uh, are bringing out.
So it sounds like, and without spoilers for the book, perhaps, that in the argument that's kind of raging at the moment around AI, you have
many different camps. But let's just take two different camps, 'cause I think they're at the other ends of the spectrum. You have one camp that's, you know, kind of perhaps seeing AI as this existential element, or that it's, you know, kind of... we don't understand it, and there are some dangers and difficulty in what AI might become.
And then it might become this, you know, kind of, uh, general intelligence. And then you've got some people at the other end who are just like, well, it's just like RPA, isn't it? You know, it's like new robots in factories, but for white collar. What do you make of that dialogue that's going on in society at the moment?
Are we over-amping what's actually possible with AI today, and maybe overreacting to it because we've taken on [00:16:00] a lot of sci-fi ideas over the years? Or actually, is there really something in it, do you think?
Yeah, I think the latter. And I have, um... I mean, I have an issue obviously with the term general intelligence.
I mean, there are multiple ways to define it, and all of these leading thinkers have various ways to look at it. You know, I mean, um, I think Demis Hassabis of, uh, Google DeepMind, he had this, um, you know, I think this human view of it, where you put all the humans into a data center and you have these geniuses come together and work, right?
That's kind of, sort of, the human cognitive capabilities: the brightest minds put into one data center. And if you look at that... I mean, the recent updates of Gemini and Grok and OpenAI, um, and looking at what's probably going on in China underneath the surface... [00:17:00]
I think, you know, the speed of change. If you look at how we went from GPT-3 in 2020 to GPT-4 in 2023: okay, we were talking years. Now we're basically talking weeks.
You can see that in the models. So, when you asked it to generate an image in the early days, they weren't very good, couldn't get text right.
Now, with the new Gemini models, it'll do text perfectly. You can type it all in, and then you can stand back and ask it to edit the image. So say, no, I want that in that person's hand, and this sort of stuff. 'Cause in the early days, you'd get a different image every time you iterated it as well.
Whereas now you can keep the same one. So there's this sophistication, yeah, that is quite dramatic. And it's only been six months.
Yeah. And you build on it, and the way you've had to engineer that behind the scenes to make that work must have been incredible, if you think about the feat that's gone on in such a short, uh, space of time.
Yeah. And therefore, I think... you know, I made a very provocative statement some years ago, five years ago. I said that the Nobel lectures of the future will come out of [00:18:00] corporate labs, not out of academia. Uh, and it offended a lot of people, I think, but it was meant to. It was just a basic, um, reasoning of, uh, how things evolve.
And you see that the 2024 Nobel Prizes in chemistry and physics were both awarded to engineers, with a tight, tight link to Google. So, um, I think we're heading there. Um, and why do we have this progress? Because the brightest minds are not applying for some research project through a governmental program or long-term bureaucratic, uh, structures.
They're in the corporate, uh, environments, with almost infinite access to money and brain power, with the smartest, brightest minds, uh, just door to door. And therefore you have an insane speed of things, because if it doesn't work out, you don't have to publish something and get credibility.
You're basically doing surgery on an open heart, right? You're in the midst of tweaking and doing things.
It's that bit about the operating system that you describe. The operating system is [00:19:00] allowing that to happen.
Yes.
And commercial advantage is the thing that people are seeking, and therefore they're moving at pace with huge funding.
Exactly. So there's that thing about, do we need to rebalance the operating model slightly? Because sometimes the capitalist nature of organizations might run too fast, and we might create something without realizing what we were going to do, et cetera. Well, you know, once the genie's outta the bottle, that's it.
It's gone. So there's this bit about... I go back to it: I really liked the way you described that, and it feels like maybe we need to observe the operating model a bit, to understand the outcomes that it's creating, to see where we're actually gonna end up.
Yeah, I mean, who is 'we' in that play?
I mean, uh, do you wanna miss out on a winner-takes-all market? I mean, uh, just a couple of months ago, or a year ago, we were saying, you know, don't hook this thing up to the internet. You know, don't have, uh, instant access to AI. Mm-hmm. I mean, now the borders are down, the regulation is down, security... everything is just, [00:20:00] you know, it's mayhem.
It's just basically real-time analysis. If you go to X or whatever, you see the prompt underneath every post: please check this against, uh, Grok, uh, for objectivity. I mean, people are not commenting based on their own belief system. They're just checking it against the AI, and it's real time. So, yeah, from that perspective, uh, it's very powerful, economically speaking, even though you cannot anticipate all the business models
coming out of it. But if you look at how it's being built: you look at, you know, creating world models, where you have definitions of physics and structures that you can relate to, that interact with an LLM. And you also see these world models doing more abstract things and making predictions. And, um, you know, the whole essence of the reasoning models now being the main core of the new updates, where you have symbolic reasoning, and you have the knowledge graphs, and you have all these tied to, uh, the LLM [00:21:00] tools.
So you outsource to some math genius or physics genius, and you come back and you question it, you see the reasoning, and you see how these agents or models interact. It kind of, sort of, gives you a feeling of, you know, this is beyond anything that I could do, and my peers and my colleagues.
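[Editor's note: a minimal, illustrative sketch of the tool-delegation pattern Anders describes here, in which a reasoning loop routes a sub-question to a specialist tool (a symbolic math solver, a knowledge-graph lookup) instead of answering it directly. The router, the tool functions, and the toy data are assumptions made for illustration, not any lab's actual architecture.]

# Minimal sketch of tool delegation: the "reasoning loop" outsources a hard
# sub-question to a specialist tool and folds the answer back in.
# Illustrative assumptions only; not any vendor's real system.
import sympy as sp

def symbolic_math_tool(expression: str) -> str:
    """The 'math genius' tool: exact symbolic solving via sympy."""
    x = sp.symbols("x")
    solutions = sp.solve(sp.sympify(expression), x)
    return f"solutions for x: {solutions}"

def knowledge_graph_tool(entity: str) -> str:
    """Stand-in for a knowledge-graph lookup (toy, hypothetical data)."""
    graph = {"GPT-3": "released 2020", "GPT-4": "released 2023"}
    return graph.get(entity, "unknown entity")

def reasoning_loop(question: str) -> str:
    """Toy router: decide which specialist the model delegates to."""
    if question.lower().startswith("solve:"):
        return symbolic_math_tool(question.split(":", 1)[1])
    return knowledge_graph_tool(question)

print(reasoning_loop("solve: x**2 - 4"))  # -> solutions for x: [-2, 2]
print(reasoning_loop("GPT-4"))            # -> released 2023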
There are obviously things that... just this week I had, um, a setup in chess, um, where I got to a position that, um, OpenAI could not analyze. Uh, and it was just coming up with all these crazy ideas, and I had to correct it. And I was just playing with, you know, how to get out of it.
And you see that this is something where no human being that has any understanding of a chessboard would come to such a stupid conclusion. So there are things that give you hope, but on the other hand it's insanely powerful, and the speed... Um, and I think that has a [00:22:00] lot to do with that, um, that capital and the brain power coming in.
So, yeah. Do we need to align? Do we need an upgrade to the capitalistic model? Do we need to look at values and regulations and ethics? Yes. But who are 'we'? Coming back to your point:
uh, nefarious states won't apply our new operating model. Exactly. Well, there's the thing. And I just go back to your point: I was in a conversation the other day about drug discovery, and the comment was, we're using AI tools and interfaces like you described, and it's coming up with ideas for, uh, structures that we would never have thought of. Mm-hmm. And so rather than ten, we're down to two, and those two are dramatically more successful. So something as simple as leading the human mind down a different path, the point where it sends you, is an extremely powerful thing,
'cause you cut out so much waste and you dramatically improve the quality. So that's a simple application of it now, which everybody's doing. It's just simply prompting the [00:23:00] human to think slightly differently, like you say in the chess problem.
There's a couple of recent quotes by, I think they were ex-Google engineers, but they could have been OpenAI,
I can't actually remember. One of the quotes was that we don't quite understand how it's getting to some of the conclusions that it's coming to, and that there is, you know, some magical element to how it's, you know, kind of making some decisions or getting to some conclusions. And then, of course, there is the more famous Google engineer who resigned and was publicly quoted as regretting his life's work in AI.
I wonder if you'd bring those two events together, Anders, and sort of... what you make of, mm-hmm, the magic of it, for want of a better term.
I think there is a very strong argument, um, on a general note, that we don't get it. I mean, and I say 'we', because I'm not, you know, an [00:24:00] expert at that level in that field. But I think, you know, humanity, uh, any expert, any engineer, does not have the full grasp on how these models now come to a particular, um, result.
And I think that's a fairly commonly understood way to look at it. I can take a classical example here that is many years old by now, with AlphaZero, the chess engine, coming back to, you know, the great Magnus Carlsen, my countryman from Norway, playing chess. And the engineers were looking at how
AlphaZero trained against itself and played chess. And they didn't get how it came to those conclusions and how it played. And Magnus was one of the first to adapt, uh, to these models. And now every young chess player plays more like a robot, a machine, than based on conventional theory.
And Magnus played the World Championship, and all of a sudden, on the h-file, he did a pawn move that was a losing move according to Stockfish, the [00:25:00] supercomputer that was created by humans and all the games that have ever been played. That's a stupid move, uh, based on conventional theory.
Mm-hmm.
Uh, but Magnus played it in the World Championship, and eight, nine, um, moves later down the line, he had an advantage. And everyone came back and said, well, he messed up, but he recovered. How did he do that?
Right?
And he was asked, uh, you know, what was that move that you did? Was that a mistake?
And he said, well, no, that was an AlphaZero move. And they asked, what is that like? And he said, I have no idea. It just wins. I love that. I dunno why. I just know I'm gonna win. Yeah. And that is a very good analogy to where we are: no idea how it came to that conclusion. And, uh, coming back to your previous point on the material design and the progress here: I predict, and I have a lot of thoughts behind this, but I don't need them because I just look at progress,
most likely our periodic table [00:26:00] will be, uh, enhanced by a bunch of new elements, moving from 118, 119 now, to 130, maybe even 140, that will be, um, discovered by algorithms. Um, and if we also put that onto some kind of quantum computer, which is, you know, in the mid-term future, then you can run in real time
all these simulations that would have taken years or decades to figure out. And you come out with all kinds of new vaccines and ways to do things in a different manner. And this type of progress is, to me, much more likely than stagnation, because we think about barriers: we think about chip design, we think about energy restrictions.
But it seems to continue. And this is an argument that I think David Deutsch, uh, one of the geniuses of our time, put in his book The Beginning of Infinity: that we have an almost infinite capability for [00:27:00] progress. And that's what we're seeing right now, that this progress is speeding up. And coming back to my previous argument, the difference now is that we used to have time to error-correct.
So we screwed up, we built technology, we strived towards progress, we pushed the borders, we made mistakes, and then we came back and error-corrected. It seems now, though, that since we have handed this over to a lot of algorithmic tasks, we need to figure out how to get it right the first time.
Yeah. Yeah.
So, uh, your argument, Dave, on, you know, slowing down, or the capitalistic one: I think it's a fundamental thing that we need some kind of upgrade to the capitalistic model, to a humane capitalism, where the question is not, is it possible? The question should be, should we do it? What kind of future is worth striving for?
If everything is possible, if, um, science and fiction, or enlightenment and [00:28:00] mythology and folklore, literally merge.
Yeah. And what do you make of... I think this is connected, but maybe coming at your point from a slightly different angle. Um, one of the books I found quite influential when we first started to talk about AI on the show, for example: have you read Tegmark's Life 3.0? And his sort of thesis that we are Life 2.0, e.g.
we can teach each other languages, and we can, you know, repair certain elements of our bodies. Uh, but what we can't do yet is download our consciousness or everything that we know. We're sort of finite as beings. And he was positing that 3.0 would be a, you know, silicon-based life form that of course can download everything that it knows.
It can also take on different forms: it could be a spacecraft, or it could be, you know, a car, or whatever... like a transformer. [00:29:00] What's your take on that, like, evolutionary point around AI versus us, something that's exterior to us? Is it something that actually is going to follow us and be the version of the humans that go out into space, for example?
Yeah, I mean, that's the big question that basically then goes into the realm of life, uh, Lebendigkeit: what does it mean to be alive and have a perception of it? Which comes into consciousness, you know, and saying, what does it mean to perceive something, this, uh, subjective experience? And I think that's very fundamental, at least in my world: everything around me could be a simulation.
And as we said in the Ex Machina book, if you look at the math and the quantum physics, um, the only thing that right now holds us back is entropy. So the argument is that we could run one simulation of our universe [00:30:00] with all the atoms in our observable universe; we could just, you know, narrow down the focus of where you look, kinda like a computer game: you just render.
And that wouldn't be very problematic, based on the entropy in the universe. The challenge is if there is, um, a simulation within a simulation, ad infinitum. So there are 8 billion people creating simulations that create simulations in the simulation. Mm-hmm. If the base computer, uh, obeys the same physical laws, and there is no external, uh, intervention, um, God or creator or whatever, at one point this machine would come to a halt.
You know, it could be a new Big Bang, a new re-creation moment, or whatever way you want to look at it. But that's the challenge: the entropy that we hold as a limiting factor. You can say, okay, can you play with entropy? Can you reverse it, can you reduce it? Um, there are papers and theories, and most conventional scientists would say that's not possible.
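[Editor's note: a toy illustration of the 'render on demand' analogy above, where regions of a simulated world are only computed when an observer looks at them, the way a game engine renders only what is on screen. The class and its hashing scheme are invented for illustration; this is an analogy, not a claim about physics.]

# Toy "render on demand" world: a region's state is only materialized when
# first observed, like a game engine rendering only what's on screen.
# Invented for illustration; an analogy, not a physics claim.
import hashlib

class LazyUniverse:
    def __init__(self, seed: str = "big-bang"):
        self.seed = seed
        self.rendered = {}  # only observed regions are ever computed

    def observe(self, region: tuple) -> int:
        """Deterministically generate a region's state on first observation."""
        if region not in self.rendered:
            digest = hashlib.sha256(f"{self.seed}:{region}".encode()).hexdigest()
            self.rendered[region] = int(digest[:8], 16)  # the region's "state"
        return self.rendered[region]

universe = LazyUniverse()
print(universe.observe((0, 0)))  # computed now, on first look
print(len(universe.rendered))    # 1 -- the rest of the universe cost nothing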
But that's kind of, sort of, where we are [00:31:00] on, uh, on that. And what does that mean in regards to your question? It basically puts you in that topic of, you know, are we a part of a simulation chain? Are we at all real? But what you cannot take away is the realness of the experience: me talking into a computer and having a subjective, conscious experience of just that.
And could we put that in a bottle, or a rock, or some kind of cosmic consciousness? That's a question that is now, um, heavily debated, with various looks at it. Don Hoffman takes, uh, an approach and says, you know, consciousness is fundamental and physical reality evolves out of that. You have illusionists who say that any consciousness is just an illusion.
You have, uh, you know, the classical mind-body problem, whether or not they're interconnected, and you have a challenge. Um, there are views: you have panpsychists looking at the world as though everything has some kind of consciousness, and phi, that there are different layers of [00:32:00] consciousness. Um, and all of these are various views. But, I mean, Max Tegmark's point is very plausible to that extent: that, you know, if it's a part of our physical world and we can get it, then obviously we can also, you know, replicate it or upload it or download it.
Um, we don't need to be in this finite body. Um, but again, you could also choose to have, like, nanotechnologies to replace cells, and you have all the longevity dreamers, that you could basically... yeah.
I think they should get a crack on with that, actually. That technology would be really helpful, wouldn't it?
And did you see, this is a bit of an aside, to your point, the longevity point: did you see the documentary on Netflix about that guy who's basically making his life a misery 'cause he wants to add, like, five years on the end of it?
Why? What motivation drives you to that? Is it literally, I want five more years of existence, so I'm gonna wreck the rest? The 80 years [00:33:00] running up to it are gonna be awful, but I might get another couple on the end?
So, yeah, my take here is: give me 10 more years between 30 and 50. Yeah, that's the prime. Yeah. Like, don't gimme the 10 years, you know, laying in bed alone from a hundred to 110. I don't need those. Right. Yeah, yeah. No, you don't want those years. Your whole life spent optimizing for getting the... yeah, getting the rubbish 10.
Yeah, the rubbish 10. So, no, but there are... I mean, I think this is something that, uh, I'm not an expert on, but, um, I think the current numbers that they communicate, at least, are that right now they gain four months in a year. So they have slowed down the aging process, and they believe that within three to five years they will get it down to zero.
It means you will stop aging.
Oh, you know what the worst part about that discovery is for me? It's arrived just a little bit too late. [00:34:00] I would've loved to pause my life 25 years ago. Okay, let's take two things from that. First of all, that's your limiting, uh, way of looking at it, because if you could slow it down, why can't you reverse it?
Oh yeah, okay. Yeah, yeah, yeah. Why do you not see infrared? If you can hear better, why not hear, like, whatever? That's something that you could look at as just downloads on your brain app to get enhanced qualities, right? So why shouldn't you be able to reverse it? That's one part.
But the other part is: if you were to pick one age where you would remain forever, what would that be? 28. Oh, I knew it was 28. I was gonna say, I've thought about this. 28. That was, yeah, 28. I thought about this the other day; it's like a recent conversation I had in my head. Is that, like, a physical 28, or is it you as you were at 28, with your life?
Uh, it was... you still had the bouncy flexibility of youth, uh, responsibility hadn't quite [00:35:00] kicked in fully, and there was enough, uh, disposable income to have fun. And those were my three criteria, I'd say, actually. That's the age. So then, if you could extend me at that age, and I could increase disposable income, uh, you know, and do all the things that I wanted to do, then I think you're onto it.
And how did you get to 28?
Well, you're still... you're not 30 yet, so you dodge the 30. Yeah. That dilemma of, uh, the big decisions in life, and you're away from the struggle in your twenties: am I good enough? Am I likable enough? You know, the struggle. Uh, Rob still thinks about those things, though.
I do. This is what goes through my mind when I'm bored, sat in a waiting room somewhere. Yeah. And so,
where did you get to? What was your age?
No, I actually have no idea, but I kind of like my age right now. And I think, you know, that is the beauty, and I think Rob was aiming at that: we get into that existential, um, [00:36:00] question then.
The classical existentialism of the 20th century: the fear of death and the absurdity of life itself. If that is taken out of your life, what then will happen? We know, um, there are many paths to this Brave New World, Aldous Huxley, followed up by Neil Postman with Amusing Ourselves to Death.
You know, there is nothing, uh, other than consumption and pleasures. And I've been playing with that in, um, a new kind of existentialism that I call the undead. So it would be kind of, sort of, the, um, enhanced version of moving towards some kind of philosophical zombie stage. Right. Right.
Having the same conversation, and the lights are on, but there is no one home to perceive them.
Mm-hmm.
So you have Narcissus in Greek mythology: the beauty of this entity, this creature, is there, and it mirrors beautifully in the water, but there is no perceiver of that [00:37:00] perception of that image.
And that is kind of, sort of, my analogy to this undead state of reaction, where we just consume and react, and there is no inner drive or passion or some kind of urgency to create.
I sort of feel like that on a Tuesday. That's my... you know, when things are happening around me, but, you know, I'm not taking them in.
It plays to a really interesting point, though. I mean, the whole conversation we've just had is, you know, how should you live your life, and what's the best way to do it? You've got the greater-good argument, you've got the self-boosting argument. What's the bit where you're most happy and content?
'Cause a lot of people sort of rage against the machine internally and suchlike, and I don't think we've ever worked it out. But then, if you extended life indefinitely and could reverse it, and say you could vary where you were on the age spectrum, your consciousness is still aging even though your body might be moving about. Does that actually increase happiness or not?
Oh, I dunno. If you could live forever, [00:38:00] would you become so utterly bored 'cause you'd experienced everything? Would you then just sit and cogitate? I dunno. That was the thing about if you could go on.
So my take on this: we urge towards life and search for some kind of, you know, purpose of some kind.
And we strive for happiness. And I think, you know, chemically speaking, we can make ourselves happy; you just take some means and have that feeling of happiness. You could do that technologically speaking, right? But from a natural, like, biological, I'd say evolutionary, perspective, we cannot make ourselves happy, but we can make ourselves less unhappy, by detaching ourselves from things that are holding us back, dragging us down, whatever.
And by thin-slicing that, you put yourself in a position to be struck by something called happiness. Yeah. So you have the experience of the happiness, and that's the magic. And the same goes for the purpose part, the absurdity of a purpose. I think, you know, life is kind of, sort of, [00:39:00] wonderful as it is, with its struggles and existential, you know, fights and whatever.
But it's kinda, sort of, like a wonderful journey to nowhere. And in that journey, uh, you fill your life. So you consciously do something active in order to experience that action. And by experiencing that action, you give purpose to your life. So it's about filling your life in order to have a fulfilled life.
So the more you are driven by reaction, driven by external factors (are you liked? how many followers do you have? the next task, the Zoom meeting, whatever), the less agency you have over your life. So you don't have action; you have reaction.
Mm-hmm.
The reaction makes you tired, worn out, depressive, and you feel the overwhelming weight on your shoulders.
But if you have an intrinsic [00:40:00] motivation, and you fill your life with activation, then you have a fulfilled life. And I think this is one of the big existential threats that we have with technology today. If we can have everything, what remains of the Lebendigkeit, a wonderful German word, the liveliness of life itself, where you experience the wonders of life, as compared to just living it?
Uh, and that, to me, is the whole of my existentialist thinking towards, um, how to avoid the zombie apocalypse. Yeah.
But, just to this point, technology is driving a world that pokes us a lot. So we lose agency rather than gain it, like you say. So are we in the driving seat, or are we being driven?
It's like the thing... and I think it's hard for many to get over to that, to say, I'm going to be proactive and drive out and be in control, et cetera. But it's, I mean, it's a very good point: where [00:41:00] happiness comes from, but how society affects that, 'cause we're in that system whether we like it or not.
So we have to adapt to it.
Let's skew for a second back towards the AI aspect of this, and just talk about AGI. And just for the sake of definition: artificial general intelligence is, like, post-singularity, this, you know, sort of self-aware AI. We've pondered this on the show before, but I'm sure you have a good take on this.
Uh, I'd love to hear it. Which is: do you think we will realize when AGI happens? So, you know, is it something where, you know, a scientist is intentionally trying to create AGI, or does AGI happen outside the control of the human?
Yeah. I mean, as I said in the beginning, the term AGI, uh, is the challenging part here.
Mm-hmm. So one could almost argue that we have AGI today, because a lot of the cognitive... yes, exactly... that we have. So is it a year away? Is it [00:42:00] three years away? Whatever. With the current speed of things, most cognitive tasks performed by human beings, if now put into a humanoid robot with the mechanisms that we have seen being created...
uh, I think, you know, these comparisons are sooner rather than later. If we look at AGI as a definition of some kind of self-awareness in the machine, that's a completely different way of looking at it. But to ponder your question on the realization of AGI: um, if we take as an analogy... if you rebuild the human brain neuron by neuron, and we see a neuron as binary, a zero-and-one firing neuron, uh, 80 billion, or 83 billion, neurons in our brain... if we see that, and we start to rewire that and hook it up to some kind of AI, you know, you figure out [00:43:00] a thousand neurons, and you figure out how 10,000 neurons are interlinked, and you have a hundred thousand, a million, a hundred million, a billion.
Is there a point in the brain where the conscious experience gets lost? So you transition into a cloud brain, and you have infinite access to free knowledge. You know everything; all your memories, everything is there, and it's put onto a cloud and it's saved down, and you download your thoughts. Is there a perception of whether this thought came from the cloud, or it came from,
quote unquote, somewhere within? You know, what kind of sparked the magic of the arising of a thought? And this is kind of, sort of, the feeling of... coming back to my 'lights are on, but no one home to perceive them': you know, can we build a Dave that is a hundred percent identical, atom by atom?
And he has a brain, and everything is a duplicate, and it has everything, uh, indistinguishable. The jokes write themselves. Anders, you've just written a brilliant
joke: [00:44:00] it's not a replica of Dave if there's a functioning brain in him. All right, well, let's get that straight off. I mean, there were too many jokes filling my head as you were talking.
Yeah. But you would just put five of them in the room, and they would all sit laughing at their own jokes. You know, five Daves.
Hey, so how do you entertain a Dave? Stick him in a room with four more Daves. What a brilliant idea. What a night it would be. Oh man, how many old fashioneds...
It just doesn't matter, because there's no one there to perceive it.
Right. So you have laughter without experience. No, but that's, I think, the challenge here. You know, what is it to see... is it already too late? Is the power, you know, the genie is out of the bottle, is the power of this capitalistic drive towards AGI so strong that, you know, we are on some kind of path towards obsolescence for the human?
Uh, so we have the emergence of Homo obsoletus, that is not needed for any particular task [00:45:00] whatsoever. You know, are we already there? Or are we, you know, seeing the wonderful birth of some kind of access to new, uh, states of consciousness, or some kind of superintelligence that will take care of all the challenges we have with ideologies and climate and poverty and diseases and everything?
So it's something that is so extremely rapid, and I think everyone that is an expert, or a self-proclaimed expert, in this field has to be, um, at least open to seeing this insane speed over the past weeks and months. Mm-hmm. Uh, and then go back and say, what, where are the limits here? Are there any? Uh, if you are a fiction writer today, and you wanna write a movie that takes three years to produce, and you wanna bring it out as a Hollywood blockbuster in three years, what the hell do you script?
Yeah.
Yeah. Well, yeah, indeed. But to turn that [00:46:00] on its head: what will entertainment become, right? So, a three-year process... I can perceive a day where you have a chat with your TV and say, I want 10 hours of this, this type of thing, I'm interested in this. You describe what you like, and then it goes and generates you a personalized experience.
It's really the Black Mirror episode, isn't it?
Yeah. And then it plays it back, right. And that idea of it all being totally configured and curated for me means that the concept of building a piece of art where everybody experiences the art... what happens to that? I'd always see expression in the way I want to receive it, as opposed to somebody else's version of expression.
And I wonder about that in the future: will it be so overly customized we'll never see the beauty of others? Yeah, uh, being enriched. But is this dystopia or utopia? Because... well, no, exactly. It's WALL-E, isn't it? It's like becoming this... we're stuck in the system.
But isn't that the magic, you know, Rob, isn't it?
If you were to go to a stand-up comedy show and outline what kind of jokes they were supposed to tell you, right? Yeah. And this is the funny thing about this: if you go to your YouTube [00:47:00] wall right now and you look at the videos, you probably find, you know, 10 of them that you find interesting.
Yeah. But if you were to prompt that and say, what do you want to look at, you would never come to that idea. So you are not at all capable of describing what you want to have, right? And that is, you know... if you try to describe what you like, the magic is not in that. The magic is found in the experience of what you didn't know that you liked.
Yeah. Absolutely. Yeah.
I think that is, like, one of the wonders of life, you know: what catches you off guard. It's, uh, one of those beautiful sceneries of the human mind. When you stand in front of the mirror and you smile with no one watching, you know, and you experience that... where did that come from? You know, what is that? That's the magic. And I think that holds true for a lot of fields [00:48:00] where you are surprised that you enjoyed it. You are surprised by that laughter; it catches you off guard. And if you were to describe that yourself, then I think it becomes mechanical, and then it turns into that state of self-optimization.
And it's kind of, sort of, like a reinforcement learning algorithm: you just optimize your own behavior based on your own criteria. And that, I think, is the state of being undead.
Well, look, I want to bring today's conversation to a bit of a conclusion, but, for the first time, also to a bit of a cliffhanger.
It's like a cliffhanger episode. Easy, Dave, easy. This is exciting. But I'm glad to say that it's a new experience. Yeah, it is a new experience for the podcast, isn't it? It's a cliffhanger; it's the first time we've done one. And, uh, Anders, I'm glad to say, is hopefully gonna join us again for the Christmas episode this year.
So, by way of setting that up, Anders, um, I wanna bring a couple of threads together. I know you've been looking at quantum and AI, and how those two things come together. But I wonder if you can frame that by giving us [00:49:00] some thoughts on whether this accelerated rate of technology, quantum intersecting with AI at some point in probably the very near future, makes the chances of us living in a simulation more likely?
Mm-hmm. I would lean towards the latter. And, uh, if quantum computing with a million stabilized qubits, just to take a number, is possible in the foreseeable future, then progress is so rapid that it's not to be ruled out that, you know, we can simulate these experiences that we have. That's something that I lean towards, uh, today.
Es, what have you been thinking about?
Well, at least we're not there yet. Ha. So we're still struggling in the mud in reality. So let's take it from there. [00:50:00] Talking about experiences in teams: this week I hosted several workshops around team behavior and dynamics, and one question that keeps coming up is, who do we see as future leaders in our team?
So, uh, it's also about being prepared for the future. Uh, and just as often, I noticed the team was quick to point out the loudest voice, uh, the most confident one, the one that was really, you know, up front and taking it naturally, but quite dominant. Uh, but it made me pause and think about the work of John Maxwell, uh, who says you have to learn to follow before you can actually be a good leader.
Uh, so, uh, he argues that true leadership begins with followership: with people who take responsibility without needing the spotlight, who serve the team's purpose, ask great questions, lead themselves before they lead others. And think of that quiet colleague who always lifts others up, or the team member who constantly shows up, listens, and grows.
Um, so especially now, in this era of life where we are, with the speed of life, [00:51:00] what would it mean to be a good follower today, to become that great leader in the future? What are your thoughts on that?
You know, I... um, a personal reflection on that. I definitely, when I was, like, younger and I had aspirations of being...
28. Dave, when you were 28.
Maybe even pre that, to be honest with you. Oh, right, okay. Quite... not aware of the world yet. Not aware of the world. Yeah. Yeah. This is horrific now, the honesty I'm about to put on the table. But yeah, I definitely had that notion that to be a leader, you had to be the person at the front of the room, or at the front of the peloton charging forward, like, everybody follow me, sort of thing.
And I used to, you know, sort of, I dunno, aspire to be that, and then work quite hard to be that. And it probably took me 20 years to realize, one, that's not necessarily even the best form of leadership, and also, the contribution you can bring to a situation... you can [00:52:00] do that from the back.
So, yeah, I think for me, I've spent, you know, a lot of my professional life sort of thinking about that question. And I'm all in on: if you don't understand what followership is, and how that works, and how powerful that can be, yeah, you're only gonna be a one-dimensional leader, I think, without that.
Yeah. And I think, also: are we still good followers? Eh, it's not that you stop following when you're a leader.
Yeah, no, absolutely. Uh, especially
if you're in a company that has other leaders above you. Uh, do you have their back? You know, are we still cheering them on? Uh, and with so many people in companies that have opinions about their leaders, they almost dehumanize them.
Uh, like, they have to be perfect, because every mistake they make, you know... uh, there's so much attention to the mistakes they make, instead of: well, it must be tough to be in that position, uh, to have those broad shoulders that take on everything [00:53:00] from the leadership.
I think there's a bit there about how you're perceived as a leader making a massive impact on whether or not you're effective.
Yeah. And I think if you haven't sat and observed other leaders around you, and understood what that means for the impact on the people that they have leadership over, uh, then you're probably not as self-aware as you need to be. So it's about gaining the experience. You trying not to say the word
ownership there, Rob?
Well, no, I was just trying to... my, Dave, it's early in the morning. My brain isn't quite functioning correctly. This is where the AI should have been sat next to me, telling me what to say. Uh, but the whole thing about: if I haven't experienced it and been part of it, how can I ever know what good looks like?
Yeah. So if you just get thrust into it and you've never seen a good leader or a bad leader... we've all seen good leaders, we've all seen bad leaders. So when we get to that position, does that help us behave differently? And I suppose that's a big part of it as well, isn't it? Being part of, um, the experience means that you might discharge the role better.
Yeah.
And I [00:54:00] think, you know, this is one of these interesting aspects when it comes to leadership: even if you have understood the theory behind it, some people just seem to get it.
Yeah. Yeah.
It's, like, something that they just do. And the doing part is very interesting, because you could do the same thing with a different result, basically, right? And that's some of this human magic.
Right? And that's, that's some of this human magic. And, um, I, I remember this is also many years ago, I, I came up with, um, what I call the H revolution because, uh, um, it was an HR conference and I said, if you look at human as resources that we ought to control and, and dominate, uh, that will be technological.
So I, I played with these four new definitions of the R in hr. So one was human reasoning. So if you are able to reason today enhanced by AI and knowledge obviously, but the reasoning part, the understanding part becomes crucial. So, [00:55:00] um, if the PE person that can reason in the room. It's often listened to.
The other part is, uh, rhetoric. So the ethos, pathos, logos, how you bring things across, when you speak, how you speak, that is something that we relate to how we communicate. The third part I came up with was recognition. So if you recognize, uh, the other people in the room and you uplift them, and, and you, you become that person that can sit at the back, but you recognize what they do.
And, and the fourth part is, is obviously the, the relation part here that you have human relations. And I think moving on that, you know, the foundational level of. Um, kindergarten education, executive education, all the way should be about those components. You know, we need a society of understanding. We don't need experts and knowledge workers that dominate.
We need people that are capable of reasoning. We need people that recognizes more than empathy. The [00:56:00] recognition is the important part. You don't have to feel anything, you just have to recognize it. Kinda like a robot, you know, if I recognize that you're, you're, you're having a bad day. I don't have to feel anything.
I just have to say the right things and recognize you. Oh, I wanna challenge you on that. Yes, please. Because I've seen quite some booklets given to leaders because, well, the feedback from employees was that they were, you know, not very, uh, empathetic. So let's give them some examples in a PowerPoint, and then, if you have this situation, please say this.
And then, uh, you're all good: if, then, else. We all know that. I hear what you're saying, you're saying the correct words, but I'm not feeling you. And for me, that's, uh, even worse than not saying the right words.
Yeah, I get that. But, uh, there is a book that could be read on the topic, from Paul Bloom, called Against Empathy.
Um, and he argues for, uh, rational compassion. And this is something that I've experienced myself. You know, I have spoken in front of [00:57:00] thousands of people; you're exhausted, you play it over in your mind, you've done a lot of work, and you've been there on stage, and you try to have an interaction with all the people in the room, like, literally being present.
And people come up to you and say, well, I feel you, I feel your energy, you're so empathic. And I'm saying: nothing going on. Uh, I'm just an AI at that moment, but I am capable of recognizing this person. I'm aware, and I'm capable of the rhetoric in communicating. And this has made me think... and I like your challenge, because it's exactly what I get all the time, and I don't have the answer to it.
I just have the experience, and I see it all the time. And I recognize people that are... you could call it manipulation, even though it might not be conscious manipulation, but I see people that are really good at making other people think that they feel a lot. And this is why I have a challenge with the whole human [00:58:00] skill part in AI.
You know, can a robot be empathic? And I think, um, if it's really good at rational compassion... it's kind of, sort of: if a robot gives you goosebumps, you cannot goose it away. Mm-hmm. And that's the level that we're at. So I like your challenge here, but it's also provocative, because it spins out a lot of what technology is capable of doing.
But these are firsthand experiences that I have had, where I honestly, afterwards, can say: I didn't feel anything. And this person, you know, who has had a traumatic loss, and done all the psychology, and is an expert on esoterics and spiritualism and feelings and emotions, this person feels me and feels my empathy.
Hmm.
Like, what's going on here? So I'm not judging on this; I'm just [00:59:00] throwing out observations that I wanna play with. And, uh, Bloom, with his Against Empathy, made me think about this. Um, and it's a very interesting topic.
Well, look, on that note, let's leave it for today, with a big thank you to Anders.
Always great to see you, man. Thank you so much for spending time with us this morning. Thank you for having me, guys. It was a wonderful conversation. It was indeed wonderful. Now, we end every episode of this podcast by asking our guests what they're excited about doing next. That could be something in your personal life, or it could be something in your professional life, or maybe a bit of both.
So, Anders, what are you excited about doing next?
Yeah, right now I'm off to speak at a conference here in Austria and to address, uh, leaders. So I wanna talk about values in Europe and the future of Europe. So I'm excited about that. Uh, on an intellectual level, I'm playing with my, uh, triangular alchemy, also writing a lot at the moment, and I'm very excited about that, because I think we need to change how we look at organizations, uh, to find that, quote unquote, perfect fit for the human being in the companies, and how they're gonna, uh, survive in this very rapid technological era.
So those things are great excitements. [01:00:00] Very cool.
If you would like to discuss any of the issues on this week's show, and how they might impact you and your business, please get in touch with us at Cloudrealities@capgemini.com, or on LinkedIn. We'd love to hear from you, so feel free to connect and DM if you have questions for the show to tackle.
And of course, please rate and subscribe to our podcast. It really helps us improve the show. A huge thanks to Anders, our sound and editing wizards, Ben and Louie, our producer, Marcel, and of course to all our listeners. See you in another reality next [01:01:00] week.