Cloud Realities

Organizational ‘purpose’ can set out a North Star that creates directional alignment, which is especially important in organizations with deep empowerment. But are purpose statements merely platitudes on a flip chart, or should the focus shift to genuine stories, meaningful patterns, and tangible actions?

Returning from their festive slumber, Dave, Esmee and Rob talk to Dave Snowden, Founder and Chief Scientific Officer of The Cynefin Centre, about purpose, why it may not help, what the alternatives might be, and how to listen for the ‘micro-narratives’ in your organization.

TLDR:
00:50 Back from the Christmas break!
05:42 Exploring organizational purpose with Dave Snowden
1:00:20 Discovering personal purpose through Ikigai
1:07:55 Celebrating a 21st anniversary and the Children of the World Project
 
Guest:
Dave Snowden: https://www.linkedin.com/in/dave-snowden-2a93b/

Hosts:
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/
Guest host Sandeep Kumar: https://www.linkedin.com/in/sandeepkumar99/

Production:
Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
Dave Chapman: https://www.linkedin.com/in/chapmandr/

Sound:
Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
Louis Corbett:  https://www.linkedin.com/in/louis-corbett-087250264/

'Cloud Realities' is an original podcast from Capgemini

Creators and Guests

Host
Dave Chapman
Chief Cloud Evangelist with nearly 30 years of global experience in strategic development, transformation, program delivery, and operations, Dave brings a wealth of expertise to the world of cloud innovation. He is also the creator and main host of the Cloud Realities podcast, which explores the transformative power of cloud technology.
Host
Esmee van de Giessen
Principal Consultant Enterprise Transformation and Cloud Realities podcast host, bridges gaps to drive impactful change. With expertise in agile, value delivery, culture, and user adoption, she empowers teams and leaders to ensure technology enhances agility, resilience, and sustainable growth across ecosystems.
Host
Rob Kernahan
VP Chief Architect for Cloud and Cloud Realities podcast host, drives digital transformation by combining deep technical expertise with exceptional client engagement. Passionate about high-performance cultures, he leverages cloud and modern operating models to create low-friction, high-velocity environments that fuel business growth and empower people to thrive.
Producer
Marcel van der Burg
VP Global Marketing and producer of the Cloud Realities podcast, is a strategic marketing leader with 33+ years of experience. He drives global cloud marketing strategies, leveraging creativity, multi-channel expertise, and problem-solving to deliver impactful business growth in complex environments.

What is Cloud Realities?

Exploring the practical and exciting alternate realities that can be unleashed through cloud-driven transformation and cloud-native living and working.

Each episode, our hosts Dave, Esmee & Rob talk to cloud leaders and practitioners to understand how previously untapped business value can be released, how to deal with the challenges and risks that come with bold ventures, and how human experience factors into all of this.

They cover Intelligent Industry, Customer Experience, Sustainability, AI, Data and Insight, Cyber, Cost, Leadership, Talent and, of course, Tech.

Together, Dave, Esmee & Rob have over 80 years of cloud and transformation experience and act as our guides through a new reality each week.

Web - https://www.capgemini.com/insights/research-library/cloud-realities-podcast/
Email - Podcasts.cor@capgemini.com

CR087: Does 'purpose' matter with Dave Snowden
[00:00:00] So I'm male and the bike cost a lot of money. So I literally fell sideways and pleated and kicked the bike into the heather and slid down the road. Yeah. Wow.
You know what this is, you know what this story is begging for Dave, and I think you're the person to do it. One of those LinkedIn long posts where you can tell everybody what lessons you've learned around.
Welcome to Cloud Realities, an original podcast from Capgemini. And this week we are going to look at this concept of organizational purpose. What is it? Does it have any merit? And are there any alternatives to how we think about aligning our organizations? I'm Dave Chapman.
I'm Esmee van de Giessen. And I'm Rob Kernahan.
And we are back after Christmas. It's our first one back. It is freezing January. Es, did you have a wonderful Christmas? I know you were out traveling the [00:01:00] world.
Yes. So after traveling the world, I got a huge tan, which I'm still exploiting, uh, till today. And, uh, I spent time with family and friends, so it was lovely.
Uh, and also, being in the cold at Christmas still feels like the best way to celebrate Christmas, to be honest.
Yes. You said you were struggling with a sunny Christmas. Yep. Uh, when did you get back? Uh, like the day before Christmas, actually. Oh, right. So you actually, I didn't realize you were actually back in Europe for Christmas itself.
Yeah. Yeah. Just think about the poor Australians who have a sunny Christmas every year. I always thought that would be weird, Christmas tree in the corner, but the barbecue's out going and the sun's blazing.
It doesn't work for us at least, I think, because we're so used to, you know, being able to put on skis or go ice skating, and uh, yeah, it's just that warm Christmas feeling.
See, I was all right with it. I've experienced it a couple of times in my life, I'm happy to say. And the only thing that surprised me, like I think we talked about in the Christmas episode, was the fact that they haven't updated any Christmas iconography to represent the weather [00:02:00] in their particular region. So like, you know, snowmen, yeah, Santa in a giant, you know, kind of warming jacket, things like that. That was the only sort of dissonance that I had. I was actually okay with, like, a barbecue for Christmas dinner. I like getting a suntan on Christmas day. But what did you do this Christmas then?
So no, I was in the UK this year. I was, you know, uh, hosting a lot of people, and it's lovely actually. It's very family oriented. Enjoyed it quite a bit. How about you, Robert? How are you doing? Did you enjoy it?
I was all right. A quiet Christmas. Tried to get away to go skiing for New Year's and travel chaos ensued, which was interesting. I spent far too long at the airport. It was spectacular, it's fair to say.
So before you tell the story, it's probably worth saying that when Rob went on his summer holiday, he was stuck on a plane without air conditioning for, what was it, five hours, Rob? Five, six hours, for two days, and then eventually the plane worked and off we went, but the flight was cancelled. And then [00:03:00] you had to overnight, and then you had to go back the next day. That's right. And we were like, ooh, God, mate, that was unlucky, wasn't it? That, surely, surely, it's never going to happen again, is it? So, Rob, what happened this time?
Uh, well, the first thing that happened was we tried to get on the plane and there was fog. So, the flight got cancelled after we sat on the plane for a few hours. We couldn't get a slot. So, that was, uh, an overnighter. Family into hotel, then family back out of hotel. We eventually got to our destination at 8. Lost a day's skiing. And then on the way back, air traffic control decided they didn't have enough people and our flight was cancelled again.
So another overnighter, and then back to the airport, and then eventually, after many, many hours' delay on a flight that should have left hours before, we eventually got home. So I must admit, in the last four family flights, three have been cancelled, so I'm pretty convinced one member of my family is cursed with flying.
It is rough, and with all your travel preparations as well.
I know! It doesn't matter, it doesn't count for anything when it's all, like, you know, fog or traffic [00:04:00] control you can't do anything about.
It feels, it feels deeply karmic, I have to say, given the amount of effort and anxiety you put into traveling, that something like that would happen.
I go out of my way to try and make it smooth and by the numbers, and this was anything but. Um, but, but the bit in the middle was very, very good. So there you go.
It is justice though. I mean, we, yeah. Sorry. I climb mountains. We have a phrase: if you don't climb up, you shouldn't be allowed to slide down.
Oh, just because I'm using the excellence of engineering to allow me to get to the top much faster. Yeah. It damages the mountainside. Yeah. There we go. So, you've heard him. So let me quickly introduce Dave Snowden. Dave, it is great to see you today. How are you doing?
I'm fine. Thanks. Yeah. It is very nice to see you.
How was your Christmas?
Well, we have three families involved in a new grandchild who's now just over a year old.
So you have to wait for your allocated day. The magic of Christmas comes back. Once you've got your allocated day for the grandchild, you work everything else around it. So I went [00:05:00] to the Lake District for a week before Christmas and then a week after, and went up to Eryri for three days between Christmas and the New Year. So I try to spend my time in the mountains when I've got time.
That's gorgeous. That's gorgeous. The Lake District, obviously particularly beautiful for that kind of thing, but also a really good foodie scene in the Lakes these days. Do you engage in any of that?
No, because I'm on, I'm, I did my first Wainwright round in 40 days, and I'm now doing my second one in 45. So if I'm up there, it's sort of get up, walk with a head torch, finish with a head torch, and grab something when you get back to the cottage before starting again the next day.
Wow. Good going. Good going. So Dave has joined us today. He runs the Cynefin Centre, and we've talked to Dave on the show before, and we've used his thinking on the show, or at least attempted to, before. So we're delighted to have Dave join us for this conversation, which we will get to shortly. But before that: what are you confused about this week, Rob?
Well, David, it's a good one this [00:06:00] week, which was, if you remember, in the Christmas episode one of the things we predicted for 2025 was the AI bloopers, and, you know, they're coming.
Oh, I know where you're going with this. I know exactly where you're going with this.
So, uh, lots of controversy with people getting news alerts that AI had consolidated on their phone, which was taking a very trusted editorial source and corrupting the news to be almost completely different. So people are getting these AI notifications on their phones and they're just blatantly wrong, telling news that just isn't true.
And there's been a lot of discussion about this, especially with the news sources. The BBC was a big one.
Do you have, before you go on, do you have a specific example? Like did it literally turn the news around, you know, kind of 180 degrees, or did it just slightly tweak it to a point where it lost its meaning?
No, no, it created news about the story that was completely and totally incorrect. Like the individual in custody had died, or that somebody had won the championship long before the final had ever [00:07:00] taken place. So it actually just hallucinated to a level where it was making very significant statements about a news story that just weren't true. So this is, this is bad. This is bad because people trust, yeah, the Apple platform, people trust these edited news sources, and then suddenly there's this corruption coming through. So my confusion is around: is this a massive incident that's going to affect trust in AI with people? Because this was very widely reported as being just completely wrong. And is it going to cause an impact on society's adoption of it? Or is it just a blooper, and we'll get over it and we'll move on? And the confusion is about, has it dented people's opinion of what AI is? Because a lot of people still don't properly understand AI.
This was their first foray into it. The system's doing it for them, you know, all that sort of stuff. They didn't really have to do anything other than accept the terms and conditions to switch it on. Has this completely knackered the trust, or do people just go, yeah, whatever, let's move on now? And that's what I'm confused about.
I gotta say, I'm solidly in the latter [00:08:00] camp on that one. I honestly feel people will just go, hey, move on. Okay, you know, one of those things. And I don't think they'll, I mean, some people will, but I think it will be a smaller minority than you would ideally want that would lean into some of the issues that what you've just described might suggest for us going forward. I think for the majority of people, it will just pass them by, but they might see it trending on X or something and be like, ah, you know, have a quick look at it and then move on.
Well, it's interesting that, because, I mean, it did really just make stuff up. It was really bad. It wasn't like a subtle fact change. It was like a massive shift in the story.
Was it one or two stories or was it like tens or hundreds?
No, it was one or two very big ones, and the way they stitched it together obviously just messed everything up, and it shows you that they've rushed to implement it. What, what has this gone through from a QA perspective?
Have they really, do they really understand how it's working, or is it just that the AI arms race meant we have to get it out fast, and it's, you know, damaged opinion? [00:09:00] I think that, I think the arms race is certainly a factor in why something like that is out now.
Just in terms of how it functions, it seems jarring to me on that platform, to be honest with you. If you read that, what would it do to your level of confidence? Is it something where you'd be like, they will fix this going forward? Or do you think you would switch it off?
Well, it actually makes me think about when we were first in digital customer experiences. Don't know if you remember that. There were at least people thinking that you'd get special discounts if you had an Apple, or lower discounts, because you were the ones, you know, if you have an Apple MacBook, then presumably you have more money to spend.
Uh, and then you also saw some advertisements. I don't know if you remember, but somebody was Googling for baby stuff, and then the dad actually got advertisements with a lot of baby stuff, and that went completely wrong. Uh, and I think those were [00:10:00] stories that were really in the news, or at least top of mind for a lot of our customers.
And then suddenly it just went to the background, because now we're smarter and no, we're not doing that anymore, we have extra control mechanisms in there. And then I think you will be convinced that it's safe enough, but it depends on how many more stories are, you know, speeding up, and at what scale, I think.
Yeah, presumably now it's been stopped, and they've actually been fined or whatever, naturally they'll wind that back. But Dave, we've never had a chance to talk about the rise of AI, because I think even just as you came on the show last time, we were in the very early days of ChatGPT, and obviously it's accelerated at an absolutely crazy rate over the last 18 months or so. What's your perspective on Rob's observation there?
I think, well, I mean, there was a paper I read recently, which is really good. It says human beings hallucinate; AI bullshits. Right, right.
Um, but [00:11:00] actually hallucination is key to human decision making. So I think there's a couple of things. First of all, AI is entirely based on text and tokens. So that's about 5 to 10 percent of what human beings know anyway.
Um, and so what you find is it's not intelligence, it's inference, right? Yeah, the training data sets determine what you get, and the training data sets are all text. Yeah, most of the time. I mean, for example, we use pheromones to determine trust, not what people say. So there's far more to human decision making than we think, right?
So that's one problem, and the danger is we're going to dumb people down. The danger with AI is not that it exceeds us in intelligence, but that we become hyper-dependent on it. And there's already evidence, there's loads of papers coming out which show how we lose cognitive capability if we use AI.
Ooh, is that a bit like, I had a, I had a, uh, a knee injury over the summer last year, and it was terrifying how fast my [00:12:00] thigh muscle atrophied.
Got it.
And it's the same principle. All right. So, I mean, I can smell snow coming. My children can't; I grew up in the country. It's that sort of thing. So I think that's a real danger. Right. And there was somebody who regards himself as a thought leader. He said he's spending six to seven hours a day on AI, and it's like, well, that just, I mean, my view is, um, you know, there's consultancy studies on this which show it increased productivity.
If you had increased productivity using AI, it means your consultancy was pretty crap in the first place. Yeah. So, I mean, I use, I mean, ChatGPT is a much better search mechanism than Google. No question. But the energy consumption is really scary.
Oh, yeah. Yeah. Yeah, 10x. And also, you know, you can find things in books. So the danger is you can't trust the data anymore. Well, it's fairly easy to see what it is, and the pictures are so stereotypical. So I think the trouble is the dumbing down of intelligence which AI causes. [00:13:00] So I think companies that maintain human decision making will have competitive advantage, and it's very significant that the American elite schools where the tech bros send their children have banned technology from the classroom.
That is highly significant, isn't it? I remember you mentioned that last time we spoke, and I've talked about it a lot since, because it's extremely meaningful. I wouldn't mind digging into that just for a second, actually. And from what you've seen, what are they changing in their teaching after they've got computers out of the classroom? My read on it was, presumably they think information-age teaching is broadly done. It's how you find things.
I mean, I mean, I mark up books when I write them and when I read them. I had to find something, something I remembered vaguely from university, last year, and I found the book within two seconds. I found what I was looking for. Yeah, because I've marked up the book with different colored pens. My, my fingers have memories. I [00:14:00] sort of remember the context. Human memory is very fragmented like that. And that's the search mechanism, right? If I just did a keyword search, yeah, then I would only find things which match what the word said. Yeah, and that's a real problem, right? So I think we're letting a tool get out of hand. Sorry if I'm going to be really rude: most of the AI, I mean, it's been around for 30, 40 years. Yeah.
And I remember saying 20 years ago that what matters is the training data sets, not the algorithms. So the only thing that's changed is the ability to scrape more data and create more sophisticated training data sets.
The algorithms haven't changed much, right? And, you know, you need to use these things as tools, not bio-re-engineer the brain to fit the tools. We've allowed consultancy to get to the point where all you do is regurgitate material from other projects. Well, AI is always going to do that better. Right, right. Very good, very good thought. [00:15:00] So, Rob, has that helped you out?
No, it's opened up a whole new can of worms, quite frankly.
I think we've done quite a good job with that. We shall return to this one, I think, because I'm pretty confident there'll be more bloopers coming. I was just interested to see how society reacts to it. Well, I think the discussion there was an appropriately messy discussion about a messy subject, is probably what I would say. And yeah, far from untangled. But look, thank you for that. Let's move on to our conversation today, which is going to be about organizational purpose. So Dave, how's life in the Cynefin Centre? What have you guys been up to?
A whole range of things. So we're working a lot on OD and culture measurement. Well, we just launched a new sort of standard offer around measuring culture, which is using cartoons as a trigger point. So that's built on semiotics.
How does that work then?
So we've got two brandings, one with Gaping Void, the other with Comic Agile, [00:16:00] right? So you have six cartoons; you choose the cartoon which represents your culture, you tell a story about why, you then index that onto a set of indexes derived from cultural anthropology. And that all costs you, you know, less than a thousand pounds for three months, and you get a standard report every week.
So what it shows you is culture represented as, um, as collections of narrative. Okay. And the cartoons are really powerful because they trigger people to think differently.
Interesting. That sounds quite interesting, actually. And that must, I mean, as you deployed it, that must open a lot of eyes internally about how the culture actually works within an organization, with leadership. Is there a lot of surprise in that process, or do people already know?
No, we've been doing it for years. The trouble is it takes a lot of persuasion. So one of the things we've done is to put the automation in, so people can now buy it with a credit card rather than run a project, and just play with it. Um, we've got another one out at the moment on AI and the role of AI in your company, [00:17:00] and again, the idea is to gather people's real-world experiences and get them using high-abstraction metadata. So we're avoiding text, because human beings don't work well with text and tokens. AI does; human beings don't. So we're using abstractions, and what that allows us to do is create narrative maps, which, if you want the technical language, is an assemblage or a strange attractor. So it's a pattern of belief or attitude which you can measure at scale. So, for example, I can produce the map for the whole company, right?
And the executives can look at the map and say, well, look, we'd like more stories like these and fewer stories like those. Which, by the way, is one of the ways you create a purpose, yeah? So: I need more like this, fewer like that. Not: I need to achieve this goal. So that's concrete. But then each division or each work group has their own map from the source data. So every map actually looks different. So everybody's going in a different direction, but the system is starting to align. And that's called fractal engagement. What you don't want is generic company-wide programs, [00:18:00] because that's homogenization. It's the lowest common denominator.
The thing that it immediately suggests to me is how many of them come out looking like a Dilbert cartoon.
We actually pre-select the cartoons, so you can, if you want, customize that, so I'll send you the link. So Comic Agile have got some. I mean, the thing about cartoons is they can say really hard things in a way that you have to laugh, yeah, so that you can swallow them a little bit easier. So Gaping Void is the other one. So for a non-agile group, we've got the Gaping Void cartoons, which are really harsh. I mean, there's some on the wall behind me there, right? Including my favorite of all time, which is the one with the red cross and the slogan: same cross, different nails. And that makes you think. So, cartoons. This is basic 101 cognitive neuroscience.
We do not think about a subject unless we detect an anomaly. So what cartoons do is create anomalies. So you start to think, [00:19:00] whereas if you're responding to a questionnaire with standard questions and Likert scales, you don't have to think about it. You just mark it as you go.
It's quite interesting. Does that feel very different for the organizations you interact with around it? Is it a big leap for them to start to work in this way and introduce things like that?
It is. That's one of the reasons why we've produced a sub-1,000-pound, buy-it-and-see offer, so people can try it. What we found is, when people have their own narrative, they get it instantly; when they just have demonstrations using other people's narrative, that's more difficult. So what you're doing here is you're creating a very high level of intimacy for C-level, and you're creating a new mechanism of communication, because they can say, well, I want more stories like these and fewer stories like those. Yeah, those are real stories from the workforce, not something which the executive is creating, or a big consultancy is creating for them. So it changes the dynamics of communication.
Well, let's maybe use that as a bridge then into the subject that we're going to hopefully [00:20:00] interrogate a little bit today. So the background to this session is, you know, like I said, we've had Dave on the show before, we've had a couple of interactions, and I follow him on LinkedIn and a few other things, and I'd noticed that he'd been posting a little bit on purpose and responding to some posts on organizational purpose that others were making.
And very briefly, organizational purpose is set out to be something that, you know, gives an organization potentially a reason to exist. It potentially guides that organization in how it achieves its goals and the direction it wants to go in. It can sometimes be called things like a North Star.
And the idea is that an organization sets down what it believes in and heads in that direction. These things can be very easily understood in founder-based organizations, where generally a person has started a company because they want to do a particular thing, and that thing can be the purpose of the organization. Or, at the other end of the spectrum, I think [00:21:00] Google has set out its purpose, or it did previously, to make all of the world's information searchable.
We think it's interesting in the context of what we talk about on the show sometimes around organizational change and sort of new ways of working, if you like: in a very empowered and decentralized organization, being unified in purpose or direction can help an organization broadly head in the right direction, but without micro-managing everybody to do certain things. And Dave's been, quote unquote, playing around with the idea of alignment based on rich local interactions to achieve coherence in organizations, as an alternative to purpose statements, North Stars and the like. So Dave, before we dive into the depths of where you've got to with your thinking on it.
For you, what's the role of purpose in a business? Or feel free to replace the word purpose with any word of your choosing.
To generate money for [00:22:00] consultants every two to three years, to get a workshop of senior executives, produce a list of platitudes on the flip chart, and tell...
So it has got value then, that's what you're saying.
Also, all the political players will then do find-and-replace on their word processing document to change whatever they were going to do anyway, to use the new language of power. And so we went from value statements to mission statements to purpose statements. We've now got purpose. I mean, the books are out. All right. I mean, it's just complete and utter crap, right? Nobody actually has a purpose statement for their family or for their friends. Right. And there's a really good test: if you wouldn't do it for your extended family and friends, why the hell are you doing it for your employees?
Now, do you see the point about being a start-up?
If you start a business, you're doing that for a particular reason, you would think. Like, what would you say the purpose of the Cynefin Centre is?
Okay. Uh, we would never write it down. We wouldn't be that hypocritical. I think the other thing is [00:23:00] we kind of, like, mostly understand it. I mean, but purpose is generally, if you look at hunter-gatherer communities, they don't have purpose statements, but people understand what they're trying to do together.
And that contains what we call requisite ambiguity. Um, part of the problem we've got for the last 20 or 30 years, and you can blame systems thinking for this. Because going back to Senge, right, um, who popularized it for the first time, the idea is you handle uncertainty by defining where you want to be. Right, right. And of course, then you define where you want to be in text. In order to get consensus across all the players, you end up with platitudes.
There's nothing really concrete in it. It's just a generalized statement of goodness. So, you know, I sometimes call these 42 solutions, which is a reference to The Hitchhiker's Guide to the Galaxy and the entry for Earth: mostly harmless. All right. But the danger is, I mean, I still remember this when I was in IBM. I was at the board meeting and, you know, we changed the brand, [00:24:00] and the brand went from e-this to on-demand. All right, and I got asked by somebody very senior what I thought of this from a complexity perspective. And I was really irritated, because I'd spent nine months of my life working on this bloody thing with Ogilvy and Mather, and it was obvious my boss's boss would not give me any of the credit for it.
Never upset the Welsh or the Irish. We get really angry when we get this, alright? And we go into suicide mode. And I remember saying, it's a great way of finding out who you're going to fire. And there was this silence and somebody said, what do you mean by that, buddy? Buddy is a danger sign with American C-level, right? It's like a rattlesnake. And I said, well, anybody who's changed their slide deck from e-this to on-demand-this in six months is obviously a game-playing sycophant, so I assume you're going to fire them. Roar of laughter, because of course everybody at the table had done exactly that. Right. Because they know how to play the game.
All right. And you know, at least one third of them loved me for it and took me out for [00:25:00] dinner. The other third hated me, and the third were confused, but that's a reasonable ratio. Yeah. So, I mean, anything explicit will be gamed. So you have several problems. First of all, if you're dealing with a complex adaptive system, which you are, you shouldn't try and define the future state, because you're bound to get it wrong.
Yeah, what you need to do is define where you are and define a direction or directions of travel. You may actually need to have parallel paths or different paths, and see what works or doesn't work before you commit. So it's more important to understand the present than it is to actually have patterns of the future. And this is 101 physics, all right? Whatever has the lowest energy gradient is probably going to happen. So if you don't understand the energy gradients of where you are, you can't understand what is even possible next. So this idea that you can define a state and drag people behind it is just wrong.
In a situation, though, where you haven't got a [00:26:00] defined sort of destination, and you're setting up, say, one or multiple paths to experiment your way to something new, how do you avoid sort of confusion in that?
There's several things. First of all, if you look at traditional storytelling, so people who are consultants need to look at what has evolved in humans over centuries. We don't tell our children stories about how Janet and John stayed at home, achieved a family purpose statement and had their pocket money based on their KPIs or their OKRs.
Oh, I didn't know that. I've implemented that in my house. Introducing family KPIs, that's next on the agenda.
What we actually do is, all fairy stories tell the negative stories. Every single fairy story you read is a negative story, because each generation is sharing its failures. So one of the ways you create alignment is to agree what you don't want to be. And that's much easier to get consensus on than what we do want. And that's actually how most [00:27:00] ethics works, by the way. Sorry, my background is philosophy and physics, so you'll see both of those come through. So defining the stories about what we don't want to be opens up the possibilities to actually find places you didn't know in advance you wanted to go to. And that's the purpose in traditional storytelling.
We're not trying to restrict kids to what we think they should do, but we're passing on the stories of our own failure so they don't make the same mistakes. So that's one thing you can do. The other thing is, you can't homogenize purpose to the whole company.
Different parts of the company will have different goals, different objectives, different next steps. For sure. Yeah, I mean, a huge number of people don't want to be involved in the corporate. They just want to be told what to do, get their jobs done and get some money so they can go and do what they really want to do outside the company. I mean, it's an OD myth. And by the way, organizational development has a Stockholm syndrome relationship with their CEOs. It's an OD myth that everybody wants to be a highly motivated individual. I think it's the [00:28:00] overburden of Billy Graham type evangelism on the American management consultants.
Yeah. Everything is meant to be a transformation or a conversion experience, rather than pragmatic day to day activity. In a situation where an organization does want to move to something different, whatever that might look like, desperately trying to avoid words like transformation, but wants to move in a certain direction, and let's say that company has got a relatively decentralized decision making structure, or they've tried to, you know, devolve power down into teams? Well, then they can't anyway, but say more on that. Anything explicit will be gamed. The minute you write a purpose down or you state a purpose, people will find ways to say they're doing it; we're really very good at that. Yeah, um, on the other hand, and we can talk about distributed decision making later if you want, that's the big thing we're working on at the moment. Distributed decision making is not about delegating. It's about distributing to role [00:29:00] combinations. So what you really want is lots and lots of small decisions being made with fast feedback loops. Then you can decide which decisions actually, with the benefit of hindsight, go in the right direction, and give more energy to them.
Yeah. So that's really important because you want to find ways which are sustainable rather than ways which are idealistic. That I agree with. And in terms of this distributed decision making of a series of sort of fast decisions and then rooting out the good ones to set direction.
It's more complex than that. So we evolved to make decisions in sexual pairs, extended families and clans. Now, if you're in a clan, you conform with the clan values, because the clan has got to stay together. If you're in the extended family, the maximum number of active decision makers in an extended family is seven, and groups of seven or less will compromise.
So I ran a big workshop the other day. I've got doctors, nurses, hospital [00:30:00] administrators, social workers. Nobody will break away from their clan. If I take one person from each clan and put them into small groups, they'll come up with novel ideas. This comes from theoretical biology. It links with Dunbar's number, but it's got a sound basis. So that's one thing, and the other thing is we evolved to make decisions in roles, not as individuals. We're really bad individual decision makers, but we're really good in role combinations. And it's why in military environments, in civil defense, in medical environments, people make decisions in role combinations.
A surgical team is a combination of roles with ritualized entry into the roles. So what we do is, let's take a maximum number of five: we identify four roles who are relevant to a decision. The fifth role is then completely anonymous, so nobody knows who it is. That creates a panopticon effect, yeah, because if you're being [00:31:00] observed but you don't know who the observer is, you have to be honest. And those groups are authorized to spend $1,000, $5,000 without actually applying for a grant, provided they record the stories of context, the decision, and the consequence.
Alright. So that way, instead of having to decide where to allocate 50 million, I can allocate it in $1,000 amounts to people who can self-form teams across silos. And then the real money can follow the things which are working, rather than the people who are good at advocating what will work. And so you're watching for those teams to then perform at a different level, which will then be funded further. But it's role combination. So if somebody starts to succeed... so let me give you a development bank example.
We got one which we're about to experiment with, which is: if the village priest, the head person of the village, the oldest girl still at school, the youngest boy considered a man, and an anonymous agent from the bank agree, they can spend a thousand dollars on the village, and then the real [00:32:00] money can follow the village initiatives which are working rather than the people who are good at getting grants. Now it's a truism in the development sector that anybody who's good at getting a grant probably shouldn't be given one, and the same is true at a corporate level: the people who are good at putting forward ideas to get funding are generally saying something people expect to hear, not doing novelty.
Yeah, that's true. So what I now do is basically say, look, I've got this amount of money. If you can self-form teams and you do this stuff, you can spend the money, and then you can see what's working from the pattern of those. That's why we record the narrative continuously. Then you can see the patterns of what's working, and then the sense of direction evolves from what's pragmatic or what's practical, rather than being predetermined.
And how does that happen? So how are you recording the narrative in these cases to drive that, to identify those points of light? Okay, so there's two things. I mean, we're about to do one on end-of-life decisions for children in hospitals as well. So what we do is we run a one-day [00:33:00] simulation in which we put in all the roles which might be relevant.
And we run real scenarios on role combinations so we know what role combination is the most effective or role combinations. So that de risks the program. Then we set up the system. The recording is done in SenseMaker, which is our software. SenseMaker is based on principles of epistemic justice. It allows people to interpret their own stories.
So, for example, a big one we did in the Netherlands with the Leiden Institute on old people's homes. A nurse can literally take a picture of a patient, record an oral patient story, write their notes. The patient then interprets it, the nurse interprets it, so we can give real feedback on empathy straight away.
But I've got quant data explained by stories at scale, and I'm not dependent on text, which is the AI problem. Yeah, right. And we first applied that with the U. S. Army in Afghanistan, where we said to company commanders, you don't have to write a patrol [00:34:00] report if you record narrative continuously while on patrol. So what we then got is we got real time data coming in from human actors to go with traditional sensor data. And that gave us a huge improvement in early warning on improvised explosive devices. Because human beings work abductively. AI works inductively. It's why art comes before language in human evolution.
We use semiotics and abstractions to handle situations for which we have no training data. Yeah. So human beings evolved to handle situations without that. So that's how SenseMaker works. So you can speak, take a picture, write something, or any combination; you then place it onto a series of triangles; there are other shapes as well. The triangles all have positive qualities, so you don't know what answer the researcher is expecting. And that's quite deliberate, because it creates a cognitive load, it creates an anomaly. [00:35:00] And the only time your brain actually engages in a decision is when there are anomalies. So you see people filling out Likert scales.
They just tick the boxes, you know, four, four, three, three. If you give them six triangles, it takes them less time to complete, but they have to think about the placement in the triangle, and so that gives us quant data. It feels to me, at least engaging with this stuff for the first time, that it's a complex thing to get your head around. And one of the things that something like purpose or setting out end states does is sort of simplify and allow communication. How in your system is that happening? There's a big difference between simplifying things and making things simplistic. Um, all right. Um, and this is what I call the tyranny of the explicit. It's the desire to remove ambiguity. Human beings actually find ambiguity much simpler to handle. So for example, I'll give you one of the triangles we use in employee satisfaction. Instead of saying, [00:36:00] does your manager consult you, on a, you know, scale of one to five, which has a hypothesis, we say, tell a story: what would you tell your best friend?
If they were offered a job in your company? And then people get four triangles, and one of the triangles says: in this story the manager was altruistic, assertive, analytical. Now all of those are positive, so we don't cause any stress; they're describing something, indexing on the positive indexes. But then, as a manager, you see: oh my God, I'm all analytical and assertive, I've got no altruism, so I need more stories like these, fewer stories like those. And that's called a micro nudge. Right. And the trouble is, the minute you run a conventional survey or the minute you have a purpose statement, people know what qualities you're looking for. Therefore you can't trust the feedback loop. Yeah, I mean, the other thing we do on this is, we'll present the current situation to the whole of the workforce, this is described in the EU field guide by the way, and get everybody to interpret it in [00:37:00] a 10-minute period, then we look at the patterns of interpretation. What that does is it identifies the 10 percent who've seen a gorilla, if you know the famous one about, you know, 17 percent see the gorilla, 83 percent don't. You want to find people who are seeing the world differently before you determine where you go, because you'll miss opportunities. The nice part in that is that you don't get the corruption associated with 'I know what you want, I'm going to give it', like the operations report at the end; you get corruption in that because they've had time to think about it, whereas you're getting the raw, unadulterated info. Versus the triangle bit with the manager, where they're forced to tell the truth. And it's scaled and it's real time. I mean, I remember once working at one of the big utilities and I was covert, I was actually doing field ethnography, so everybody thought I was a sort of, you know, apprentice in the yard. And the manager came in and said, okay, we need, you know, it's balanced [00:38:00] scorecard time, guys.
This is the score I need this time, so that people believe we're making progress, and this is how you answer the questions. All right. And it's, oh God. And that's very common. Dave, just a few sentences ago you were talking about lower energy; that triggered something. Can you elaborate on that?
What do you mean by that? Yeah, so this is 101 physics, alright? And this is also from constructor theory in physics; we developed a whole set of methods based on Deutsch's work. Whatever has the lowest energy gradient is what's going to happen. Yeah, water flows downhill, it doesn't flow uphill, and as I say, energy optimization is what evolution is about. The reason you don't use your cognitive brain, and this is an anomaly, is it consumes a huge amount of energy, so you don't want to be consuming that energy unless you've really got to. Yeah, so one of the things we map, for example, before you think about the purpose statement, is, if you know actor network theory [00:39:00] from Latour, okay, so, you know, basically it's not just human beings who have agency.
So we look at actors, which are humans. We look at constraints, which can contain things or connect things. And then we look at constructors. The constructor is a really important concept in physics, in quantum mechanics: it's something which transforms things but doesn't itself change in the act of transformation.
So a software object is a constructor; a process is a constructor. So we map the actors, constraints and constructors, and we place them on a grid between the energy cost of change and the time to change. And of course, anything top right is going to take too much energy and too much time to change. And by the way, that's generally where purpose statements end up when we do this retrospectively; people get idealistic. And then you focus people not on trying to achieve a goal, but on changing the energy cost or the time to change things they want to change, either to make it more difficult or to [00:40:00] make it easier. So they change what's called the substrate. So there's no point in having a plan to be the world's best cactus grower if you live in a paddy field.
So what you first do is you actually say, what's the soil? What's the irrigation? How do we change that? So it's more favorable to what we want to do before you make a commitment to what you want to do. Yeah. And again, you know, we talk about three things, which we talked about so far. One, that's called an affordance map.
What's afforded to me by the environment I'm in. The second is an assemblage map of the narrative patterns: belief systems and attitudes which are going to influence what people think anyway, which will generally be unarticulated and unstated; they'll just be understood. Yeah, it's the way we do things around here.
And the other is what level of agency have you got? Those are the three things you then try and modify. In different pockets, I assume, because it's like the sum of all the... We do fractal maps. So we're always gathering things bottom up, so, you know, stimulation bottom up, and then we can [00:41:00] represent the data at the levels of people's management responsibility. Right.
So you never have a company-wide program. Yeah. Everybody has got their own map, and yeah. Okay, you may have a common 'we all want to move in this direction', but where you start from determines where you're going to go anyway. Yeah. Sorry, just as you go through that process with groups, how many realize they've got it completely wrong and need to go in a completely different direction? It feels like that must be quite a high number, because you're going to some very core fundamentals. You can get big major kickback, and I'll give you a development sector example. So we did work on, um, genital mutilation in Africa, and we did another project on Roma conditions in Hungary.
Now these are really intractable problems. If you know anything about the development sector, you know these are big ones. Yeah. So we used Roma kids, yeah, as ethnographers to Roma adults, and we used girls who've been [00:42:00] subject to the horror of genital mutilation as ethnographers to people at risk and the adults in their communities. So we didn't have anybody from outside involved in this. Yeah, and that's epistemic justice, right? And then we got patterns of stories which were interpreted the same way, and we presented those to the experts in Vienna for the Roma project, and in Washington and The Hague for the genital mutilation project. And we said: index these stories the way you think the people you're trying to create policy for index them. And then we showed them the difference. Yeah, alright. Massive. Massive. Now, there are three reactions to this: the stupid one, the good one and the evil one, right?
So the stupid one is: oh my god, we need to see it the way they do. Well, yeah, if you went and had a horrendous operation and lived in poverty, you might see it the same way; you can't. The evil one was what we got from the anthropologists in Vienna: well, they're wrong, we're right, we're the experts, we understand their [00:43:00] stories. Wow. Oh, yeah, that is actually very common, particularly among social scientists. Social scientists do not like this method, because they used to be the experts in telling you what your stories mean, right? Is that the difference between the kind of top-down analysis of 'I'm going to frame this'?
It's actually okay to frame it. You want the data to be authentic to the people who are telling the story. You want it to be quantitative, not qualitative. Yeah, the quant matters, right? But then you can go top down and say, I don't want that sort of story, I'd like these sorts of stories.
That's okay. So the whole top down, bottom up triangles, they don't work. You want something which is much more messy and entangled.
Let's take that then, and just try and summarize from an organizational perspective where I think we're up to. Which is: using a series of bottom-up conversations to draw out the narrative for varying different teams, then seeing which of those teams [00:44:00] are performing at a higher level, and then increasing the sponsorship of those teams. And that creates an emergent direction. The bit that I'm not getting is, like, who's setting direction in the absence of a North Star? Now let's look at three things, alright? So, you need to map three things. One is you need to see what's even possible to change. So don't start off with a workshop where you decide what you would like to be; start off with: what the hell can we change anyway? Okay, that's called affordance mapping. Secondly, you need to understand the attitudes and beliefs, and that's the day-to-day stories of the water cooler or the school gate. It's not what people tell you in a survey or tell you in a focus group, because they will be gaming the response. And the third thing is: how do you determine agency?
The process I talked about, role-based groups of four or six roles with one anonymous role to make it up to five or seven. That's a way of saying, [00:45:00] well, we've got no idea what we need at all, or we've got limited ideas. So if you can assemble one of these teams, you can spend this much money, provided you give us feedback. And then we'll decide where the real investment comes. So that's more the intervention side. But what you're doing is you're saying: where are we? What can we do next? What are the constraints? What are the constructors? Yeah. And by the way, we're now making that available for individuals. So I did one with a big bank in Latin America recently, and the executives went away that night and came back having worked it out for themselves the next day. Correct. So rather than talking about inner and outer and subconscious and, you know, inner purpose, which is bad science, by the way, there isn't such a thing as a subconscious, what they did is they said: what's the environment in which I'm working, what can I change, and how will that then change me? Yeah. And I think the thing is, where we're coming from now on modern science is, we've had this very linear causality, and it came [00:46:00] from systems thinking: you drive a system by deciding your goals. And my point is that is a priori impossible if the system is a complex one. So all you're going to do is pervert the system.
By what means then do you change the direction of a business? More stories like this, fewer stories like those. Right. Right. So you're literally skimming from your teams a series of things and just saying, like, more of that, guys, less of that. Yeah. I mean, sometimes you have to make big decisions. I had to make a big one in our business the other day. So sometimes you just have to do it, all right, and say, well, we're not going to do that anymore, we're going to do this. All right. But you don't delude yourself that that's some sort of democratic process, it isn't. How does that come up to me as a decision maker then? Is it like a load of sound bites?
What does it actually look like in terms of data? If you're doing things properly, you don't have to do it very often. Hmm. Because, yeah, I mean, what we [00:47:00] do, for example, is, like we did with the American army, you can say to people: you don't have to do reports anymore, provided you keep continuous narrative capture. Because what I don't want is after-action review or lessons learning, because people don't remember things the way they happened. I want real-time capture. So I save people time. Right.
Yeah, now that means I've got three benefits. First of all, I'm seeing the stories as they come in continuously so I can look at customer interaction stories. I can look at internal politics and I can see patterns in the stories, which will tell me what's going on much better than consultation, right?
So that's one benefit. The other benefit is I'm building a network, so I can say: okay, if we did this, what do you all think? And give people half an hour to respond so nobody can talk with anybody else. All right. And that means you create a normal distribution rather than a Pareto distribution in the results, and that can show me dominant views and [00:48:00] minority views, and I'll find outliers. Yeah, and the outliers are the people I go to; these people have seen something everybody else is missing, I need to talk to them. Right. Right. And then the third thing is, sometimes you just have to make a call. That's what leadership is about. For sure.
How does that relate to trust? Um, because if you, you know, if you want people to be open in whatever they think, you know, do you see them struggle with that?
No, trust is an emergent property. Yeah. If you, if you say to people, how do we create more stories like this, fewer stories like that in your workforce, trust will emerge from that. Yeah. If you, if you say we need to be this sort of company, the political game players will dominate and everybody knows the political game players are dominating.
So they get pissed off and they trust you less. All right. It's like when we teach executives corporate communication: we ban PowerPoint completely, we give them templates, and they actually fill the template with anecdotes of their own real experience. And then we take them through five or six iterations [00:49:00] until it sounds like a coherent story, and they can replace the anecdotes, so we make it authentic. If we've built a narrative database, they can use other people's stories.
Yeah, they're not making up something fictional about where the future should be. They're saying: this is really happening, can we do more of this? Yeah, I mean, that's going to generate trust in this model.
What is the leadership conversation then? So how the decision presumably of I want them to be doing more of that versus more of that, that remains a leadership directional conversation? Yeah. So for example, we'll play around with grids using the data we've got. And then people may say on the grid, I'd really like the dominant pattern to be up here. Right. Yeah. So that's cool. So right at your level, what would you do to do that? So you're, you're actually changing the system by micro engagements. Now, that doesn't mean every now and then you don't have to do a macro change that happens, but that's where leadership really comes into [00:50:00] play. And to be honest, you're, you know, you might as well do something at random because it's as likely to work as anything else.
So what is the pace of change through a process like this? Is it something that ekes out over...? No, it's actually fast and early, because if you're gathering stories continuously, you know, using metadata, then you find the 17 percent before they talk with the 83 percent. Right, because if they talk with the 83 percent, they'll conform. People don't like to be different.
If I distribute decision making, things will happen which I didn't expect would work, but now work, so I can respond to them. So I'm trying to create serendipity and surprise, continuously, at the level where I can handle the risk. And presumably the leaders in the organizations that are going through this need to be communicating quite loudly.
These are the, these are the things we'd like more of. [00:51:00] No, they could distribute that, or they could actually ask people to come up with it and tick a box. Oh, right. Yeah. I mean, we, we can represent it back. I mean, the, the whole point is, I mean, one of the ways you sell it to executives is, first of all, you spend too much time making decisions.
Not enough time thinking, right? So, if we do this... let's do a couple of things. First of all, let's map what's actually possible to change. Yeah, that's a workshop of two half days with the key players. Why wouldn't you do that? Because it's going to reduce the risk and cost of change. Right. Yeah. It took me a lot of time to get it down to two half days, but we're there. Right. So the first day they do the map, the second they do the actions, with an overnight in between, which turns out to be key. And then you can say: look, we can launch this, we can find out what people really believe without them knowing what we're looking for. Now, once I've got that, let's sit down and have a conversation about what we can do now.
Yeah. And it may well be things are already happening and you're not aware of it. I [00:52:00] mean, the biggest threats to a company, if you use the uncertainty matrix, are the unknowable unknowns, and it's increasingly important to architect an organization to handle those; they're really significant. And then there are the unknown knowns, the things which are known in your organization but not known to you. So this is a way of flushing those out. The bit that I'm... I've probably been far too literal here, but I'm trying to get my head around: if I was a leader in one of the organizations that you describe, how am I actually taking in so much information? Because it sounds like... Oh, it's easy. No, it's a doddle. What does that look like? You're using visualizations. We show you fitness landscapes, which are like contour maps.
Yeah. And they show dense attitudes and weak attitudes. Yeah. And we've done a lot of work on that. The analysts can create the frameworks, but you don't depend on the analyst to tell you what they mean. So, for example, one of the big problems with employee satisfaction surveys is the consultants who create them spend a lot of money on [00:53:00] interpreting them; executives don't know what it means. We gather stories at scale. They look at a pattern in the stories, they click on the model, they read the story. Yeah, that way they can engage with the problem. When you zoom right out of it, Dave, is what you're saying that instead of treating an organization like this sort of malleable thing that you can point in whatever direction you want, and put a little tag on the front of it to say we're going to do X, Y and Z for the world, and then everybody will either believe it or they're going to game it anyway... instead of treating an organization like that, and an org chart, are you saying treat it like a collection of human beings out of which you're going to be able to derive...? Yes and no. You're treating...
I mean, first of all, you need a hierarchy. If you don't have a hierarchy, the alpha males will create one, or, if it's really scary, the alpha females will, and you don't want that. Right. So you want to design it, [00:54:00] right? But then you need to build informal networks. So, for example, we use trios for that: a young coder with a systems analyst with a user trained to talk to IT people, and instead of sending out systems analysts to interview people, we'll throw 20 of those trios at a problem. And by the way, it's easier to train users to talk to IT people than the other way around. Now, that gets richer data than the analyst, it costs less money, but we're also building networks between users and IT people which wouldn't otherwise exist.
So, informal networks are more powerful than formal systems in decision making, particularly in a crisis, so we deliberately stimulate their formation. So you need the formal system, the informal system, and you need to constantly map the energy gradients in the system, and life then becomes a lot simpler as an executive. I mean, we can achieve this in 18 months: we can have it so that anybody, through a [00:55:00] trusted network, can get access to the CEO. Yeah, so, you know, I know I'm within three degrees of separation of the CEO, so I can convince this guy, who already knows me and likes me, and then I'll get to the CEO. That's a healthy ecosystem, right? I'd say yeah. And then, if you understand what's really pragmatic and you can engage people in micro changes, you're dealing with a complex ecosystem, and it's not just the people. It's the processes, the physical building. Yeah, it's the environment people live in. It's where they go home at night. I mean, we've been doing work with one of the big oil companies, because nobody wants to work for them.
Nobody wants to go to the pub and say I work for big oil. All right.
Yeah. So how do you manage that? And that's actually been a really interesting project, because what we did is we presented the greenwash statement and we presented the Greenpeace statement, and we asked employees to say which they believed most, anonymously, and looked at the results. Wow. And one of the things which came out of that, I need [00:56:00] to be careful what I say at the moment, is: you should stop talking about what you want to do and start doing things. Yeah. So one of the things I'm hoping they'll do on that is release 10 percent of their engineers' time to work on global warming.
Right. Yeah. Because for example, refreezing the poles is an engineering problem, not a science problem. It gives 10 years breathing space. So you know, actually that would cost you less than these big marketing campaigns and lobbying campaigns, but it's, again, you're changing the substrate.
So that the things you want will take less energy. That's the big paradigm shift we're going through at the moment. That's the systems-thinking-to-complexity-science shift, which is as big as the scientific-management-to-systems-thinking shift was in the eighties. And maybe, just to bring us to a bit of a close for this section of the conversation at least, have you got an example or a case study of an organization that you've worked with at scale that's gone through this process, and what kind of results it's been getting? [00:57:00] There's loads on the website if you want to have a look, but you won't find a complete recipe. What you'll find is elements. It's like the work we're doing with Hexi at the moment: breaking everything down into small parts. So the way you scale a complex system is you break it down to the lowest level of granularity, then you recombine.
You don't imitate or copy. Yeah, so that's why, like a chef, you can recombine ingredients; you don't have to follow a recipe. So you'll find examples of all the things I've talked about there. Uh, the distributed decision making, that's really new; we're running that experimentally at the moment. And just to tell you how we do things: we don't start with empiricism.
Yeah. I mean, that's a fundamental error, because, sorry, this is my physics background, we always used to say social scientists suffer from physics envy, which is a deliberate play on words. They never have enough data to form any conclusion, and the management consultants even less. Yeah, right. So what we actually do is we play around with natural science: physics, [00:58:00] biology, the biological end of anthropology. And there's one day where it all kind of comes together and we can see there's something we can do as a method or a tool. That normally takes five or six years. So that's like the theoretical physics; then it goes to the experimental physicists, who are the ones whose math wasn't good enough to be theoretical physicists, sorry, no prejudice there, and they play around with it for four or five years. Then, when it's boring, we give it to the engineers. So that's a different approach to developing methods and tools. And it handles uncertainty, because natural science is subject to peer review and experiment; social science and management science aren't.
And it's like, you know, there's a major crisis in psychology at the moment, which given that all OD is based on current psychology, it's called the crisis of repeatability. Nobody can repeat the classic experiments. Why is that? Because there are too many variables. I mean, remember I said they suffer from physics envy? Yeah. They reduce the variables so they can produce correlations. [00:59:00] Yeah. You can't have a correlation in a complex system. Mm-hmm. Right. Or if you have a correlation, it'll be misleading. And the ones I always give as examples: one is, if a country wants to increase the number of Nobel Prizes it wins, it doesn't need an educational system. All it needs is for more people to eat dark chocolate, because dark chocolate consumption per head of population directly correlates with Nobel Prizes per head of population for the last 50 years. That is a bigger data set than anything you'll ever see in any social science paper.
And the one which I believe is causal is that peaks in attempts to commit suicide by drowning directly correlate with Nicolas Cage movies, but I can see a reason for that. There's a guy on the internet who tracks those. Oh, there's a whole spurious correlations website. It's brilliant. It's absolutely outstanding, with things like that. But how many agile presentations, how many social [01:00:00] scientists, how many management scientists and management consultants go and interview a bunch of executives, and from that they produce a framework? Now, you can't trust what executives say about the company anyway. Every time I've done ethnography, the reality doesn't match what the executive thinks is happening.
And from that, they create a recipe. I mean, that's just fraud.
Ez, have you been looking at anything in this space? Well, 40 next year. Uh, yes, uh, I know, uh, probably the other people in this room are. I've just turned 70, so you've got nothing on me. Well, I don't mind getting older. I even experience it as something good, you know, you're getting more relaxed and, you know, a lot of things slide off your back more easily. And in the past 10 years, I've been doing quite some soul-searching on different levels, trying to find my own purpose, now [01:01:00] that we've already talked about it. And I was looking into the book, uh, about Ikigai. I don't know if you know it; it's the Japanese secret to a long and happy life.
And I was reading through it, and I was actually thinking about it. I'm very curious about Dave's perspective on this, especially knowing what he just talked about: what if you just change the 'you' into the organization? Like it's the sum of all parts. To talk about Ikigai, it's like the combination of doing what I love, doing what I'm good at, like a profession.
Um, combining it with what the world needs and what you can actually get paid for. If you have that combination, you can find your Ikigai, and it obviously has some guiding principles. You've got several problems. One is it's very culturally specific. Yeah. Okay. Yeah. All right. So even if I come into Europe, there is what I call a North Atlantic culture, which is Northern Europe, North America, which is very much focused on [01:02:00] individuals and individual change. Yeah. Whereas Southern Europe, the Celtic fringe of Britain, Africa, Asia are communitarian; they're collectively based. Japan is very, very collectivist: you're defined by your interactions and your social status. Yeah. So that is not going to transfer into an American or Northern European context. The other point is that, sorry, I mean, it's a lovely idea, but, um, most of the time you've got to work to earn money.
And there's a lot of myths around Japan. They do have lifetime employment, but it's only for a very small number of people. Yeah, and I think the danger is, and that's why we say you break things down, you rebuild them. So, lifetime employment for key people, and I'll give you an illustration: when we did the turnaround on Data Sciences, we were down to a penny a share.
We couldn't pay the wages. Yeah, it was difficult. Five of us went to the VCs and told them the truth, and the next day we were the only five left of the original buyout team. Yeah, we then did a [01:03:00] huge turnaround on that. All right, and that was driven by an executive, yeah, who actually understood how to do it.
So he reduced everybody to tears and then picked six of us and said, you're guaranteed a job, whatever happens. If the company goes under, you know, I'll take you with me. You don't have to worry about that anymore. And then we got really innovative. One of the things I really enjoyed: we had to lay off a third of the workforce. Sorry, I've lived through recessions. Yeah. Yeah. Agile people have never lived through a recession, and now it's happening. It's quite amusing; they've got no idea how to bloody manage it. All right. So anyway, we had to lay people off, and it was a cultural issue. We wanted people to work together. So what we did is we declared a rat infestation of the building and closed it down over Easter with an extra day.
And what we did, just me and Mike worked on this: we removed every wall in the building. [01:04:00] Yeah. So when people came in on the Wednesday, there were no walls left. We hadn't moved their furniture or their filing. It was a big open-plan building, but the walls were all temporary; they weren't structural walls, right. And then we actually observed who rebuilt walls with filing cabinets; those are the ones we fired. Now, we got HR to write it up in different terms, but that's an example of using the ecology to expose things. Yeah, rather than trying to work it out in advance, we basically set up the environment so that people who hadn't got the culture would be visible very quickly.
That reminds me of that episode of The Office where, uh, Gareth is trying to, you know, kind of stack things to stop Tim putting stuff on the edge of his desk. Exactly that. So, Ez, did that help you out? Yeah, definitely. And, um, as I hear Dave talk, I'm actually curious, it might be a personal question, but do you feel like you've found your Ikigai? [01:05:00] Um, when I'm in the mountains, that's my Ikigai. So, I'm Welsh, alright? As far as the Welsh are concerned, it's, um, the place where you feel you belong. Right. Yeah? And there's a Welsh word, hiraeth, which the English translate as nostalgia, but it doesn't mean that. It means a desire to return to a place which probably never really existed the way you remembered it, but you want to go back there. Wow. Rose-tinted glasses, you mean? No, not rose-tinted, that's classic bloody English, alright? Yeah, missing the poetry of the concept. Sorry. But this is the other point: languages are very different, alright? So, in Swedish, there's no difference between efficiency and effectiveness. Oh, that's interesting.
Yeah, so they have to qualify it; they use the same word. All right. Yeah, there are lots of those, you know: Welsh has hiraeth, it has hwyl, it has cynefin, alright, which doesn't translate as habitat, right? And [01:06:00] language is a key part of culture. It's like hygge: you can't translate hygge from Denmark. Yeah, the Welsh equivalent is hwyl, but that means something very different. Yeah. So I think the danger is what people are looking for: it's the patent medicine market. People love somebody who sells patent medicine. Yeah. So here it is, the magical mystery, you know, take this and everything will be all right with the world. All right. And the only solution to that, the Americans worked out a long time ago: it's called tarring and feathering.
To build on his question, though: I wonder, if your cynefin is in the mountains, well, you're clearly a very motivated individual when it comes to your work life. How have you brought meaning to that?
That's because I enjoy what I do, alright? I mean, I've always done what I wanted to do and to hell with the consequences. Alright? And I've got away with it because I've had good managers. And my sister, who is probably brighter than I am, she got a better degree, she just wants to work 9 till 5, she'll do overtime [01:07:00] if you pay her. She doesn't want to be motivated. Her motivation comes from things she does with friends in private. And this idea that there's one universal framework is just wrong. Different people work in different ways. I'm lucky. I took a lot of risks and I've only ever had one bad manager and he wasn't very bad.
Yeah. And everybody else gave me room to experiment and gave me top cover. Different people work in different ways and different people are satisfied by different things, you know, for me, I'm never going to retire. If I retire, I'll die within six months. I know that I want to do more work and I want to do another Camino and so on, but not engaging with developing new ideas is just alien.
Whereas other people just want to retire and do other things. So recognize differences. Well, look, what a wonderful note to end today's discussion on. Dave, it is always a pleasure and a challenge to talk to you, so thank you so much for spending time with us. Pleasure. We end every episode of this podcast by asking our guests what they're excited about doing next, and that could be, [01:08:00] uh, you've got a walking holiday coming up, or it could be something in your professional life, or a bit of both. So Dave, what are you excited about doing next?
It's probably three things. One is we're now developing distributed decision making to the point where it can go into engineering. And we know, for example, in the NHS, we could take out 50 percent of the bureaucratic cost of a hospital if it works. Oh, wow. That's quite substantial.
Yeah. And the same in hospitals. So we can have decisions made now rather than after three weeks of filling out forms and making arguments. And we're looking at things like end-of-life decisions for children. So that one's got me really excited. I think the other big thing is it's our 21st anniversary as a company this year. Oh, congratulations. And we've managed to build a software product without any investment or any loans, so not many people can claim that. So we've got a big retreat coming up in March, um, in Eryri, [01:09:00] or Snowdonia as the English call it. All right. The Welsh name is Eryri. And that's looking at how people exist with each other, with the planet, and with the environment.
And then we're going to do two or three days of walking after that around my 71st birthday. So that's the sort of second thing I'm excited by. And then the big one is the one we're now working on, which is Children of the World Project. So we've done this in three countries. So we proved the concept, but we're now looking for funding.
Because we want every 16-year-old in every school in the world to be an ethnographer into their community every week. So instead of relying on algorithms to tell you what's true or false on the internet, we change the input so we can rely on the input. And then we can engage young people in how we create more stories like these and fewer stories like those, at scale, with government. Right. And that's what we're now looking for funding for. If I get that right, that's my equivalent of Jimmy Wales creating Wikipedia.
Cool. Wonderful. Well, [01:10:00] very good luck with all of that. And again, it's been a real pleasure talking to you today. If you would like to discuss any of the issues on this week's show and how they might impact you and your business, please get in touch with us at cloudrealities@capgemini.com. We're all on Bluesky and LinkedIn.
We'd love to hear from you. So feel free to connect and DM if you have questions for the show to tackle. And of course, please rate and subscribe to our podcast. It really helps us improve the show. A huge thanks to Dave Snowden, our sound and editing wizards Ben and Louis, our producer Marcel, and of course to all our listeners.
See you in another reality next week. [01:11:00]