Exploring how humans connect and get stuff done together, with Dan Hammond and Pia Lee from Squadify.
We need groups of humans to help navigate the world of opportunities and challenges, but we don't always work together effectively. This podcast tackles questions such as "What makes a rockstar team?" "How can we work from anywhere?" "What part does connection play in today's world?"
You'll also hear the thoughts and views of those who are running and leading teams across the world.
[00:00:00] Dan: As you know, this podcast is all about how humans connect to get stuff done together. But if you were to believe the hype around AI, you could be mistaken for thinking that humans won't be doing anything for very much longer. Are we all going to be replaced, and the AIs will just talk to each other and do all the work? Are humans about to be cut out?
[00:00:18] Dan: It's all hyperbole, of course, but it does leave us with the challenge for teams trying to figure out how to navigate a very unclear future. Here to bring some clarity and some sanity is Suzi O'Neill, consultant, author, and speaker on Frontier Technology. Spoiler alert, the future is not as apocalyptic as it might sound.
[00:00:44] Dan: Hello and welcome back to We, not Me, the podcast where we explore how humans connect to get stuff done together. I'm Dan Hammond
[00:00:52] Pia: And I am Pia Lee. Dan Hammond, how are you doing?
[00:00:55] Dan: Well, thank you. Friday morning. Friday night for you.
[00:00:58] Pia: Friday night's pizza night, fire's on.
[00:01:00] Dan: and you are on duty.
[00:01:01] Pia: Oh, well I'm on duty, but they are making it, so that's good.
[00:01:05] Pia: I'll go and check the fire's going.
[00:01:07] Dan: That's so nice. It's a slightly disconnecting thing, but it's wonderful for us, isn't it? A totally different vibe, but it's amazing. Um, so this show today, I feel, is almost about the topic of the moment, obviously, which is AI. You could look at it and think, oh God, there's another podcast about AI. But I feel our guest today, Suzi O'Neill, with her deep experience of so many fields, including now looking at the adoption of AI, is really going to be able to shed some light on matters for us.
[00:01:40] Dan: It's very easy to get stuck in the detail of what's gonna happen, or looking at the macro picture or whatever. But the key is to actually think about us as humans and what we're actually gonna do about it. So why don't we dive straight into that, and then you can get off and, um, make those pizzas.
[00:01:59] Pia: A really warm welcome to We, not Me, Suzi. Very excited to have you on the show today.
[00:02:05] Susi: Great pleasure.
[00:02:05] Pia: And, um, a big topic: to talk about AI today and, uh, its impact on humans. So, a really good one for us to get stuck into. Before that, it's the customary cards exercise, so I'll hand you over to Dan for that. Is it green, is it orange, or is it red?
[00:02:27] Dan: It's red, and it's this. So we have three grades, Suzi. The green ones are quite easy, for teams to get to know each other, and the reds are for getting to know each other quite well. And this one, I dunno how you feel about this: I instantly take against people who…
[00:02:42] Susi: I'm trying to adopt a bit of Zen in my life, and tolerance, because I think it's important to. I interact with people on all parts of different political spectrums particularly, so it's very easy to say, this conflicts with my view of the world, but the more you understand it, the better it is. I mean, obviously there are lots of people politically I disagree with in the world. Um, but the only one I'd probably say deserves to be put in the bin right now in this country is Nigel Farage. But still, if people believe in him and wanna vote for him, I think it's important to have a tolerance and a dialogue. And that's how we can make change: not by standing against things, but with people.
[00:03:23] Dan: I do think there's a difference, isn't there, between the, the willful architects of what's happening now and the unfortunate people who believe what they say.
[00:03:31] Susi: That's the world we are living in. You know, this is the whole way AI and disinformation is spreading. It's not necessarily people's fault for believing in certain paths or tropes; to a certain extent, we are all subject to our own bubbles. And that's why I think it's quite important. For me, it's just, I mentioned before this, I just went to a local business event thinking, oh, I'm not gonna meet people who are very relevant. I actually met loads of people that are relevant, 'cause we have something in common: we live in the same geographic area, we're all business owners that face the same challenges. So it's important to just get out of your bubbles.
[00:04:01] Dan: Yes, definitely. And they're all humans. Yeah, exactly. Which we'll come back to, I'm sure. Um, so Suzi, it is great to see you. Give us a little bio in a box, could you? How'd you get to this point?
[00:04:13] Susi: Yeah, I've done many things, uh, Dan and Pia, but right now my focus in life is this: I see the new technologies, AI and other emerging tech, coming out there and getting a lot of hype, but what I see is that the business implementation isn't really working. And that's because a lot of organizations are thinking the tech will get rid of people.
[00:04:34] Susi: Bypass us pesky humans. But actually, it's the people that make tech work. Um, so I'm helping organizations with what I would call inclusive AI. That's helping them with adoption: not telling them how to do prompting or tools, but thinking about what the human perspective means. And inclusive AI also means getting everyone in the organization involved, at every grade level, from different demographic backgrounds.
[00:04:57] Susi: Raising all boats so everybody can benefit. And there's a lot of research we can dive into about why we've got a lot of disparity. So specifically, it's about helping people in the workplace make AI and other emerging tech work better for them as individuals, as teams, and as organizations.
[00:05:13] Pia: So what are you noticing about how well people understand AI when you first engage?
[00:05:19] Susi: Yeah, that's a fascinating story, because part of the reason I think we've got a challenge is that AI is seen as a very nebulous thing. I mean, in broad terms, it can mean anything from machine learning to natural language processing, generative AI, automation, machine processes. So depending what industry you're in, it's very different.
[00:05:38] Susi: But going right back, I've been doing some talks. Um, I work with schools, where I've gone in and talked to them about my wibbly-wobbly career path from being an arts graduate to working in technology, marketing, and communications. And what was fascinating with the very young ones was, I said, I work with a thing called AI, artificial intelligence. I used this as a bit of sneaky research, and I said, what do you think that means?
[00:06:00] Susi: Now, the eight- and nine-year-olds would say… Then I would challenge them a bit: I'd pick up my phone and say, what about in a phone? Could you have AI in a phone? And the 10- and 11-year-olds were like, yes, you could. And then they'd say, and my mum helps me use ChatGPT to do my homework. So the challenges are already there in schools, and in how we interact with these technologies.
[00:06:21] Susi: So I think there's an understanding even at a very young age. But what I found a little bit dystopian about the visit to the primary school was that when I said to them, how do you think this is gonna affect you when you go into work? they were already saying, well, it might cut our jobs. And I said, well, what about the jobs that don't exist yet?
[00:06:39] Susi: So I thought it was a little bit dystopian that by the age of 10 and 11, kids were worried about losing a job to a technology when they're a long way from starting work, for jobs that, as we know, in 10 years' time will be completely different sets of jobs. The work I do around content marketing and AI strategy just didn't exist when I studied at university.
[00:07:00] Susi: So there's something there around needing to work at a very young age, uh, with the schools and the colleges, to build positive engagement with technology. There's, hopefully, touch wood, some good momentum going on now with education and tutoring tools that stop AI being a homework machine.
[00:07:18] Susi: But it is really at that point, um, that we need to start that education and that AI literacy, at a very young age. And in China they're doing it: mandating it in Beijing schools, at those primary school ages.
[00:07:27] Susi: So we have to play catch-up when we come into the workplace. Again, I think it's about breaking it down, rather than making general statements. A lot of CEOs are saying, do AI, you know, use the tools; if you don't use them, you're not getting any more resources or budget, and that's the way it is. But that's quite frightening for people, particularly if you're not already in a tech or tech-leaning adoption role. Because what the hell does it mean, do AI? Again, it's too vague.
[00:07:53] Susi: So when I've worked with teams, I've broken it down and said, well, we've got access to this tool, which might be, unfortunately, Microsoft Copilot, so I want you to do 10 hours in the next month. Block it in your calendar, so it's there as dedicated training time to go through these tutorials, and then think about what tasks you can start to experiment with. So: providing some very strong guidance, and then psychological safety.
[00:08:16] Susi: That's really basic. Ideally, we want to have a much more sophisticated training layer, um, and to work with teams to look at their use cases, and have consultants and the like come in and help teams with that.
[00:08:28] Susi: The further upstream you can get as an organization and as an individual, the better: saying, yes, I want to learn these tools, but what am I actually trying to achieve? What is the end goal? It might be just playing around and experimenting, seeing what works and what doesn't. Or it might be more tactical once you've done that experiment: saying, okay, I can see we have these very repetitive tasks that I as an individual or we as a team do. How can we start building in a layer of automation or a layer of insight?
[00:08:56] Susi: And that's, for me, one of the big gaps we have with AI. Everyone thinks the message is productivity: saving money, cutting jobs. That's the worst use of AI, 'cause everybody's got access to the same tools and they've all got the same ideas. And frankly, it's all a bit of baloney, all the, you know, saving 20 hours a week. I mean, every study will say, a better document finder saves you 10 hours a week, this will save you 20. If I added up all the surveys that told me what time a tool would save me in a week, it would add up to more than the actual hours, let alone the working hours. It would add up to hundreds of hours a week I would save by buying a few tools.
[00:09:30] Susi: So we know that technology isn't the answer. It is always that layer of adding in the human insight, and also understanding that we're all different as individuals, and our tasks are different in teams. As knowledge workers, we are not factory-line workers; those jobs have already been made way more efficient by automation, so they're already getting the benefits from automation and AI. Now it's coming for us knowledge workers, but our work is sticky and complicated. So it's not a push-button solution, and that's what the business leaders in particular need to get their heads around. Don't believe all the hype: the AI companies are trying to sell licenses. They've invested hundreds of billions in data centers, in hardware, in software and training. They need that money back, right? And they need it from you. So don't believe their hype; work out your own reality.
[00:10:19] Dan: A week ago, we had a meeting in Milan where we were talking about AI, and, um, one of the questions we were knocking around, which I'd really be interested in your thoughts on, is this: previous technology transformations, even if we think about the internet when we all started surfing, promised to do the same thing.
[00:10:34] Dan: I mean, we're far more efficient now than we used to be. We're not dictating memos to someone and then marking them up and then, you know, signing them. We're flying along.
[00:10:44] Dan: But those previous revolutions have actually not reduced the amount of work. We haven't said, oh great, I can get that 20% back, I'm gonna reduce my workforce by 20%. We actually say, oh fantastic, we're gonna use that 20% for more human effort.
[00:10:57] Susi: That's where the narrative needs to change. Interestingly, Dan, I started my career as a student doing that copy-editing and copy-typing of the audio, using software called WordPerfect. I'm showing my age now.
[00:11:08] Dan: remember it.
[00:11:09] Susi: I remember it would come back and the, uh, quantity surveyor would mark it up with red pen, with all my spelling mistakes, and I didn't know anything about quantity surveying.
[00:11:17] Susi: You know, I was just a student, so I'm really glad that job provided me with income when I was studying. Um, but is it good that that job doesn't exist? Is it bad? Well, actually, students are probably doing way more interesting side-hustle summer jobs than that now. Um, so for me, for the waves I've been through: I started my career building HTML websites in the days when GeoCities was cool. We were working for record labels, trying to communicate with fans in a new way, instead of through mailing lists and cards in the post and the radio and pluggers. So it was a more direct relationship with the fans.
[00:11:55] Susi: And then, uh, Web 2.0 and social media came along, and we started to have that two-way engagement, which again, for people like media, artists, educators, was brilliant, but it was terrifying for corporates who were suddenly getting, like, Twitter messages from people complaining about how bad their bank is.
[00:12:11] Susi: But we've adapted, and adapted for that. It seemed like there was a wave of opportunity; it felt like a wild west in those times, but we felt it was exciting to be part of it. Where I feel less excited to be part of this world now is this relentless focus on productivity and money. It's pure capitalism. And the hype that we get from people like Sam Altman of OpenAI. One of the most annoying statements he made, which has really hit my world of marketing, was saying last year that 95% of marketing jobs will be eradicated by AI.
[00:12:47] Susi: But that just shows naivety, because all of us who work in marketing know that the time you spend doing the production or the copywriting or the design is actually a fraction of it. A lot of it is about insight gathering, stakeholder feedback, learning. And yes, we can automate more of it. But are we gonna end up with brilliant brand and advertising campaigns built on synthetic research, synthetic images, synthetic copy? Okay, for some things it might be okay, but that's not gonna be how you build hearts and minds and a strong brand. So differentiation by not using AI at times is gonna become premium and valuable, particularly for consumer brands and high-end B2B brands.
[00:13:26] Susi: But we just need to switch off the noise and the hype a bit. So I run a newsletter called Rethinking the Hype Cycle, and I absorb a lot of the hype and the pain, the dystopia and the utopia, so you don't have to, and then try to distill that every couple of weeks into a set of trends and insights and other pieces that I produce. So I'm exposed to a lot of it. But there are moments where I get a jarring sense of perspective, like when I spend a bit of time, as I do in Greece, with older people, and you suddenly realize most business is being done like: I'll phone my mate who's a taxi driver, I'll phone the takeaway, and then this person will come round.
[00:14:06] Susi: Then, even in the digital transformation space, with people of a certain age there, I went to a brilliant conference, a web standards conference, and someone did an informal presentation about a technology called IPv6, which, I dunno exactly what it is, it's an internet standard, but apparently it was made 13 years ago.
[00:14:23] Susi: Businesses are still using the stand before now. Anyone who's worked in like software Legacy knows legacy software is a problem for developers. But they were ranting and raving going, why don't they realize the benefits of Iiv P six? And I, I love this, this event, 'cause I just thought I live in a will sometimes where there's talk about.
[00:14:41] Susi: A GI, artificial general intelligence, robots coming and taking over physical work even, let alone the knowledge work that we're already talking about being, being crushed. Um, um, and yeah, there's a whole industries and waves of work out there that are not even, not even very far along their digital transformation.
[00:15:01] Susi: So it's more important, again, as organizations: if you're working in the high-tech, high-velocity, Silicon Valley type of space, you need to be on top of all the trends and you need to be moving fast, fast, fast. But if you are in any other industry, and I warrant that's probably 99%, 99.9% of all other work, and probably a lot of listeners to this podcast,
[00:15:20] Susi: you don't need to move that fast. You just need to move at a steady pace. So I call that WAIT, which is Working on AI Transformation. Uh, as long as you're moving and you're thinking consciously about what you're trying to do as an organization, or as an individual if you're a solopreneur, then you're making good progress.
[00:15:38] Pia: I think, too, that some of these statements, like the Sam Altman statement about 95% of marketers being made redundant, some of that's also just meant to scare us. I'm left with some sort of simple economics: well, how do you get your economies to run if you haven't got enough people paying tax? 'Cause AI doesn't pay tax. I mean, you know, if you're going to get rid of 30, 40, 50% of knowledge workers, where do they go?
[00:16:06] Susi: But these discussions are centuries old, and from the research I've done, even in the computing waves of the sixties, seventies, eighties, pre-AI, these sorts of discussions about automation and efficiency have always been there. I mean, um, Keynes, the economist, back in, was it the 1930s, his futurology vision was that we'd all be working a 15-hour week.
[00:16:30] Susi: Who's working a 15-hour week in any economy in the world? We're working more hours than ever before, because that's not how our capitalism works. You know, we fill the time with more productivity and more growth and more things to do. There are some examples that AI isn't necessarily giving even the time efficiency it's being sold on; it's creating more busy work, because maybe we do things differently. We're increasingly moving into smaller teams, and that's where AI can help, emphasis on can, with the skills you don't have. Like, I have to do some direct marketing sometimes, and sales, and it's not my strong suit, 'cause I'm a comms person. But I can plug in plans, I can chat back and forth with my AI agent and get something a bit stronger than I would've done if I'd just been Googling it for an hour. Not that different, and I'm not saving a huge amount of time, but it's giving me that confidence to say, these are the skills gaps I've got; how can I plug them?
[00:17:22] Susi: It would be even better to have a specialist advisor, um, that really knows what they're doing. But, you know, a lot of teams and organizations don't have those experts, or they don't have access to the experts at the time they need them. So this is quite an interesting way we can build intellectual capital within teams. Um, and that's gonna give us more scope to build more products and services, to think about things in a new way, to get new insight. But you know what all of this will do? It's not necessarily gonna save us any time. It just creates more busy work, or more things around the seams that we haven't thought about doing.
[00:17:56] Dan: I think that's what I was reflecting on with that previous question: capitalism says we'll use more human effort to get the competitive advantage, 'cause everything else is level. Um, so Suzi, could you dive into this inclusive AI question? What is that? How inclusive is it? How's it going?
[00:18:16] Susi: Yeah, so I came to this topic of inclusive AI 'cause around a year ago I read an article where the, um, Boston Consulting Group put out a piece of research, and it had a brilliant headline that spoke to me as a woman in tech marketing, saying, uh, senior women in tech are leading the pace with AI adoption at work.
[00:18:34] Susi: I said, oh, brilliant. But actually it was a positive spin on a lot of negative data, which said that if you weren't at that C-suite or C-suite-minus-one level, you were falling behind men in adoption. And if you weren't in a tech role, the gaps got even bigger. So I started to unpick it a bit more: well, what does this mean, and what are the origins of it?
[00:18:53] Susi: And we do hear a lot about AI bias, which is potentially gonna get worse with some of the deregulation happening in the States, because AI systems are trained on our history, and our history is full of, um, imbalances in terms of perception of age and gender and ethnicity. So I always say, the further away you are from a Silicon Valley bro, the worse AI serves you, because the training materials are furthest away. So if you don't speak English, AI's not quite as good, although Switzerland have just announced a new ethical AI tool that apparently works in a thousand languages. So I'd be really interested in trying that when it launches.
[00:19:31] Susi: um, and particularly the gaps, uh, slightly older people, people in rural areas. Unexpected leads are less likely to have that adoption. But some of the recent stats are, are, are actually showing that the gap is getting worse rather than better. So Harvard Business Review did a, a big study analyzing all the other studies and worked out the gap between men and women in AI adoption at work, 25%. So that's huge. It's like what Equivalent of one in four.
[00:19:56] Susi: And, um, another study I was part of, by The Adaptavist Group, recently asked people in the US, Germany, and the UK how much they earned, which was really interesting. And those high rollers earning the six-figure salaries were between two and five times more likely to have had intensive AI training, so more than 20 hours of training, compared to those people earlier in their careers earning less than about 30,000 or 40,000 dollars. And again, women at every single career level, including the apprentices, were far less likely to have had intensive AI training.
[00:20:30] Susi: So we are moving beyond just dumping tools into teams and saying, off you go. We're moving into a separation where AI will be embedded in workflows for, say, customer service agents. So it's not a question of, you do AI; it's there, it's part of the tools and toolsets, particularly in graphic design as well. We're starting to see, um, these tools become part of the workflow.
[00:20:50] Susi: Um, but then we have another layer, which is the generative AI or AI agents now unfolding, which is more where teams will need to say, I want these particular tasks completed or assisted by AI, whether it's coding, writing, or, you know, creating and following up invoices in the finance team.
[00:21:07] Susi: And there you have to put a little bit more of a process lens, and also a creative lens, on what's the best way of adopting and using these tools. But by having more intensive training, particularly with external trainers, you start to move beyond just understanding how to write a good prompt, which is the basic level, and you start to move up a grade to actually understanding how AI fluency works.
[00:21:26] Susi: So AI literacy is more, how do you do it? What are the risks? There's an interesting half-day course that Anthropic runs for free, for example, where you can do that now. But AI fluency is more, how do we embed this in our workflow?
[00:21:40] Susi: Um, and that's where the big consultancies are making their payday now, 'cause they're going in and selling a lot of AI implementation. But of course, for all the businesses that aren't able to afford McKinsey or BCG, a lot of them are scratching away, because they're investing a lot in software but the integration isn't working. 'Cause, hey, legacy systems and AI aren't necessarily great bedfellows.
[00:22:01] Susi: Um, but also, that adoption by people just isn't being valued. So we're seeing huge waves of investment in bits of consultancy, software, technology, and not anywhere near as high investment in people and training.
[00:22:13] Pia: So what's a watch-out here for, you know, a team whose organization is bringing in AI? How do they need to approach it, thinking from this inclusive perspective? What's the leadership opportunity here, you know, for the manager? How are they gonna make it something that feels less scary?
[00:22:34] Susi: Well, first of all, I think for the senior managers particularly, well, managers at every level, you've also got to walk the walk. Informally: I was at an event for women leaders last week, and, um, one of the people leading it said she had been talking to leaders about AI adoption and just said, put your hand up if you use AI tools every day. These were CEOs, and she said it was 6%.
[00:22:55] Susi: Now, that's terrible if you are going in on these messages about the productivity and the tools. Again, don't be nebulous. Explain what you want people to try it for. Explain your own learning path. Explain all the things that haven't worked. I mean, I've got loads of lessons from things I've tried.
[00:23:12] Susi: And I've gone, that doesn't work. Then sometimes you try again a month later and, hey ho, there's so much investment in tools like ChatGPT and Claude that suddenly it works: a month later, the same prompt, the same task. So, um, you've gotta walk the walk. Um, provide psychological safety as a team leader.
[00:23:30] Susi: Um, again, give people clearer directions, particularly, um, the more early-career people, or the people who are more worried about this technology. Provide them with specific goals and learnings. So, one of the challenges we might have with women, and again there's not really empirical research here, but there was a study done by a Norwegian business school asking business students, would you use AI if your professor forbade it?
[00:23:55] Susi: And the men were like, yeah, I'd give it a go, 'cause it would save me time. And the women were like, absolutely not. So there's a sense that we've got something called good-girl conditioning at a young age. At high school age, women are told to show their work, you know, show what you do. Then if you go into an industry where you are, um, an ethnic minority in that business, or a gender minority, which you might be in careers like tech, for example, again, you have to show a little bit more. You are always having to prove what you do, having to work that much harder.
[00:24:25] Susi: And there's a sense that AI is a shortcut, or it's a button that you press and get an answer. Now, those of us who actually use AI know it rarely gives you the right answer at the press of a button. It's way more complicated. But there's a sense of being worried about trying it, because it shows that you can't do your work.
[00:24:41] Susi: And then the other fear, the psychological fear, is: if I get this amazing efficiency that I've been promised by my boss, or Sam Altman, or whoever it's coming from, this, you know, crazy 95% efficiency, I'm not gonna have a job, so why should I try? Particularly with creatives we're seeing this a lot. Designers, creative directors, they're worried that this is going to do them out of a job, so therefore they're not starting. But the irony is, if you don't start, you will be out of a job, because you're gonna be less hireable than those who do.
[00:25:11] Susi: So the other thing I say, particularly to the women I mentor and work with, is: gain some skills and put them on your CV. And I'm talking to women who are saying, oh yeah, I use AI for this, that, and the other, and they show me some brilliant things. I said, I don't see it on your CV. I don't see it on your LinkedIn profile.
[00:25:28] Susi: So if you're trying to grow that skillset, it's gonna become more and more essential, when you wanna move careers, sidestep, or move up, to prove that you have embedded some AI learning within the skillset of your role.
[00:25:42] Dan: I mean, there is a real dark side here, isn't there? There are all kinds of places AI could take us. I'm excited about the prospect, but one of the things that fundamentally makes me worry is that it's in the hands of people who've already shown they don't care about destroying the planet. So we almost have to take responsibility ourselves. How do we do that, as teams, as organizations, to really ringfence this, to make sure, as well as we can, that we're gonna get a positive outcome out of this for all our stakeholders?
[00:26:19] Susi: One thing to just address is, uh, I guess the social and environmental impact. Now, the bad news is, for all the main players, the top five AI tools you've heard of: somebody did a brilliant report, which I'm gonna link to in my newsletter this week, on which is the least evil AI tool. And the answer is, they're all pretty crap. They're all pretty crap on privacy, data safety, and the environment, but some are less bad than others. So I won't give you any spoilers, but it's worth digging that out, reading it, and making your own decisions.
[00:26:48] Dan: the link is in the, is in the, uh, in the show notes? Yeah.
[00:26:52] Susi: but um, I'm seeing some positive wins, particularly 'cause Europe has got some quite strict AI regulation with the EU AI Act, which is now enforced as of this month, August, 2025. Uh, US are going down, trying to go down a hard deregulation front. They might not succeed 'cause the states might intervene and bring in higher regulation. Then also, if you're an organization that wants to trade in Europe, you're still gonna have to go with. European standards, it's like any kind of pro product standards.
[00:27:20] Susi: So if we assume you take one of the lesser evil AI tools that are out there, it's about providing broad-brush governance and guidance for the organization: what the expectations are, what you should and shouldn't do, particularly with confidential and customer data. That education within the business is really, really critical.
[00:27:41] Susi: Then you also need to think about the external communications to your customers, because depending on the industry and the kind of work you do, there's a bit of a myth that everyone wants to see "AI-powered". Now, I've observed everything from AI-powered wine curation to AI in your Barbie doll. Probably not great use cases, but maybe it makes a bit more sense in your CRM system or in your SAP tools, something more sophisticated. But even then, there's research by Edelman, who do some fantastic work in their Trust Barometer. They've tracked people's attitudes towards AI over the last few years, and it's not good.
[00:28:19] Susi: Roughly equal numbers of people are enthusiastic and pessimistic. But when you look at the demographics, again, women, older people, and people in Europe and North America are way more cynical about embedding AI. So if being AI-powered is a selling point for your business, again, you need to deconstruct what that means. What does the customer get at the end of it? Does it mean they're getting a more efficient, personalized delivery? Does it mean they can find out where their item is? Does it mean they can make their own handbag design? You know, the luxury goods space is coming into this sort of personalization now.
[00:28:55] Susi: What does it actually mean? Don't be nebulous and talk about AI as a broad-brush thing. Talk about it specifically. And I know you've been doing some work at Squadify on your own AI governance.
[00:29:05] Dan: We have, yeah. We're taking steps towards that, to set it down for ourselves, I think, to reassure ourselves when there are so many forces going the other way, but also to project that to our users and our clients so that they feel reassured.
[00:29:24] Dan: I think one of the interesting things about that, by the way, was that we want high standards for this, like we do for our data privacy. And actually they all make perfect business sense for us as well. We don't want to scrape data, we don't wanna snoop on people's emails. In our view that's all pointless and so corruptible. We want to be totally open and transparent about everything, because that makes the tool work, you know?
[00:29:49] Dan: We might be naive. Yes, we are naive, but we are seeing a good confluence of those two coming together to serve everyone's purposes.
[00:29:58] Susi: And I think that's great. The more specific you can get in your governance, the better. But you also need to think about plain language, making it accessible to people, just like your privacy policy. I actually had to write an AI governance policy as part of a course I did for the University of Oxford, so I did it for my own solopreneur business almost before I'd set it up.
[00:30:18] Susi: And that was interesting, because I was talking about how I would use gen AI and how I would label it and explain it. So now, even just effectively as a content and ideas person, I label things with alt tags, for example, to explain what tools I've used, and sometimes I explain the prompts I've used. So whatever level you're at, you can do it. I would encourage people to have a look at Channel 4, the UK broadcaster. They have an excellent AI policy which explains in very clear language how it's involved in the programme making and what users and viewers can expect.
[00:30:53] Susi: So, you know, I don't think you necessarily need to go and hire McKinsey or someone. There are brilliant people, by the way, who do AI governance consultancy and audits, and I would encourage you to do that, particularly for compliance with the EU AI Act, if you're at that scale and trading in Europe.
[00:31:10] Susi: But smaller organizations can also just get together and think about it. What does it mean? Don't come up with general statements like "we want AI to be transparent, ethical, and free of copyright scraping and bias". Do you know what? The AI tools that you're buying are doing all that bad stuff anyway.
[00:31:27] Susi: There was an interesting conversation I had with someone who said AI is very bad for the environment, and I said, yeah, the large language models are, but the more niche, smaller language models aren't. But does that mean you wanna use DeepSeek, which is Chinese? That might open up a different can of worms. Maybe it's right for you, maybe it isn't. A lot of the focus, because of where the investment is, is on what we call the large language models, like ChatGPT, or Claude by Anthropic. But actually, what we're seeing for a lot of business use cases is more niche products coming, or what are called AI agents, which do very specific tasks.
[00:32:02] Susi: So again, you might have a broad approach for the whole organization, but then you might have narrow, small language models that do very specific tasks. And those tasks are generally gonna be way more efficient on energy in particular, so that helps you with your environmental aims.
[00:32:19] Dan: Thank you. And thanks for passing your eyes over our nascent policy, our charter. Much appreciated. So, Suzi, we've covered so much, and you've helped us really understand some of this. I love that you've used the word nebulous several times, and I think that's really helpful for people, with the word "not" in front of it. Because the world of AI is nebulous, our goal is actually to not be. I think that's really helpful.
[00:32:46] Dan: But take someone in a classic job at the moment. They might feel like they should be starting to look at this or do more; they might even feel a danger that their job will be taken away. What's their baby step? Something simple they could do to take a step into this world.
[00:33:06] Susi: I'm always blown away, because I'm part of a few women in AI groups where people share tips and tricks, and I think, oh my God, that sounds really complicated. So I make it an aim to do one thing every month, which is my baby step: I'm gonna try and automate something that I wouldn't have thought about automating before.
[00:33:24] Susi: But the first thing, I think, is just observing and reading. There are usually people who write specifically about AI and related topics in your industry. I have a newsletter on Substack; there are other publication platforms like Beehiiv or Ghost where people write. So maybe subscribe to a small number that are very specific to your industry, and then subscribe to a few other, more general thought leaders, and follow people over on LinkedIn.
[00:33:53] Susi: So you can subscribe to my newsletter, Rethinking the Hype Cycle, which is aimed at business leaders, with particular tips for marketeers and content people, because of my background. People like Ethan Mollick are very good, because he works with all the big AI providers, but he also works with big multinationals. He's a Wharton professor, but it's got a practical lens, so I think you get a realistic perspective of what the cutting-edge uses are, without the hype.
[00:34:20] Susi: So to start to follow and observe is the first step. And the other thing, which is what I've been training people in my world on for the first time, is what to do with AI. Just bash away at these chat engines; they're called that because they're designed for chat. They want you to chat to them. Unfortunately, they want you to chat and give away way more information and spend way more time than you might want. But the point is to just go on and chat and try and do things, and at the end of your experiments, the next question is: what would be a better prompt for this next time?
[00:34:53] Susi: So don't get caught up in all this "I have to buy a thousand prompts from someone on some dodgy website" or "I don't know what a prompt is". I hate the term prompt engineering, and I'm glad it's being phased out a bit.
[00:35:04] Dan: We haven't heard that for a while, actually.
[00:35:06] Susi: We haven't heard it for a while. But what I disliked about it was, you know, I'm not an engineer, I'm from an arts background. It's quite specific language: it either appeals to you because you're a techie or engineering person, or it doesn't, because maybe you're from an arts or philosophy or education background, and that word means nothing to you.
[00:35:24] Susi: So we're starting to see language models getting easier, understanding what you want a bit more. But you can also ask it to help you if you don't know what you're doing. Say: I'm stuck, I'm trying to do this, what would be a good prompt? And it will know, because it's the AI; it's used to working with prompts.
[00:35:44] Susi: So you can work backwards through your tasks as well, and see it more as an experiment rather than a task-and-finish. But importantly, if you're a manager, allow people time to experiment and share those learnings together. What did we do that worked? Was there a particular prompt? Can we keep a little prompt library internally so we can share that learning between us?
[00:36:06] Susi: You know, what did we balls up? Some things just don't work. And always remember, AI is like an unreliable calculator. It's calculating based on the averages of information shared before. And hallucination: people in the AI space will say it's going away, it's getting better, but all the research I've seen suggests it's flatlining or getting worse, because AI is designed to be a question-and-answer machine. It answers whatever question you want, not necessarily with something truthful or reliable from its database. It'll fill in the gaps, because it doesn't understand.
[00:36:40] Susi: To give you an example, I was writing about music, and there's a very big difference between sampling, covering, parody, and pastiche if you're a musician. These are all separate things, but it mashed all this up when I asked it to write about a song, because it didn't understand. It just pulled little scraps of information published elsewhere, and I, as a musician, knew it was wrong.
[00:37:02] Susi: So Ethan Mollick also says to play around the edges. When you know your topic area, you can question it and test it and see how reliable it is, and at that point you might get a shock about how unreliable it is. But remember, anything that's about repetitive data, numerical data, patterns in text, patterns in insight: it's great for pattern matching. It's less good at actually providing factual information that's accurate. Back to you.
[00:37:32] Dan: Brilliant place to start. And a final question for you, Suzi. We always end with this: a media recommendation. It could be informative, educational...
[00:37:41] Susi: I'll give you a serious one and a less serious one. I'm reading a book, and I heard a talk by its author, Parmy Olson, who is a tech journalist at Bloomberg. Her book is called Supremacy. I blagged a copy at my network in London, called TBD, which I'd recommend coming to; get in touch if you want the details of that.
[00:38:01] Susi: And it's really a narrative story about how AI came to be. So I'm delving into that book, but there'll be a lot in it about the God complexes, and this is important to know. People like your Sam Altmans and your Mark Zuckerbergs believe their own hype, that they will go to Mars or space and destroy everything.
[00:38:19] Susi: And that's why we need to understand where this philosophy comes from, in order for us to make a counter-movement. So I'm diving into that. Also, Threads was on TV last night in the UK, which I've seen before. It's a film about a nuclear apocalypse happening in, I think, 1970s Sheffield.
[00:38:39] Susi: And it terrified me. It's terrified everyone who's ever seen it, because it's done in a realistic documentary style. But I need to persuade my partner to watch it again. I was actually talking to a nurse who'd had nuclear attack training about this.
[00:38:54] Susi: So I think sometimes it's worth just going deep into the apocalypse, just a little bit.
[00:38:59] Susi: But the reason I like this film is that it's very hyper-realistic. It actually uses real animations that were made by the government about what to do in the event of a nuclear strike, including burying the bodies outside so they don't make your house toxic. It's a super interesting historical document, but it's very scary. So, you know, watch it when you're in a good mood. That's one. And then just a final one, sorry, there are too many...
[00:39:25] Dan: we've had God complex
[00:39:26] Susi: I've had my apocalypse in an AI style. As part of a charity project, I had my voice cloned using AI for a show called Offal, and I think the website is called Offal Offal Offal. It's a weird noise-art, experimental sound-collage thing and a magazine. If anyone remembers Blue Jam, Chris Morris's radio show, it's got a vibe of that. But I'm very delighted to say I have been Offal.
[00:39:54] Susi: So check out the latest episode of Offal, and you'll hear my voice saying some very strange things that I didn't say.
[00:40:01] Dan: Well, we'll get those links into the show notes so they're easy to find. We've got, yeah, God complex, apocalypse, and Offal. What could be better? But Suzi, thank you so much for being on the show today. It's been fascinating and really useful.
[00:40:18] Pia: It is a great topic to get into, so we really appreciated your view on it.
[00:40:23] Dan: And you've made it less nebulous, which is wonderful. So, yeah, thanks so much, Suzi.
[00:40:32] Dan: You know, this topic of AI fits into and reminds me of the bigger topic, I think, that we see with teams: there's so much change now that a lot of teams just freeze. They get stuck. They don't know which way to move, and so they lack clarity. And we're seeing so many teams, aren't we, without a really clear idea of what they're doing.
[00:40:54] Dan: There's a vague notion of the direction, and it's not great. And AI is like that. Listening to Suzi just reminds me that you've got to do something, move. You have to move forward to explore, to understand. You can't sit and intellectualize this thing.
[00:41:10] Dan: There are so many factors. You know, when I think about AI, I start thinking about whether there's going to be universal basic income. Who cares? Just do something with your team to move forward. That's the key thing that comes out of this for me.
[00:41:24] Pia: To be fair, there is so much, and quite a lot of it is driven down from the top in organizations, and it's still all a bit experimental. So I think that was interesting, when Suzi talked about how much time we would save: we're not really saving very much time at all. So I think we've gotta get really specific about what it's for, what you are using it for, how it will add any value, and how that becomes a human experience as a team, something that could be shared rather than a threat response to getting greater shareholder value.
[00:41:56] Dan: Yeah, that's exactly it. The messages piled on us are fantastical. And even, you know, people like Klarna, here in Europe, laid off loads of people, then immediately had to rehire them, and the CEO had to apologize, because for some reason they believed the hype.
[00:42:13] Pia: trigger. Happy, you know, we should be better than that.
[00:42:16] Dan: That's a great way of putting it. We should be better than that; he should have been better than that. And also, these previous revolutions, as we've said, have led to this: right, you save 20% of your time. What are you gonna do with that 20% of time to add value against the competitors?
[00:42:30] Dan: Because your competitors have the same technology. So in the end, we want more humans to give us the edge. We don't know which way it's all going to play out, but the key is to start. And your point about humans, I think, is the key. You know, a week ago we were in Milan.
[00:42:48] Dan: We had a think tank there with Sunni Lobo, who's been on this show; she was visiting from Silicon Valley in the States, and she was talking about AI in the workplace. She's a really experienced CHRO, and we had a really enlightening discussion about all these topics.
[00:43:06] Dan: And where we ended was this simple point, actually: we should use AI for the things we don't want to do, which seems to make sense, but let's just make sure that one of those things is not interacting with each other. You know, if we delegate our human interactions to the AI, then on a planetary level, we've lost, haven't we?
[00:43:28] Pia: Yeah, absolutely, a hundred percent. In that case we're seeking to almost replace ourselves, and in doing so, you know, lose the whole fibre of working together, of being in a professional working environment. So I think we've gotta stop looking at it necessarily just as a way of finding shortcuts, both on an organizational level and an individual level, and look at adding value. And there is a lot of value.
[00:43:56] Dan: Huge, huge. Because there's a lot of drudgery being done at the moment, really dull tasks. I mean, even if we're trying to plan a holiday at the moment, even with AI's help, the information's out there, but it's just very dull, very dull work.
[00:44:15] Pia: I would try and feel sorry for you for about a microsecond, but as you are trying to...
[00:44:19] Dan: Did you not feel sorry for me? Are you playing the smallest violin ever?
[00:44:23] Pia: It is so microscopic, I can't even put it on my finger.
[00:44:28] Dan: it's fair enough. No fair call. I don't, I don't expect too many tears, but I, anyway, it's very boring. Um, but I, I think that the, the, the, the phrase that I heard recently, which is actually one of these simple things that's guiding me and my thinking around all this, which is, um, could the, I'd like the AI to do my washing while I write a song, not the other way around.
[00:44:46] Dan: And
[00:44:46] Pia: yeah, Exactly
[00:44:47] Dan: Exactly it, isn't it? And doubling down on the humanity. I'd like to talk to someone while the AI does that spreadsheet, not the other way around. You know, I think we have to double down on the humanity, and use AI to do that. And on that subject, this is live for us at the moment: at Squadify, we're building, with Juliet leading it, an AI charter. How are we going to use this in the future? It's about getting our heads around, again, moving forward into this space. We have AI in the platform, of course, but how are we going to act? Let's set out a code of principles now that really puts that in place.
[00:45:27] Pia: And I think it's also important to differentiate yourself. You know, we're in the business of having permission-based information, and that information being utilized for the benefit of the people that have input it, but it's still all anonymous and confidential. And there's certain AI that will track your emails, observe the sentiment of the emails and information you're writing, and you're having information taken that you don't even know about.
[00:45:55] Pia: So I think it's really important
[00:45:57] Pia: to declare. To your users and upfront, what, what are you aiming to achieve and, and how are you doing it? And what will you do and won't, you don't, you know, they're like standard sort of SLAs,
[00:46:13] Pia: you know, between, between us and the people who are, who are inputting data. So I'm really excited that, um, you know, Juliet's led this and we're on a front foot of this, um, as an organization and it. It'll be something that anyone, anyone could look at, but we'll be really clear where our stance is.
[00:46:32] Dan: Yeah, I think that's right. In Squadify's case (our listener can think about their own use cases) we've got Squadify coaching team leaders and team members to be better collaborators, actually using AI to enhance human relationships. If you think about that being a human coach, you want to know where their information's come from. They need that person to trust them; they'd need to be transparent.
[00:46:57] Dan: If your coach said, well, rumor has it that you used some aggressive language in an email, I can't tell you where that was, but apparently you did, you'd think, what the hell? So in a way, you can think about the AI in terms of what you'd trust in a human. They need to have the right intent, they need to be transparent about their sources, and also transparent about where the information you give to them goes.
[00:47:23] Dan: So, you know, a coach doesn't go telling everyone. The coaching conversation cannot end up on your Microsoft 365 instance, or even training the global AI model. So it really is about containment and transparency in that sense, so that you get a good outcome.
[00:47:45] Dan: It's been really interesting to do that. And we will publish those guidelines, in case they're of use to anyone. In fact, Suzi cast her eye over them, which was really helpful.
[00:47:55] Pia: is great. Really great. And, and it's also taken quite a lot of work. So it's not just something that we've just, uh, loaded into AI to get that to happen. Um, it's actually something that we have really thought about. and sought external sort of validation and
[00:48:10] Pia: verification from trusted sources.
[00:48:13] Dan: you are right. It's an interesting one actually. 'cause Julie and I, we've had conversations about it. Julie and I then. Recorded a conversation transcribed by ai, had it summarized by ai. We then combined got AI to combine that with the, the emerging standard on, on ai, um, usage, 40 2001.
[00:48:32] Dan: And then, um, Ian, our CTO. Cast his eye over it and had input. You had input. And then we've had Suzi, a third party over. So there's quite a lot of human su We've used AI for the sort of boring stuff, but actually there's human supervision here, so, um, it's a, it was a, a good, good way to do it. I think so. And that will be helpful for us.
[00:48:52] Pia: and you want it to be like that, so we have accountability for the ai. Um. Copilot or chat, GBT doesn't. So, so if it goes wrong or someone has a gripe against, we are the responsible ones. So I think that's a key part too. the humans are the responsible ones.
[00:49:09] Dan: Yeah, exactly right. And we need to take that responsibility seriously. But that's it for this episode. We, Not Me is supported by Squadify. Squadify helps any team to build engagement and drive performance. You can find show notes where you are listening and at squadify.net. If you've enjoyed the show, please share the love and recommend it to your friends. We, Not Me is produced by Mark Steadman. Thank you so much for listening. It's goodbye from me.
[00:49:37] Pia: And it's goodbye from me.