CIM Marketing Podcast

Episode 118: In this episode of the CIM Marketing Podcast, host Ben Walker is joined by CIM Course Director and Strategic Growth Consultant Steph Inez Matthews, and Peter Gasston, innovation lead at VCCP and co-founder of AI-first agency Faith, to unpack the “AI efficiency paradox”.
AI tools like Midjourney, Adobe Firefly and Google’s Nano Banana make it easier and cheaper than ever to create content, but are we entering a golden age of creativity, or do we risk our social feeds becoming saturated with ‘AI slop’?

Together, Steph and Peter explore how marketers can harness AI for speed, scale and cost-effectiveness without sacrificing distinctiveness, brand integrity or ethical standards. They discuss how AI is reshaping creative roles, from designer to prompt engineer, why human insight still sits at the heart of effective marketing, and how smaller brands and SMEs can now genuinely punch above their weight.

You’ll hear real examples from agency practice, client-side experience and AI-first campaigns, as well as the practical realities around data security, IP ownership and client expectations. The episode does not shy away from the hard questions: bias baked into models, the risk of cognitive de-skilling, legal grey areas and the very real possibility of an 'industry disaster' if governance does not catch up.

This is an essential listen that will leave you better informed, slightly challenged and ready to set your own AI standards.

This podcast will: 
  • Explore how AI is affecting the creative process in marketing
  • Ask whether marketers are prepared for the copyright and bias risks AI presents
  • Examine the 'efficiency paradox' - the risk that AI improves efficiency but fosters homogeneity
Thanks for listening to this episode. You can share your thoughts and feedback in our survey now, or contact us at podcast@cim.co.uk.

Creators and Guests

BW
Host
Ben Walker
Host, CIM Marketing Podcast
PG
Guest
Peter Gasston
Creative Innovation Lead, VCCP
SM
Guest
Steph Inez Matthews
Course director, CIM

What is CIM Marketing Podcast?

The official podcast of The Chartered Institute of Marketing.

We spotlight actionable strategies, highlight best practices, and source expert opinions to help you elevate your marketing game. Whether you're focused on digital marketing, content creation, SEO, strategy, or something else entirely, we’ve got you covered.

Get the latest news, trends, and insights from across the industry. Hosted by Ben Walker, this podcast offers accessible, relevant discussions on the big issues impacting marketing today.

This podcast is your essential resource for staying ahead of the curve.

Ben Walker 0:11
Hello everybody, and welcome to the CIM Marketing Podcast. And you know, today, we are going to be looking at the AI efficiency paradox. How do we scale this amazing technology without introducing more slop, as it has become known? Technology makes creation easier. It makes it effortless, in some cases. Does strategy, marketing strategy, then become more valuable, or does it have a greater risk of becoming invisible? And to answer this very question is Steph Inez Matthews, who is a strategic growth consultant, and Peter Gasston, who is innovation lead at VCCP. Steph, Peter, how are you two today?

Steph Inez Matthews 0:56
Grand. Spring is here.

Peter Gasston 0:59
Yeah, also very well. Thank you very much.

Ben Walker 1:02
As our regular listeners will know, this season we've been focusing on creativity and innovation. Now, marketers are, of course, very keen and quick to adapt and embrace new technologies, and we've been discussing on this show for many months now how best to utilise these tools. First of all, before we talk about the efficiency paradox, the slop, and the tension between ease and innovation and creativity, let's talk a little bit about the scale of the impact that tools like Midjourney and Nano Banana have had. What do you think have been the major changes to the industry that these sorts of tools have caused in recent months?

Steph Inez Matthews 1:42
Well, I'd definitely say, in terms of the impact on creativity, there's the opportunity to get to better quality faster. Especially if you have the AI middleware, so like your Pencil, your Artlist, or even your Adobe Firefly, you've just got access to all of these tools at a much faster pace, so you can get to the output that you want, at a better quality, in a much faster way. And of course, there's the impact on the production costs. Arguably, it is a lot more efficient. But I think for me, the challenge is the tastelessness. You kind of mentioned it, the AI slop. If anyone can use these tools, then we're going to get this high volume of tasteless slop if it's in the wrong hands. So I think we still need humans. We still need tastemakers. We still need those that have got that creative judgement and that can oversee using these tools to make sure that we are getting better quality output. And I always like to think from a responsible lens. My background is in inclusive marketing, so for me, there is a danger, if we have people using and creating this content, that we've got a higher volume of content being created and the bias that is built in is replicating at a much faster scale. So we have these tools, we have the creative, but we still need the humans to sense-check the outputs from that quality perspective: is it tasteful, does it meet the brief, and is it unbiased?

Ben Walker 3:08
We'll come to the bias element later in the show. But first, let's deal with the taste, Peter Gasston, and indeed the potential for homogeneity: that everything may have good taste, but it may look very similar to what other agencies, what other departments, are producing.

Peter Gasston 3:22
I mean, first of all, great brands are built on distinctiveness. Always have been. And there are a lot of brands already in the market that are not distinctive brands. Many brand websites just use generic stock photography, or there's a meme that all AI websites use the same colour palette and the same type of imagery. So that's always been a struggle. I think, with the new tools, yes, that struggle is elevated more, because the barrier to creation has dropped. But while the barrier to creation has dropped, the need to stand out, and the effort required to stand out, has actually gone up. Like, when we launched Faith a few years ago, I was told so many times by people in the industry that these tools are the death of creativity, the death of imagination. I've always believed the opposite. I think if you are a creative person given more tools to create more ideas directly, you will be tasked now to be even more creative, to stand out from the people that have got access to the same tools but don't have your imagination or your vision.

Steph Inez Matthews 4:23
Yes, a bit like when Adobe Photoshop was launched, right? It was just another tool that creatives could now use. This is like the next era, but obviously a lot more elevated. But you still need that creative at the end who's inputting and using Photoshop to create the output that you want, that is on brand, that is on brief, that does meet the objectives. You need that tastemaker. You need that creative.

Peter Gasston 4:46
I mean, I always use the comparison that everyone now has in their pocket a machine that's capable of shooting 4K photos in perfect light with machine learning, and you can record videos. And you know, some Hollywood movies are being shot on the iPhone.

Steph Inez Matthews 5:03
Seedance!

Peter Gasston 5:03
Yeah!

Ben Walker 5:03
So, for the benefit of the audience, what is Seedance? You say it's been shot on an iPhone?

Steph Inez Matthews 5:10
Seedance is like Hollywood quality. It's ByteDance's new AI video tool. Sorry, I didn't mean to interrupt, Pete.

Peter Gasston 5:18
That's alright. But yeah, what I was sort of driving at was, we've all got this thing in our pocket which is capable of these things. But why do couples still pay for a wedding photographer, even though everybody's got a perfect camera? Because the wedding photographer's job is not just to capture photos, it's to capture moments, and to have an eye and have a vision. Why aren't we all MrBeast on YouTube, you know, earning millions? Because it takes more than the technology. The technology is, in many ways, a sideshow, I think.

Ben Walker 5:46
Just unpacking that slightly, the thing we always talk about is cut-through. You know, impact, cut-through to our audience. Does the cut-through, then, to unpack your point a little bit more, depend more on the quality of the prompt or the underlying human insight behind it? What's the delivery mechanism here?

Steph Inez Matthews 6:05
For me, it's the human insight. It's always been the human insight, regardless of what tools you have. It goes back to the human insight, because marketing is about understanding a business problem that marketing can solve. Can marketing or creative come up with that solution? And they have to understand the problem, the insight about the human, the customer, the market. And that's critical. AI can help with that. It can help, certainly on the ideation. But for me, that's always been key.

Peter Gasston 6:33
Yeah, prompting matters. The difference between getting a good image and a decent quality but ultimately empty image is the quality of the prompt. And we work in advertising and marketing; we're not just trying to make beautiful images. We're trying to make images and videos that communicate brand, that communicate a moment, that communicate emotion, whatever that is. Some of that comes through prompting. And over time that's going to become less important as tools like Google's Nano Banana improve; now sometimes it feels like they're reading our minds and actually rewriting the prompts for us in incredible ways. But ultimately, it's still the person who invokes that prompt, or whatever tool we're going to be using in a year; the person that invokes that, and the person that judges and assesses the output, will always be the most important thing.

Ben Walker 7:20
Could anyone potentially call themselves a graphic designer, because they are able to write a prompt that the AI can understand to deliver what they want? Are these classic sort of marketing roles, these classic marketing vocations, now going to be less and less about pressing buttons and more about shaping prompts, and obviously having the original idea?

Steph Inez Matthews 7:41
Oh yeah, they're a prompt engineer, not a graphic designer, really, aren't they? That's the difference. And I think marketing teams and creative agencies are going to recalibrate in terms of the job roles and the skills that are required. But I do think there's a difference between a prompt engineer and a graphic designer. Like, I'm not a graphic designer, and neither am I a prompt engineer, but I'm probably more likely to be a prompt engineer than a graphic designer, because I haven't gone through that school of training to understand what that is.

Peter Gasston 8:10
Anyone can call themselves a graphic designer, but whether you can get paying clients, whether you can get employment as a graphic designer, is probably a different matter. Having a graphic design background, I think, is incredibly important and will remain important into the future, although there may be a different way that you're taught to be a graphic designer. But the importance of seeing and thinking is incredibly valuable. One day, you know, we may get to artificial general intelligence, as everybody in Silicon Valley hopes we will, and this conversation becomes moot. But right now, I think it's very important to think like a graphic designer, to have an eye, to have taste. What I do like about the way that this technology is being democratised, and the barrier to creation coming down, is that it possibly enables new voices to come into the industry that may have had barriers there before. I look at some of the work that creators on Instagram are doing, people with no formal training, who may never have thought of being a graphic designer, and they are making really, really beautiful and incredible things.

Ben Walker 9:13
It's removing barriers to entry. Speeding things up. We talked about speed at the start. We also talked about the C-word, cost. It's reducing cost in many cases. Beyond speed, cost-effectiveness and reducing barriers to entry, are there any other big benefits that you see of this technology coming, in a great way, into our industry?

Steph Inez Matthews 9:36
It's given content creators the skills. I think it's also giving SMEs, so small and medium-sized businesses, opportunities as well to really capitalise and grow their businesses and reach new audiences in a way that they haven't been able to before, which I think is good for the economy. It's good for the number of founders and entrepreneurs, which is increasing. So it's really improved the toolkit for them.

Ben Walker 10:05
That's interesting, isn't it, Peter, that it helps the Davids disproportionately in comparison to the Goliaths?

Peter Gasston 10:11
Yeah. And I think that's absolutely true. I think it's most effective and most impactful when you see businesses using it to punch above their weight. You can often see that in the reaction to content as well. Like, big, established multinational brands that are using AI, whether out of a genuine sense of curiosity or the desire to just do something cheaper, get judged much more harshly by the public. You saw the reaction to McDonald's and Coca-Cola and recently Gucci. It's like, you know, you're big brands, why aren't you paying for humans to make art for you? And again, all of this is subject to change, but I think right now that's a stick that's used less to beat smaller brands that are clearly using it in ways that they couldn't before.

Ben Walker 10:54
Nevertheless, despite all this good stuff, there is a bit of a fear that if we let more and more go to the machines (we're nowhere near AGI yet, he says optimistically), there is still, or we're told there is still, a clear and present danger: that the more we use AI tools, the more we de-skill ourselves, our teams and our industries. Is that a fair fear, Steph Matthews?

Steph Inez Matthews 11:28
I think there is a danger. It's that idea of cognitive offloading: that if we have so much AI built into our processes and systems, we might forget to think and we might just outsource it all, which I think is an incredible danger and should be monitored. And I think that's where clients will pay for the strategy, what you can't get through the AI. Outsource all of the stuff that doesn't require the deep thought, the process-type workflows, and make sure that you are still doing the strategic thinking, because otherwise we're going to lose human judgement. And we've just been talking about how taste and judgement are actually quite key, and one of our USPs, both as an agency or as a freelancer. So we don't want to lose our USP in this quagmire of potential AI slop that all gets automated out to the AGIs.

Ben Walker 12:21
Exactly. That's the big promise, isn't it? That's the sale, the retail offer from big tech, isn't it? Stop you doing the mundane stuff, remove the routine from your lives, and we'll give you that extra capacity, that extra latitude, to do the strategic thinking, the creative thought, that really delivers quality. The critics of that promise say, well, you know, that is what industry always promises. But what actually happens is, when a hole is emptied, industry finds a way of filling it back up again. And instead of you getting to look out of the window, doing the big-picture strategic thinking, what actually happens is you're just asked to do more production. Which is it, Peter Gasston?

Peter Gasston 13:01
I suspect it largely depends on how clever your company, whether it's large or small, is. I think, yes, you're absolutely right. This is sold as, like, let's remove the drudgery, let's automate away all the tasks, and you can just focus on thinking. I don't think that's a realistic proposition for most companies. I think we're already starting to see companies maybe not bringing on so many juniors as they would have brought on in the past. I personally think that's a recipe for disaster, because you're not going to retain your staff forever. But yeah, everything you talked about before, like the cognitive de-skilling, or the cognitive offloading, or whatever it is: it's a concern everywhere that automation has ever happened, from pilots to creatives. It's a concern here. I also think you have to find a balance. You know, you can do the job of four researchers in half an hour now, maybe. But is that right? Is it correct? Have you read it? We've heard so many horror stories about, you know, lawyers going to court, telling the judge about completely hallucinated cases. Eventually, everybody is responsible for the work they produce, and has to take responsibility for that. So if I produce something and hand it to you, and then you come back to me later and say, what does this part mean, and I don't know, then I'm the idiot, regardless of whether I've used AI or I've paid a company to go away and do that research for me.

Ben Walker 14:28
Faith, which you mentioned earlier, is an AI-first creative agency. Some of the people listening, and they're working marketers as a rule, may still not know exactly what you mean by an AI-first creative agency, so maybe give us a couple-of-minutes whistle-stop tour of what you do and what the philosophy behind it is.

Peter Gasston 14:48
So we were created about three years ago. My job, broadly, as innovation lead, is to keep up to date with all technology. Three and a half years ago, I was telling people there's a thing called generative AI, and you should pay attention to it. But the output was rubbish at the time. Not absolute rubbish: Midjourney existed, but it would output a tiny little image that didn't make a lot of sense. It wasn't useful for anything. And then ChatGPT launched, and then Midjourney 4 launched, and those were two sort of watershed moments. We saw a lot of early adopters at VCCP start picking up these tools and using them. And as we observed them, we saw that they were using them to do their job better. They weren't using them to not do their job; they were using them to do their job better. So we thought, OK, we need to pay attention to this. And we set up Faith as, it's sort of an agency within VCCP. It's kind of like, I don't know how you'd describe it, a best-practices centre, a brains trust or something, essentially. But it was our job to evaluate and make sense of the sheer mountains of hype coming out from both directions about AI, and work out what it was actually useful for. So we started off by experimenting with tools, then choosing a couple of clients that wanted to come on the journey with us and discover things. We are not the agency that does all the AI work, because I think it would be absurd to even countenance that; in a big agency, there are so many different ways to apply AI, across the full length of the marketing funnel. But we act as an agency who, yes, we have our own clients, but we also act as part of VCCP to make sure that we're always considering how AI is going to have an impact, whether there are new ideas that become possible with AI that weren't possible before.
Like, we worked on Daisy, the scam-busting granny that we did with O2. That was an idea that just hadn't been feasible: you could think about it for a long time, but it wasn't possible until now. And so that's what an AI-first agency is. We're not just making images and making videos all day. We're considering the impact of AI, where it's useful, where it's practical, where it's desirable, and we're taking that back out as knowledge and information and work.

Steph Inez Matthews 17:01
Can I come in there and say, as a former client, I think it's great that you're doing that as an agency. Because, from my experience, I felt like the agencies were quite slow to let the clients know about the impact that AI could have, I think because of the way that it can transform the creative development and production process, and therefore their margins and the cost. So I found myself having to push and go, you know, what are the tools? How is this going to help me in my role? Because it wasn't always proactive.

Peter Gasston 17:28
Yeah, because everybody just hears the story about efficiency. Oh, it's going to save us so much time and so much money. But, you know, maybe. Let us think about it first before we swear to that. Because right now there are so many more impediments to using AI that aren't just cost. They are how the public will react to it, some of the craft that needs to go into the work, and some of the legal situation, which is still very uncertain and unsettled. Yes, client marketing teams get very excited about AI. We're excited about AI. But the practicalities of using it are a different category.

Steph Inez Matthews 18:03
Yes.

Ben Walker 18:03
We'll come to the copyright and get into that a little bit later, I think, and also into the bias element, which Steph raised really early in the show. Before we get there, though: do you get a little bit of an advantage going out to the market as an AI-first agency, because the fact that you are using AI is then expected by your clients? You know, no one's quizzing you on why you're using AI when your whole sales pitch is that you're an AI-first agency.

Peter Gasston 18:27
I think also because we sometimes tell our clients, don't use AI, this is not the right tool for the job. We're an AI-first company, but we're not here to just blindly sell you AI. Because we've had a relationship with companies like O2 and easyJet for 20-plus years, our relationship is not simply to sit there and try and sell them things, but to be an honest and trusted advisor and counsellor and partner.

Ben Walker 18:49
You're honest about the opportunities for AI, but also the threats, or the limitations at least, of using it in some cases, on some briefs. But at least your customers expect you to use AI some or all of the time, which of course is not always the case for all agencies. And actually, there have been some quite high-profile examples of consultants and agencies not telling their clients that they're using AI. Now, it's an interesting philosophical question: should you tell your clients about every tool you use? If I do something in Photoshop, should I always declare that I've used Photoshop? You could go on forever. Do I say I wrote this with Microsoft Word? You could be rather silly about it. So to what extent do you think firms in the industry need to tell their clients that they're using AI, and what for? Or should they, you know, keep it to themselves?

Steph Inez Matthews 19:41
I'm going to use an example that's not marketing-specific, but did you see, in Australia, a consultancy created a report for the Australian Government and didn't declare that it was using AI, and the sourcing was all incorrect because of hallucinations, and they had to pay back their consultancy fee? So I think if you're not declaring that you're using AI, and you're not sense-checking your sources before you send that on to the client, there's not only a brand reputation risk, but there's also a fiscal risk there as well, because they've lost the revenue, and they've also lost the client and their reputation in the market. I'm going to reference the CIM, who have a set of responsible marketing AI principles that they share with their members. The first is about acting ethically and responsibly. The second is about ensuring quality and rigour, so the lack of AI slop. The third is: be transparent, so communicate when you have used AI. And the fourth is about building AI literacy, so that you can always stay on top of the trends as they emerge. So I would always defer back to being transparent, being open, particularly given that, as we've spoken about, the legislative area is very murky. It's very grey. You know, who owns what? Who has the IP? I think in that case it's better to be on the front foot and be a responsible marketeer and share when it has been used, particularly from a consumer perspective, as well as between agency and client.

Peter Gasston 21:05
100% agree. You know, we would declare to a client that we would use AI in the process. We wouldn't necessarily get down to the level of listing all the tools, but say it was in the strategic research, we would say, you know, this is based on synthetic audience research that we did and hasn't been human-verified. Then during the creative process, you wouldn't ordinarily tell the client that we've used AI to generate some images for a pitch deck, but sometimes it's for a brief, and that client might have a requirement that you don't use their logo, their brand, in off-the-shelf tools, that type of thing. So you would disclose that then. And then for the output, you would always disclose that to them as well, because there are questions of copyright, there are questions of, like, audience reception. So yeah, transparency all the way through the process. Like I said, you don't necessarily get down to saying we used this for this at this stage. But, like, some of our clients, one client in particular, they want us to go back to them with animatics for this campaign: we don't want to see images, we want to see animatics, but we don't want you to use our product anywhere in these animatics that you produce for us, because we've got a blanket policy that you can't use our brand in any tools yet, unless we sign a contract. So it really varies. But yeah, we have to be 100% transparent.

Steph Inez Matthews 22:23
Yeah, I've had to kind of list approved tools, because some of the corporates won't allow certain tools to be used. So any of the third parties or our suppliers would have to kind of ensure that that kind of list of third parties had been approved.

Peter Gasston 22:37
Yeah

Steph Inez Matthews 22:37
It's quite stringent.

Peter Gasston 22:38
It's part of the reason that, although we experiment with a lot of tools, our day-to-day tools are largely the ones provided by Google. Because, you know, with so many years of experience with them, we're a Workspace customer, we've got certain service-level and privacy agreements and data-transfer agreements with them, and that just helps so much.

Steph Inez Matthews 22:55
Yeah, data security, that's key for me. Yeah, it can create the content for you, but where's my data going? Where's the company data going? Is it in a walled garden? Like, I know it's enterprise, but does it stay there? What are they doing with it? As a brand, you have so much ownership over your brand, and to give that away? Well, we already give it away to the media platforms, but to knowingly give it away now... It feels like we've learned the lessons from social media, and we're a lot more aware and cognizant of how to protect our data.

Ben Walker 23:25
You've got the data security sorted. You've been transparent with your client. How does the industry keep an eye on quality control? You know, we talked about democratisation earlier. You could argue that democratisation could go too far. Departments could just start getting staff to use AIs. They may not use agencies at all. They may say, well, we can do this in-house; here are the tools, off you go and do it. How do we guard, as an industry more widely, against this sort of march of slop done by people who really haven't got the skills to use this stuff, or haven't, perhaps, in some cases, even got the insights and the ideas?

Peter Gasston 23:59
Well, yeah, I mean, brand guardianship is something that you have to pay 100% attention to, and the more you automate something, the more important that becomes. I said earlier that great brands are built on distinctiveness, and that remains the case. The more you automate, the more time you have to spend on oversight to make sure that that distinctiveness remains. We made a tool for O2 very early on, one of our first tools, called the Bubl generator, which enabled their teams to generate new images of their then current, now sadly defunct, brand mascot, Bubl. And we delivered this tool to them, and it was super useful. But at the same time, it wasn't just rolled out to the whole company, because that would be madness. It was capable of infringing other IPs. It was capable of contravening its own brand guidelines. It might be as simple as something like, oh, Bubl never wears a hat, but you can generate a picture of him wearing a hat, and that's suddenly off-brand. So yes, the tools are democratised, but the right to create, the permission to create, probably shouldn't be.

Steph Inez Matthews 25:01
Yeah, it goes back to being a responsible marketeer, doesn't it? Just generally, how are you going to make sure you're not the one that delivers AI slop? How are you going to make sure that you're delivering quality that's in line, yeah, to your point, with your brand values and what you stand for? It takes years to build up your brand, your equity; if you deliver something that is considered slop, that's considered not transparent, then you've just ruined years and years of perceptions and equity in the brand. So there's the risk there to damage the brand.

Ben Walker 25:34
Tread carefully.

Steph Inez Matthews 25:35
Yeah...

Ben Walker 25:35
If marketing managers want to start introducing AI into their team's workflow, obviously treading carefully, as we've discussed, and if they're on a tight budget and haven't got, you know, thousands or millions to spend on massive, expensive licences, what sort of low-level, cheap and effective tools would you recommend? If I had to force you to name one or two, what would you say?

Steph Inez Matthews 26:02
I would say Adobe Firefly, because it allows you to get access to all of the different models. So it allows you to get access to Runway or Kling or Nano Banana all in one place. And I think if you're already a graphic designer or familiar with the Adobe suite, it just makes understanding and using those tools a lot easier. And then you don't have to pay for each individual subscription, which can be quite costly, and you can trial it for one month as well. So you can just trial and upgrade to see if you get usage out of it: test and learn. And then if it doesn't work, you can reduce your subscription.

Peter Gasston 26:36
Yeah, Firefly is brilliant for that. And also because, you know, Adobe has spent so long working with companies, they know what companies need in terms of IT provision and service-level agreements; whether you're a large or small company, those things matter. So Adobe Firefly is an excellent choice. This is only a personal recommendation, and there are a lot of different tools out on the market, but I really like a tool called Freepik, which is similar in the way that it just offers you a whole load of different tools all within one suite. It has collaboration and teams. I'm not paid by Freepik; I'm a genuine, paid-up subscriber myself. But there are loads of tools like that. It's really about finding one that gives you the choice: knowing what you want to do with them first, having the choice of different models, ease of use in the interface, and some kind of agreement to make sure that you're covered in the eventuality of some legal question three years down the line.

Ben Walker 27:32
But let's look at that, the legality of it. It is a big concern for marketing teams that are using this stuff at the moment, and it's a big concern in my industry, the media, as well. There's a feeling we don't know where this imagery has come from. People are worried about legal grey areas: they're putting this stuff out there, using tools like Firefly or Midjourney, and they're not sure of the provenance of it. How do we get over this as teams? At what point do we have to involve the legal eagles in our company, or our wider sphere?

Steph Inez Matthews 28:05
Well, in my experience, any marketing comms has to go through legal before it exits the department. It has to have their approval or their recommendations, and then it's up to the marketing team to decide if they wish to proceed or not. And it is a grey area. I think it's the EU AI Act, which came into force in 2024 and is just starting to come into fruition around about now, but it is still very grey. And I think that's why it's important that we look at being a responsible marketeer, so thinking about using AI in an ethical and responsible way. But you have to be proactive with that. That is being on the front foot: letting your suppliers know, having a policy about it, putting it on your website. It has to run through your brand DNA.

Peter Gasston 28:50
Yeah, the founding team at Faith came from all different disciplines: creative directors, producers, production, planners and legal. It was very important to have legal on there from day one, because we knew that we were going to get a lot of those types of questions. But yeah, it is a grey area, and the can keeps getting kicked down the road. Nobody wants to deal with it. On one side you've got people saying, this is the future of work, everybody's going to be doing this and it's going to boost your economy. And on the other side you have creative people who've produced artworks, and the creative sector, or the media sector, saying, no, this is going to destroy our industry. And nobody seems to want to deal with that, so it just keeps being pushed down the middle. It's super important to understand your options. This is a very market-dependent point of view. We work with clients in the Middle East and Southeast Asia who have an utterly relaxed approach to using AI; they don't mind whatsoever. And clients in the States, and Europe especially, are much more cautious and sometimes even opposed to it. So it's not a one-size-fits-all global solution. I think if you're a small company you can probably be a bit more risky, but if you're a large company, especially a publicly traded company, you do not have that luxury, and you just have to take it very, very seriously.

Ben Walker 30:08
What about bias? You mentioned it earlier, Steph. This podcast is now in its seventh year, so it predates the larger AI era, and I have a memory, in the early days, of talking about what was called the secretary paradox. This relates to Google Images: you put the word 'secretary' into Google Images, and on the first page you got 50 pictures of young women. I've just done it now, actually, very quickly, while Peter was talking just there, and I've got the same results so many years later.

Steph Inez Matthews 30:42
That doesn't surprise me. I often trial it with 'doctor' or 'pilot', and over the years I have seen it improve, but I just don't think we can rely on these AI tools to be creating bias-free output for us. I think it's on us as marketers and agencies to have that kind of human review layer built in, because it starts with the prompt. And actually, one AI tool that does it well is Pencil Pro. I used this when I was at Reckitt, and they actually had built-in prompts, so when you were using the system it helped you to make sure that you were thinking in a more inclusive way. It helped you along that process, which I thought was a great intervention. But by and large, those interventions aren't built in, so it does rely on a human to be thinking about that. There's so much human nuance in bias and representation, so it's quite hard to have an AI do that for you. For me, that requires a human with lived experience to sense-check it.

Peter Gasston 31:42
Bias, I would say, manifests itself in all sorts of ways. It's not only human characteristics, but even if you say something like a man running for a taxi, the taxi, many times, will be a yellow New York taxi, because that's the most famous taxi in the world. If you ask for someone laughing, say, I don't know, a Japanese girl laughing, it will show like a big mouth, open teeth, wide laugh, which is very rarely the way that that's done culturally. It's very different. You know, it's picked up the bias of smiling from the sheer amount of data that it has from us in the West. I absolutely agree that it starts with the prompt, because anything you don't declare in the prompt, it will try to fill in itself. So if you don't declare how old this person is, what race this person is, what gender this person presents as, if you don't declare that, it will fill in the values for you, and sometimes it will do that in a very clumsy way. But I also think there's the hidden biases in large language models, which I think are much more insidious and hard to spot. If we see an image, we can tell immediately that's not right. There's something off about that, or that's not what I wanted. Whereas in text, you could be getting all sorts of bias. We don't know, maybe it's not an issue, but you could be getting that and it's very hard to see.

Ben Walker 32:56
Hmmm. It's interesting, isn't it, Steph, that at least when we're talking about imagery, we can see it, so we've got a chance of stopping it. We've got a chance of getting the guardrails in place to stop it, and to change it.

Peter Gasston 33:10
Yes, but you know, as AI models become more closely associated with national identity and have more national control over them, maybe they're baking bias into them. Do American models give different answers to Chinese models? Why does Europe have so few models? Why do Africa, and the rest of Asia outside of China, have so few models? Are their opinions and their works and their culture and their history being used to train those models? We don't know. It is a big question, and I think it's something that we're going to have to pay very close attention to. The last thing that the companies want is to be transparent about that data, but I think it's an absolutely vital requirement that we know where that data comes from.

Ben Walker 33:51
What guardrails do we need to put in place, do you think, Steph, to stop that? Because there is a danger that we end up with everything American-centric, Western-centric, embedding biases, unless we've got the right people in the right places to stop it and create stuff that is actually more representative of the audiences we're trying to reach.

Steph Inez Matthews 34:05

I think it starts with the brief, the brief that you give your agency, in the same way that it starts with the prompt. Before we get to the prompt, it starts with the brief. So how can you make sure that you're reminding your agency what you're looking for: that you want it to be inclusive, you want it to be representative? And then it's on them to make sure that that's built into how they create and produce the content, ensuring that it is checked by a human, because I wouldn't trust any of the AI models to make sure that it was bias-free.

Ben Walker 34:37
If a department could implement just one non-negotiable guardrail, what do you think it would be?

Steph Inez Matthews 34:45
From my perspective, responsible AI. Because there's no legislation in place, because with the LLMs it's like the Wild West and we don't know where the data is, I think we can be on the front foot by saying: well, this is how we're going to behave, this is how we're going to use the data safely, we know exactly how that data is being used, and this is our policy. And putting that on the website, sharing that with customers. I think that's an opportunity and an advantage for any brand that does that.

Peter Gasston 35:15
Yeah, I completely agree. I think it's having guidelines, having a policy. To come back to Faith for a second, one of the things which we started with was five points that we said we will always follow, and we might review them in a year to make sure they're still valid. But we always want to be ethical and responsible, we want to check all of the outputs, all of these things. And I think it's really important to install that at a company culture level.

Ben Walker 35:40
Are you optimistic it will happen quickly enough and in the right ways, in the right places, so that we can use this stuff for great effect without causing some of these consequences and problems we've discussed?

Steph Inez Matthews 35:52
No, I think there'll probably be a massive industry disaster. I'm sure something will happen that will make everyone stop and re-evaluate how they use data, how they use their LLMs, because there's no guideline, there's no governance at the moment to ensure that marketeers and everyone adheres in one way. So people can go off and behave how they like on the LLMs or the AI tools, and there's no one holding them accountable. So something is bound to happen.

Ben Walker 36:23
We're going to fall into a similar pitfall that the legal industry has fallen into in the past. You mentioned some cases earlier, Peter. We're going to have to learn very quickly.

Peter Gasston 36:32
Yeah, completely. The technology runs so much faster than the legislation does, and rightly so: I think legislation needs to be thoughtful and not hastily imposed. But I also think a lot of people, a lot of people in the industry even, have been so resistant to AI that they haven't taken this very seriously. And if transformation does continue to gather pace, then for the people who haven't been thinking about this for three years, that's possibly going to turn out to be a strategic error. Even if you say we're never going to work with AI, you need to understand it, and you need to know the pitfalls and the attractions and all of this. I firmly believe that, because it is a transformational technology: even if you've opted out of it, it is transforming you.

Ben Walker 37:14
Steph Inez Matthews, Peter Gasston, thanks very much indeed.

Steph Inez Matthews 37:17
Thank you.

Peter Gasston 37:18
Thank you.

Ben Walker 37:20
That's all the time we have for this episode of the CIM Marketing Podcast. If you enjoyed this episode and found it helpful, please consider supporting the show by leaving a rating and review; it really helps grow our reach. The CIM Marketing Podcast is hosted by me, Ben Walker, and produced for CIM by Bryndley Walker, no relation. Thanks again for tuning in to the CIM Marketing Podcast. We'll catch you next time.

Karen Barnett 37:48
CIM training courses cover all marketing subjects and provide you with the confidence to drive real results. Choose now from our comprehensive learning portfolio on the CIM website, under 'Learn and develop'. The contents and views expressed by individuals in the CIM Marketing Podcast are their own and do not necessarily represent the views of the companies they work with.

Transcribed by https://otter.ai