Marketing {Machine}

For more https://marketingmachine.studio

Chapters

00:00 Introduction and Theme Song
02:34 Apple's Dominance in Technology and AI
05:57 App Intents: Exposing Functionality to On-Device AI
08:27 Optimizing AI Models for Mobile Devices
11:06 The Need for More Compute Power
13:10 Privacy, Liability, and Ethics in the AI Space
25:01 Ethical Implications of AI-Generated Explicit Content
27:01 Democratization of Market Research with Synthetic Data
38:58 The Role of Marketers in Distilling the Human Story
42:36 Platform Optimization and Small Businesses

What is Marketing {Machine}?

This is the official podcast of Marketing Machine. We're the podcast that makes you a better marketer. This goes along with our newsletter. Sign up at https://marketingmachine.studio

Mackenzie (00:00)
The iPhone that's in your pocket is 50 times more powerful than the calculators they used to put a man on the moon.

And you use it to look at Instagram models and play Flappy Bird.

Kerp (00:12)
Okay, I just had to record that into the thing so that I can edit it in with Riverside, instead of having to take this out to a second app after we record here. So there's our theme song. What's up, everybody? It's the Marketing Machine Podcast. I'm Adam.

Mackenzie (00:28)
I'm Mackenzie.

Kerp (00:29)
Welcome back to the hang in the Machine shop. Talking about marketing, technology, AI, I don't know, whatever's happening that week.

What do we got? What do we got this week? Do we still want to do intros here? Or just plow into it and count on people to do their research if they care. I'm Adam, I like teaching about stuff. And also we.

Mackenzie (00:50)
Hi, I'm Mackenzie. I like plowing into stuff.

I'm Mackenzie. I've got my combination Pit Vipers and Rick and Morty lab coat in this studio in sunny Kelowna, BC today.

Kerp (01:10)
I've reached the time of the season where the basement where my office slash studio resides is below the geothermal line. So I need a space heater down here during the summer while the rest of the house is normal.

Mackenzie (01:28)
"Normal," air quotes. Here in... do you want to divulge that information?

Kerp (01:32)
Yeah, Virginia, East Coast, central Virginia.

Mackenzie (01:37)
Saying East Coast, like, buddy, you're basically in the Tropic of Cancer, right? It's not cool there.

Kerp (01:41)
True. No, it's muggy. It's summer here, you know; it's the South. But yeah, the basement is chilly. And so I live this funny life of, like, fuzzy robes and stuff, so that I can live in a world of just, okay, pop this off and go upstairs, because I've got to take care of a kid or something. And it's funny, because then I go to business meetings and I'm like, do I just show up in the bathrobe?

Mackenzie (01:55)
Hehehehehe

Kerp (02:09)
Do I put on a lab coat? Anyway.

Mackenzie (02:11)
So I think this is the docket for today. We were discussing it briefly. Apple is running the show this week. They have a lot of really exciting updates, and then we've got some other stuff. We've got a conversation that we want to share with you that went down on LinkedIn, and there was a leak from Google that we might get to if we have time, but the priority is Apple.

Kerp (02:34)
Yeah. I mean, look, feel how you might about their approach to whatever, right? They set a standard for some way of thinking about things, or building technological infrastructure through their hardware, which is dominant. So, you know, as much as you want to say, like, okay, fanboy conversations: I'm a shareholder in Apple. I feel like I should disclose that, at this point, given the size of my holdings, slash I'm speaking

Mackenzie (02:59)
You think?

Kerp (03:04)
publicly about this for disclosure purposes. Investing in Apple has done me well across my career.

But yeah, they're doing it in a very Apple way. So there's a lot of broader topics to talk about, like privacy and stuff. Really, we could get down in the weeds eventually; I've got to take a look at it. But I saw a report talking about how some of the underlying privacy architecture to make their cloud server thing work is, you know, very blockchain-esque. I thought that was an interesting part of that. I think one of the...

Mackenzie (03:42)
cool. That is cool.

Kerp (03:45)
One of Apple's people talked about it. There's like a 250-page white paper about their cryptographic constructs for how to keep this data in a place and have it be... anyway. Lots to say about different aspects of that, but I feel like we should back up first. Yeah, so there's been pressure for Apple to respond to the explosion of AI tools on the market, on the back of

GPTs evolving to a certain point, basically. And the idea that you can build in that intelligence as a service now is the thing that I kind of come back to. And I think there was some subset of people that were sort of waiting for a thing that would look similar to what we already experienced. That is, like, the ChatGPT centralized-app sort of

way of presenting an AI-powered tool; in this context, an LLM-powered tool specifically.

Like, of course Apple is able to execute a layer of that that lives at the user, at the OS level. And so it's a completely different thing for them to be able to say. But, like, I've been saying this for a while in terms of just an evolution: anywhere there's a text box, you can maybe put a translation layer now, because that's where an LLM could plug in. So it's just going to show up everywhere, in these different ways that are very long-tail customized to whatever's...

And, like, Apple's already doing so much of that stuff. But it's not a surprise to me that their rollout is not, here's our chatbot to compete with ChatGPT. It's...

Yeah, we'll give you system level plug -in on this stuff. And over time, we'll get the data to understand how you're actually using these tools and where we need to put whatever.

You recently switched back to Mac, didn't you?

Mackenzie (05:54)
Mm-hmm. Yeah. I've given Apple my $7,000 in the last month. So yeah, I've got two MacBooks and two iPhones. And I'm really looking forward to the Mac Studio and an iPad, because I want to get my draw on. But we'll see. You know, a fool and his money are easily parted, and I try not to be foolish. So, to the AI at the OS level.

Kerp (05:59)
Ha ha ha ha ha ha ha.

Mackenzie (06:22)
The thing that I'm most excited about, and this is what we were talking about right before hitting the record button and getting blasted by that beautiful, beautiful theme song, was the new App Intents feature for Apple developers. Apple is currently in a legal battle over being too good at business, essentially, which is: they've managed to develop a monopoly by saying, hey, we're doing certain things in order to make sure that the quality of the user experience on our devices is at a certain level.

Kerp (06:29)
Hahaha.

Mackenzie (06:52)
And those certain things are, you could potentially say, anti-competitive. But it allows them to do this next thing, which is: they're not trying to fix their monopoly issue. They're doubling down and swinging for the fences with App Intents. So App Intents are a way to expose functionality of a specific application to an LLM at the OS level. And again, there's onboard AI. So that's really weird, because most of our positive

AI experiences are happening at some kind of supercomputer farm, where there are thousands and thousands of GPUs computing all day, all night, generating all this stuff. And then Apple's like, no, we're just going to put it on the phone through an over-the-wire software patch. Right? You can just download an AI. First of all, that's crazy. Second of all, these small models, one of their big problems is that they don't know stuff. They're not big enough to have a large

knowledge base to create answers and get the magic out of. So App Intents is their solution to that: we'll let the owners of the data, which are the application developers, expose what they need.

And also expose writing that data back. So, a quote that we saw recently was a woman saying that she wanted AI to do her laundry and her dishes instead of, like, art and stuff like that. And this is a step in that direction. This App Intents paradigm of saying, "here's all the tools that are available to you, Mr. AI, what would you like to do?" is better than just giving an answer back to a person and then making them copy and paste it into the appropriate Excel cell.
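A rough way to picture that "here's all the tools available to you" paradigm, sketched in Python rather than Apple's actual Swift API. Everything here (the `Intent` class, the `add_reminder` action, the reminder app itself) is invented for illustration: apps register named, described actions, the assistant sees the catalog, and the OS executes whatever action the model picks.

```python
# Conceptual sketch (NOT Apple's real App Intents API): apps "donate"
# named, described actions to a system-level registry; an on-device model
# picks from the catalog and the OS runs the handler on its behalf.
from dataclasses import dataclass
from typing import Any, Callable, Dict


@dataclass
class Intent:
    """One exposed piece of app functionality."""
    name: str          # identifier the model selects, e.g. "add_reminder"
    description: str   # natural-language statement of intent, for the model
    handler: Callable[..., Any]


class IntentRegistry:
    """What an app would expose to the OS-level assistant."""

    def __init__(self) -> None:
        self._intents: Dict[str, Intent] = {}

    def register(self, intent: Intent) -> None:
        self._intents[intent.name] = intent

    def catalog(self) -> str:
        # This text menu is what the LLM would be shown as its tool list.
        return "\n".join(f"{i.name}: {i.description}"
                         for i in self._intents.values())

    def perform(self, name: str, **kwargs: Any) -> Any:
        # The model replies with a tool name plus arguments; the OS executes.
        return self._intents[name].handler(**kwargs)


# A hypothetical reminders app exposing two intents:
reminders = []
registry = IntentRegistry()
registry.register(Intent("add_reminder",
                         "Add an item to the user's reminder list",
                         lambda text: reminders.append(text) or text))
registry.register(Intent("list_reminders",
                         "Return the user's current reminders",
                         lambda: list(reminders)))

registry.perform("add_reminder", text="buy sunglasses")
print(registry.perform("list_reminders"))
```

The key move is the same one the hosts describe: instead of handing text back for the user to copy-paste, the app describes what it can *do* in natural language, and the model routes work to it.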

Kerp (08:27)
Yeah, I wrote some notes about this; I'm trying to find them. Yes, there's a lot of layers to what is happening, ultimately. I'm trying to pull back and go in the right order to get to the significance of App Intents. But, like, starting from just the idea of having to present an answer to what the market has been demanding, to

"here's how OpenAI is doing a thing over here," et cetera, right? Which includes their main competitor, Microsoft, in a lot of ways.

We have to have an answer. They have a whole brand at this point built on a certain level of privacy, backed up all the way to going to bat for users in court to not have to decrypt or backdoor things. At the same time, they need to be rolled out in countries where the government demands that. So that exists also.

But I want to chase it through the lens of what you were saying about compute, the idea that they're a hardware company. And they've been paying attention; they've been talking this way for a while. They never said AI, it was always "Neural Engine," whatever, right? But if you go back, saying that they already built an AI phone is not inconsistent with the narrative and the real work they've been doing in terms of building systems-on-a-chip.

And so now they have this network that can kind of flip on. And so my question for you, from a technical standpoint, is: solving that edge-compute problem is constricted by, you know... like, that's the best place to do the processing from an energy standpoint, but it's also going to be limited by whatever that infrastructure is, compute-wise.

That doesn't mean you can't run certain models, right? And so this is sort of like forcing onto people who want to try to understand this system now. The idea of just like, okay, what's the maximum viable model you could run on this device? Hey, why not? Sort of.

Mackenzie (10:41)
There's, yeah... like, I think that we're at a historical turning point, and we're going to consider the last 10 to 15 years as a dark age of having so much more compute than we could ever possibly use. And there was a running joke for a while: you know, the iPhone that's in your pocket is 50 times more powerful than the calculators they used to put a man on the moon.

And you use it to look at Instagram models and play Flappy Bird. And all of these engineers were intentionally building inefficient software just because it's easy to write. Right? We had so much compute that we didn't need to care.

Kerp (11:09)
Hehehehehe

Mackenzie (11:21)
And so now, after this release from Apple, we're in a position where we're going to start needing more compute. We're going to be using 100% of the phone's CPU instead of the 5% that we have been for the last 10 years.

Kerp (11:34)
And so, talk to me about the actual capability of those models. Because you've played with local models and stuff in a way that I haven't, really. You know, I look at reports and have a sort of academic sense of, okay, here's what Mistral's thing can do, here's what Llama can do running on a Raspberry Pi, blah, blah, blah. Right? Like, there's all kinds of different things working on different aspects of whatever.

To get to the Apple point though, I think the thing is to align that to, okay, so how hard is it to have the best possible image generation model running on device at this point? Which will sort of break down to, okay, just how long does it take for that request to be completed at the level that you care about? That's the part where it seems intuitive to me that they would build a system a lot like they did with socials at one point.

which is just to say, you pick whatever model you want, which is part of sort of that idea of personal context that they introduce, which is like, now there's this fight between how much personal context and data do you want to give up in exchange for an experience where they help you interface with the LLMs. And then here's this whole compliance picture for why this is, you know,

Mackenzie (12:51)
I wouldn't do that if I was Apple.

If it was me.

Kerp (13:00)
It's like making cars. We can keep it safe, but we have to have the rules in place and the standards and blah blah blahs, right?

Mackenzie (13:06)
If it was me at Apple, and they're like, put AI on the phone, I would train up something specifically to make use of the unique features that I would have access to as, like, the OS guy.

And so that's another reason why I'm so excited about App Intents, because I feel like that's what they've done. Right? Like, if you're running an LLM locally, if you're running, like, a Llama 3-billion-parameter model, first of all, you still need like eight gigs of VRAM on your video card to run it at all, because it has to load up all of the weights in the neural net at the same time. That's what all that memory is for. And you can get away with less if you're okay with waiting a long time for each token to come in. So I think what they've done is optimized for minimal kind of token

I think that they're going to be calling other AIs. Like, the main onboard AI is going to be calling other things if there's a heavy generation task, like "translate this" or "rewrite this." That's probably going to be happening off-device. But for tasks like, what appointments do I have today?

And which one do I have to care about? That's something that a local LLM could probably easily handle. Because with LLMs, the big magic trick of what's going on at OpenAI is that it seems like it knows things, but it honestly doesn't. You should think about it as a linguistic calculator that transforms some language into some other language. So "here's a list, transform it into a list ordered by what matters the most to me" is a much more appropriate task for an onboard, or honestly any, LLM than something like "what are the most important

tasks that I could do today. Interpolation is easier for it than generation. Does that make sense?
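To put rough numbers on the memory constraint Mackenzie mentions (the "eight gigs of VRAM" for a 3-billion-parameter model): a model's weight footprint is approximately parameter count times bytes per parameter, which is why quantization is what makes small models fit on phones. This is a back-of-envelope sketch only; it ignores activation and KV-cache overhead, and the precision labels are just the common ones.

```python
# Back-of-envelope: a local LLM keeps all its weights resident, so the
# footprint is roughly (parameter count) * (bytes per parameter).
# Ignores activations and KV cache, which add more on top.
def model_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GiB (1 GiB = 2**30 bytes)."""
    return params_billion * 1e9 * bytes_per_param / 2**30


# A 3-billion-parameter model at common precisions:
for label, bytes_per in [("fp32", 4), ("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{label}: ~{model_memory_gb(3, bytes_per):.1f} GiB for weights")
```

At full fp32 precision a 3B model already exceeds typical phone memory, while a 4-bit quantized version fits comfortably, which is consistent with the trade-off described here: smaller, faster-loading weights in exchange for quality and "knowing stuff."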

Kerp (14:43)
Yeah. Yeah. And because of that, the part where, you know, like you're saying: basically, it's easier for it to do personal-context stuff, where it's sort of, okay, I ingested the entirety of your email communications, now I'm going to remix some stuff. That is a different sort of operation from, okay, now generate new things. Which is interesting, but you're right. So yeah.

Mackenzie (15:11)
Plan me a business that will make a billion dollars this year is like a common thing that people give to AIs and it's not that good at that kind of thing.

Kerp (15:14)
And so then.

This is the part where it gets to, I think, what I've heard other scholars call, like...

Mackenzie (15:25)
other scholars.

Kerp (15:26)
Other scholars. Academics. I don't know, we're talking about it just for the sake of talking about it. That's good enough. Good enough for Socrates, probably. Yeah, that's right. That was an appeal to Ancient Greek Academy. What the fuck was I talking about? yeah.

Mackenzie (15:28)
as if I'm in that set to begin with.

Sure, sorry.

Hehehehe.

according to other AI scholars.

Kerp (15:55)
Yeah, OK, that's what they would call long-tail customization: this idea that there's this customization layer for different applications, that can happen at certain points, that I've been talking about for a little while. And there's this tension between: is that a thing that wins, versus a generalized model that you can just hit with a request with enough specificity and it will be able to do it. And...

Well, for one, I think it's silly to act like that's an either-or; it's a both-at-the-same-time, for different reasons. But that's the part that gets you to App Intents. Because now we're going, okay, if third-party apps want to be able to provide personal context, here's the construct that makes Apple feel safe about that interaction. It's a game where Apple can win. It's why the App Store won, right?

Mackenzie (16:31)
contextual.

Kerp (16:54)
What I think is interesting is, as we start talking about intents, it really... it's an example of the LLMification of everything, to an extent. Which is, we're having to build the bridge between "here's what the code for this button says" and "here's what we intend to have happen when you click that button." And that starts to get into lawyerville. Because that's, like,

Mackenzie (17:19)
Hehehehehe

Kerp (17:20)
okay, how would we memorialize this in a contract among humans, using squishy language where they all misunderstand the words, and we're going to do the best we can and fight about it in court later if there's an actual conflict? That's lawyerville all of a sudden. Which is not to say that these are fraught or anything; it's just interesting to see it hit that layer of asking the apps to articulate: okay, but in natural language, what are you trying to do here?

so that we can build an API endpoint that maybe works for you slash you can build it yourself to be able to have the knowledge and ordered data that Apple has about the contents of my email addresses in a theoretically privacy compliant way.

Mackenzie (17:57)
Mm -hmm.

Another important thing from their communication about this, and this is the email that went out this morning on App Intents, was composability. So you can layer these App Intents on top of each other in different orders to get different results. And that's also something that the LLM would be capable of doing. So: on-the-fly composable, kind of atomic operations between all of the apps that you use for work and pleasure.

I hope that Tinder exposes a send-message intent. I would love AI to do it for me, and we could vindicate Žižek.
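The composability idea can be sketched as chaining atomic operations whose outputs feed each other, so an assistant can order them differently for different results. This is a toy Python model, not Apple's API, and the intents shown (fetching appointments, filtering, summarizing) are invented:

```python
# Toy sketch of App Intents-style composability (not Apple's real API):
# atomic steps whose output feeds the next step, chainable in any order.
from functools import reduce
from typing import Callable, Iterable, List

Step = Callable[[List[str]], List[str]]


def fetch_appointments(_: List[str]) -> List[str]:
    # Hypothetical calendar intent returning today's items.
    return ["dentist 9am", "standup 10am", "gym 6pm"]


def filter_mornings(items: List[str]) -> List[str]:
    # Keep only morning ("am") items.
    return [i for i in items if "am" in i]


def summarize(items: List[str]) -> List[str]:
    return [f"You have {len(items)} morning appointment(s)."]


def compose(steps: Iterable[Step]) -> Step:
    # Later steps consume the output of earlier ones.
    return lambda seed: reduce(lambda acc, step: step(acc), steps, seed)


pipeline = compose([fetch_appointments, filter_mornings, summarize])
print(pipeline([]))  # → ['You have 2 morning appointment(s).']
```

Reordering or swapping steps yields a different pipeline, which is the point: the assistant gets a vocabulary of small verbs it can recombine, rather than one monolithic action per app.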

Kerp (18:48)
Yeah, the composable-ecosystems thing is one of the things that I expect out of this white paper. Like, okay, this is what happens when you hired a bunch of blockchain engineers when that thing was happening. Because, I don't know, there's something real here, and now they're in there going, yeah, this is how we have to solve that problem, in a possibly zero-knowledge way. Like, lots of those projects are still happening, going in the direction of

Kerp (19:17)
I do think what they're talking about is feasible technologically. We trust legacy systems with all kinds of information about ourselves, in banking contexts and significantly regulated contexts. Yeah, composable ecosystems. That gets to the interesting thing that's always

where I want to push it, which is: okay, where's the more risqué stuff going to live, and what's their mental model for that? Because right now, Apple's not going to stop you from going to a website that contains pornography. That's the portal, currently, through which Apple is selling you porn, or at least horny things.

Or email, I suppose. Email is another conduit, because people are getting newsletters. This opens up this other thing, which is: okay, is there a context where you want to be able to open up a portal that is not subject to X, Y, and Z limitation under the law, blah, blah, blah? You can see them doing the same strategy that Apple has effectively done in the past, which is to say, okay, where are the plug-in points where we can dump off liability for this aspect of that?

And on a lot of stuff, the cleanest thing is that they actually mean this. It's cleaner for them to be able to say "it's encrypted, we can't do anything" to the US government, for example, than it is for them to try to comply with backdoor things that become massive security risks.

Mackenzie (20:50)
I'm imagining a more mundane failure case. Like, let's say that I give a general instruction to my local LLM to deal with my emails, and I get an aggressively worded marketing email from good old Pit Viper. And they do this! But they're like, somebody's going to shoot my dog if you don't buy sunglasses right now. And the LLM would be like, no, please save the dog! Here's my user's credit card.

Kerp (21:14)
Ha ha ha ha.

Mackenzie (21:15)
Like, because it can; the App Intents are all there, right? Open the link, open the store, add the payment information, submit the purchase. All those App Intents are definitely going to get written. So there's a question of autonomy, right? Like, how often do we want to be approving the actions? Because if it's something mundane, like respond to my emails... right, like, let's say that I'm working help desk,

Kerp (21:17)
Yeah, right.

Mackenzie (21:36)
and I get emails from sales staff that are trying to circumvent my ticketing system, I just want it to respond yes. Right? So, what's the limit? How do we compose these things so that it's not composing stuff that I actually don't want to do? Right? Like...

Kerp (21:49)
And some of that is the monopoly-lawsuit questions that you were talking about. So some of this is just going to get duked out in court. Then some of it is just inevitable on another side, because of open source; there's momentum behind a lot of this stuff happening anyway. And then, yeah, for me, it just gets to those funny questions of, like, okay, so...

If the idea here is, currently, they're building in the ability to dump out to OpenAI, can you pick from, you know, whatever, one of those... you know, Pornhub? And are we close to a model where you can... because this is a crazy thing with generative AI that's already happening, right? They're giving you a local, as-good-as-they-can-make-it version of, hey, generate a picture of this person for their birthday card. Right? That means it's got to look at that person and make a cartoon of them,

which is a thing you can do currently, but some of the models are limited so they can't do that. And then the next one past that is: okay, now you can generate fake pictures of the likeness of people that are contained in your photos, your camera roll or whatever.

Do they let you do that or not? Do you have a right to or not? That's already starting to be sort of coded as a norm into models, to align with different laws or different moral codes, however it falls wherever you are.

Mackenzie (23:27)
That's why I kind of like the server model, because somebody is responsible, right? So, for example, with the AI music things that are happening: I am a supposed failed musician, but it's something that I keep around as a hobby. And one of the requirements is that if you're uploading something for an AI to interpolate into more music, you have to have the rights to it. And so you sign a waiver: I have the rights to this. If it turns out that you actually don't, through human review or automated review later,

you lose access, right? So, shipping it to the phone: are we going to have granular access on the phone, and Apple's flipping switches and taking away our AI here and there? What does that have to do with, like, if I'm using my device for work, and then in my off time I'm doing something maybe I shouldn't be doing? Are we entering a world where corporations are judge and jury over somebody's ability to make a living in a digital age?

That's weird. An additional anecdote: a woman I was dating recently asked me to make pornography of her. She didn't want to do the acts, but wanted those artifacts for commercial purposes. And to be honest with you, I couldn't figure it out. So it seems like something that people would want to do if they were particularly motivated. I wasn't. But even in, like, most things...

I don't have to go out and find out how to do this, right? Like people will come to me and say, hey, you can make AI music. Hey, you can make AI art. You can make AI videos. Like it gets distributed through the world. That kind of feature is either for very serious individuals or it does not exist.

Kerp (25:01)
There's other weird, interesting precedent for this. Like Photoshop won't let you work on pictures of money.

Mackenzie (25:08)
Hehehehehe

Kerp (25:08)
Like, so some of this already kind of happens at the software level. I don't have answers yet, but one of the things I'm interested in is that philosophical layer of just, okay, if your partner doesn't care, and they're like, yeah, you've got pictures of my face, have at it... is that weird? Should Apple help you with that? Or have you plug into another thing that does help you with that? Or, I don't...

Mackenzie (25:36)
Or like, what if it's my business? So there's an argument of harm reduction,

Kerp (25:38)
You know.

I realized that I never finished my thought about the basement. I mean, I did, because I talked about the robe when it started. Because I'm talking about... I don't have my Pit Vipers, but I do have this stupid cycling hat, which is a really good solution to bald-guy-basement-in-the-South problems.

Mackenzie (25:58)
Hehehehehe

Kerp (26:01)
I need some fabric, but a whole ass hat is too much.

Mackenzie (26:06)
No, I think your hat is a good middle ground. It's a conservative article.

Kerp (26:16)
Anyway, what was the last one?

Mackenzie (26:20)
We finished on Apple, with how their kind of terms of use might get in the way of activities that are both distasteful but also harm-reductive. And then we were going to move on to the synthetic marketing data.

Kerp (26:33)
Yeah.

Yeah, so, moving on from that one. The second one we talked about was a LinkedIn post talking about market research, and some chatter that happened there in the comments. We'll link to it in the show notes. But it was about the idea of synthetic market research, right? The idea that,

you know, firms go out into the market and do surveys to try to provide data to help answer business questions. Part of that is what I, in this conversation, ended up calling simulation, but it's going out and doing market research

to try to say, okay, we think this message or this product or this whatever resonates. And that's kind of the thing where, above a certain budget in marketing, you maybe have access to it or not. And so one of the things that came up was the democratization of market research. But then the question is: okay, does synthetic market research, however it's pieced together out of AI parts, actually work, now that we sort of have this word-prediction-engine LLM tool to use?

Yeah, there was some back and forth, but you made a good point.

Mackenzie (27:59)
I took like a psychological approach when I was studying marketing rather than like a data -driven approach because I couldn't afford data, like a lot of other people, but I could afford psychology textbooks. I could learn about coping mechanisms and trauma and emotional responses and stuff like this, which it turns out is like a lot more instrumental in decision -making. Like data helps you find the weird edges of the general trends that psychology would lead you to believe are there.

People make purchasing decisions, or the lack thereof, mostly to avoid feeling stupid. The sentence is literally "I'd have to be an idiot to buy this," or, vice versa, "I'd have to be an idiot not to buy this." So there's a complex range of shame and social interaction and stuff like this going on in a purchasing decision that I think data can a little bit obfuscate.

And then also, because it's shame-based, standing up a synthetic avatar, or collection of avatars, to ping potential ad messaging against, I think, isn't going to work, because people aren't going to publish the things that make them feel ashamed. I just can't see where you would get a primary source

that would support a realistic model of human behavior, because it is dark and messy, and the things that people would post... So again, in this original discussion, a former director of something, some Silicon Valley word, at LinkedIn was putting this together. So he had seen the primary data set of, like, this is how people behave on LinkedIn. But the way that people behave on LinkedIn is not the way that they behave when it's seven o'clock at night and they're buying a lab coat on Amazon.

Kerp (29:40)
Ha ha ha ha ha.

Mackenzie (29:41)
So my question is: how do you get an LLM to ping things off of, or any other kind of AI... how do you know that the transformer actually is modeling what a person would do? And one of the answers to that is data. Like Kerp said, we can put together:

I put out this ad, and this cohort of people did this. We had this many clicks, this many conversions, this much watch time, all that stuff. This ad got this much stuff from that same audience; that ad got this much stuff from that same audience. So you can start to build something that way, but that's not really interpretable by an LLM. And it's not helpful for rapid testing. Do you see why I'm suspicious of this synthetic data?
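For what "build something that way" could even look like, here is one hedged sketch: flattening per-ad cohort metrics into compact text that an LLM could at least condition on. The field names and numbers are entirely invented for illustration, which is also Mackenzie's point; this is performance data, not a model of why anyone clicked.

```python
# Sketch: serialize ad-cohort performance into a compact text summary
# an LLM could read. All metrics and names below are made up.
ads = [
    {"name": "Ad A", "clicks": 340, "conversions": 21, "watch_secs": 5.2},
    {"name": "Ad B", "clicks": 310, "conversions": 34, "watch_secs": 7.9},
]


def to_prompt(rows):
    lines = ["Same audience, two creatives:"]
    for r in rows:
        rate = r["conversions"] / r["clicks"]  # conversion rate per click
        lines.append(
            f"- {r['name']}: {r['clicks']} clicks, "
            f"{r['conversions']} conversions ({rate:.1%}), "
            f"avg watch {r['watch_secs']}s"
        )
    return "\n".join(lines)


print(to_prompt(ads))
```

Even serialized this way, the data only says what happened, not what a person would do next, which is the gap the synthetic-avatar pitch glosses over.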

Kerp (30:27)
Yes, and there's a few pieces to pick apart, I think, in terms of piecing together, okay, what's useful for a marketer trying to figure this out right now. Which is, there's

synthetic data, which is the idea that this is producing data that you're going to use to do other things that will be dependent on that data set down the road, right? Which is a higher-level way of looking at it, in terms of the structure of the whole machine. A step back from that, though, is, so, like...

You can already kind of do this if you have the budget to run ads just to drive traffic top of funnel. (I just unplugged my monitor, or headphones.) And you can start to, in kind of very concrete ways, test: hey, does this way of saying it work better than this way of saying it? And so there's a

step back from it that gets me to, like, okay, what are you needing to replace in the system? And what level of MVP providing the value of that function, in a business context, is good enough? And if AI can help at that layer, then people are going to use it to

do better at business and try to understand these things, right? So picking apart the idea of, okay, what does this market research look like, comes back to an interesting question, which is just: okay, is it better than the alternative viable solution for this? Which in a lot of cases is, like, no, I do not have a hundred thousand dollars a year to pay to a marketing firm to get access to their research on this, where they've talked to 10,000 people.
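The concrete "does this way of saying it work better" test Kerp describes is usually checked with a two-proportion z-test on conversion counts from the two creatives. A minimal sketch with invented numbers (variant names, counts, and the 0.05 threshold are all illustrative assumptions):

```python
# Two-proportion z-test: did variant A and variant B convert at
# genuinely different rates, or is the gap plausibly noise?
from math import erf, sqrt


def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for H0: equal conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via the error function: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value


# Variant A: 120 conversions in 2,000 views; Variant B: 85 in 2,000.
z, p = two_proportion_z(120, 2000, 85, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these made-up numbers the gap clears the conventional 0.05 bar, which is the DIY version of what a market-research firm's survey panel is selling at much larger scale.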

Mackenzie (32:07)
I hope they do.

Kerp (32:33)
But also, 10,000 people is not that many people in a supermassive-statistics world, which is where this starts to get interesting to me. That idea that, like, Google knows if there's a flu outbreak happening because people search for more medication, and the margin of error is real squishy because of the intent signal and the size. But within that margin of error, they're confident enough to be like, yup, flu outbreak in Philadelphia. It's a data set you can get from them.

Mackenzie (32:42)
That's another point.

Kerp (33:02)
I think on some level. So, like, it does seem to me like there's a way to start building a data picture, which marketers already do. It's what all the data brokerages are about. And then align that to, like, it sure does feel like conceptually some version of this thing is just better than doing regression analysis on data sets about demographics and firmographics and technographics and psychographics. And you can buy so many

big-ass data sets around that kind of information. So I ended up backing up to, like, okay, campaign polls, right? We talked to 5,000 people in a geo-fenced area on the phone. We asked these six questions. We recorded their responses. Here's the sentiment analysis score, right? This really seems like the kind of thing that we just spent the last half hour talking about being able to do on some level, on some version of hardware that deploys.

Also, we just talked about how that's going to be better at this than what we have as an alternative, which is literally talking to meat humans. And so, like, I think it's easy to see how the synthetic data becomes problematic. And, you know, there's that idea of data collapse, right? Does it get to be crap and just useless because it's bland, because of data collapse? Or does synthetic data actually drive in a direction where it's like,

Mackenzie (34:14)
I feel like there's different kinds of...

Kerp (34:29)
Yeah, but...

this has been effectively simulating X thing. So there's an interesting place, to get real wonky with it before we run out of time. Like, you can start to imagine using all of those data sets to build out a thing where you just say, look, here's all, you know. But just the lens I use, and I don't know if this is the right solution, is custom GPTs. The experience of making one of those on OpenAI right now is literally just sending a paragraph of text to the AI, and it

And if you take the time, you go back and forth to fine-tune it, but you could do it in one load and say, spin up a new GPT to match this. And so at what statistical scale, if you hit 200,000, and every single one is an avatar for a cluster of data that we have about people because of census data and stuff like that? It sure seems theoretically feasible to me that you get 200,000 respondents that represent the 200,000 people in this town,

to the extent that, you know, political analysts are able to cull that down and then say,

Mackenzie (35:37)
Yeah, so, like, LLMs are just bland, right? Like when they're answering questions, they kind of always shoot right down the middle as much as they can. And so this is useful, and it is better than primary data strictly in terms of cost, strictly in the context of something like a political analyst or something similar, like an FMCG or a CPG,

Kerp (35:45)
Right.

Mackenzie (36:00)
or something like that. Like, me and all my boys go spearfishing, right? We're not net fishermen. We don't say, well, we have an audience of 7 million people and we're done. We pick markets that have like 700,000 globally at the most. So.

That's just the experience that I have. For something like that, I don't see it. So, like, a case is my friend's golf ball printing business. If he wants to test ads on customized golf balls, he's going to have a really hard time building the ideal avatar, because it's a local business, right? His competitive advantage is the speed to deliver the print. Obviously anybody in the world can get their print from China or Pakistan or wherever, six weeks from now, for 40 cents, right?

But if somebody has an event in two weeks and they forgot to invent a time machine and put in their order six months ago...

then they're going to go with Mitchell, but that only makes sense where he lives. So I don't think that somebody like that, who's very much spearfishing, with a niche product, niche audience, and restricted geographic region, is going to get a lot of benefit out of synthetic data, even though he's the kind of small team that would not have the budget to get the primary data in the first place. The teams that are able to get the most out of this synthetic data are the teams that can already afford the first-party data. Do you have, like, a counter case to wrap up,

or as we move towards the end here.

Kerp (37:25)
Right. Yeah, I think it doesn't mean it doesn't inject itself into that set. It's just maybe it doesn't look like the models we've tried to squish it into so far. The reality, though, is if it's inside of marketing systems, even if it's just Google optimizing their ad interface, right, being able to, like...

This feels like a bridge layer that gets us to, if only from the expectation of the user, some layer of this stuff that has previously been interface decisions made by UI/UX designers as part of a product team doing their best research. It can get closer to being this sort of customized idea where it's like, I don't pick the CTA button colors, because the interface does, based on what it knows about you and what button you would rather

click on. And it starts to turn into behavior design and other ways of thinking about aspects of marketing that bridge more into video games and that kind of stuff. And then that hits small teams, because small teams are like, I don't need to think about the color of the button. And this turns into the platform bridge, right, where it's like, I just let Facebook handle it, and work comes in locally, and I don't have to think about having a website. A lot of businesses run that way, right?

Mackenzie (38:54)
You know whose ads, like, kill it? When you're reading a local news site and it's like, this woman is making tons of money at the casino, click here to find out how. Those guys look like a real news story. The native advertising is on such a next level. And it's like what you're saying: what they're doing there, making it look native and dialing it in for the audience everywhere on autopilot, is what you're thinking this is going to turn into.

Kerp (39:11)
So I think in the small...

Probably. Like, if the idea is you're gonna teach a Jarvis, at some level of security and comfort and edge compute, about how you want to move through the world, it's gonna start to meld in a way that feels like, yeah, why not? I don't care what color the button is if they're giving me a more pleasant experience, even if that pleasant experience is one of consumerism.

Like, so...

I think it ends up working its way in there in terms of, like, what does it mean to have a website for your small business? It means a lot of stuff is just handled for you and optimized for you. And that's awesome for you. So the democratization part is less about the idea that your buddy you're talking about is going to use it to simulate business decisions about the golf ball business. But it may make it way easier for him to

Mackenzie (39:55)
Yeah.

Kerp (40:20)
just forget about aspects of it that have nothing to do with it. When you're a small business owner, it's like, I just like printing golf balls, but I spend most of my time... This has happened to me as the owner of a small business a number of times, which is, eventually I get to the point where I don't get to make videos anymore, because I'm busy running the accounts to make enough money to pay for the people in this three-to-five-person agency situation. It's not that I want to

Mackenzie (40:32)
I see. Yeah.

Kerp (40:49)
do my own marketing, even though we're a marketing agency; it's that I have to do aspects of it. And then I have to do sales aspects. And there are pieces of that where, I think, it brings it full circle to what we were talking about earlier, the part where you want AIs to do your laundry and stuff. There are business versions of that where it's just like, I just want the marketing to happen so I can sell golf balls. I got into this because I enjoy hanging around the warehouse, working with a small team,

Printing shit on golf balls.

Mackenzie (41:21)
So what you're thinking is that the synthetic data is at the platform level. Like, Mr. Golf Balls walks up and he goes, hey, make me an ad. And then the platform, Facebook, goes, I know who likes golf balls and I know what they like, so we're going to use this marketing data to build something.

Kerp (41:37)
They like turf-green buttons because they're reactive to them, whatever. And Facebook will just handle it. It'll be built mostly into ad platforms and stuff. But that's why it gets back to the art piece, right? Because the answers start to get really into the space of, like, art-world bullshit. But it's like, because we are the tastemakers.

Mackenzie (41:46)
So then what are we going to do as marketers?

I'm going to go.

Yes.

Kerp (42:06)
We are the distillers of the human story into a relatable narrative, in a way that the machines don't quite do yet, such that there's a human-ness to it. We've optimized the part where, once you're like, interesting, we're going to push you down the funnel using optimized means. But we still gotta get you to pick up the... It's like, there's almost an analog-ish analogy, which is, we gotta have a badass cover on the video game or you're not going to pick it up off the rack at

Mackenzie (42:23)
Mm -hmm.

Kerp (42:36)
Best Buy. It's still that layer, and it's an ongoing thing, like, will that get consumed or not? Anyway, we are out of time for this week. As usual, we agreed to talk for 20 minutes before we started doing this, and here we are at 45. But that's fair, as long as it's not another hour. It's a podcast. Thanks for hanging around, everybody. This has been the Marketing Machine Podcast. If you want to subscribe to the newsletter that goes with this,

you can go to marketingmachine.studio and find all the links. I've been Adam.

Mackenzie (43:14)
I've been Mackenzie, thank you for listening.

Kerp (43:16)
Take it easy, everybody.