Always Be Testing

Guiding you through the world of growth, performance marketing, and partner marketing.
We sit down with growth and marketing leaders to share tests and lessons learned in business and in life.

Host: Tye DeGrange
Guest: Dominic Williamson
Hype man & Announcer: John Potito

Timestamps:
00:06 Introduction
00:50 Austin Events Buzz
01:58 Analytics Expert Intro
03:45 Geo-Level Targeting
05:49 Geo-Level Holdout Tests
08:34 TV Advertising Challenges
11:13 Digital Channel Flexibility
14:12 Marketing Spend Insights
17:25 Evaluating Media Effectiveness
21:01 Bridging Finance and Marketing
26:16 Advanced Media Measurement
33:12 Third-Party Tools Analysis

What is Always Be Testing?

Your guided tour of the world of growth, performance marketing, customer acquisition, paid media, and affiliate marketing.

We talk with industry experts and discuss experiments and their learnings in growth, marketing, and life.

Time to nerd out, check your biases at the door, and have some fun talking about data-driven growth and lessons learned!

Welcome to another edition of the Always Be Testing podcast with your

host, Tye DeGrange. Get a guided tour of the world of growth, performance

marketing, customer acquisition, paid media, and affiliate marketing.

We talk with industry experts and discuss experiments and their learnings in growth,

marketing, and life. Time to nerd out, check your biases at the door,

and have some fun talking about data-driven growth and lessons learned.

Hello, and welcome to the next episode of the Always Be Testing

podcast. I'm your host, Tye DeGrange, and I am really thrilled to have with us today Dominic Williamson. What's up, Dom? Hi, Tye. Good to reconnect. Yeah. Absolutely. I'm glad to have you. And so thanks for

joining us. I'm here in Austin, Texas. It's an exciting week here. We have F1 coming into town. We have a marketing land event here tomorrow, and then there's an All-In Podcast one hundred fiftieth episode gathering in town. So there's,

like, all kinds of fun little things popping up. Actually, as I walked into this episode, I heard Bill Gurley's voice while they're doing startup pitches here at the Capital Factory in Austin. My ears perked up. I'm like, wait a second. Is that Bill Gurley? And it was. He's just about twenty yards away, for the startup fanboys out there. Sounds fun. I'm in San

Francisco. I don't think there's quite as much going on this weekend, but this is the nicest time of year from a weather perspective. I know you were here before, but, yeah, during the summer it's cold, and now it's unseasonably warm, so it's nice. I love it. Yeah. It's a great time to be in the Bay

Area, for sure. It looks like it's been beautiful. Well, I'm super excited to dive

in with Dominic today. He's such a seasoned analytics pro.

So you and I were on the same team, larger team in Internet marketing at

eBay, and you've led strategy and analytics for Facebook,

for FanDuel, for Compass, for Instacart, like, an insanely

awesome resume, and very accomplished. So amazing to have you, and excited to just dive into all the fun data things today.

Cool. Looking forward to it. Absolutely, man. Well, cool. Like, maybe

super basic level, when you kind of break down what you do in

strategy and analytics, like, how would you kind of just describe it to a fifth

grader at a basic level? Well, my son is a fifth grader, so this is not entirely theoretical. I essentially look at the spend. When people spend on marketing and advertising, media in particular, I investigate it with different

techniques to understand whether or not it was worth it. And then the step beyond that is

to make that investment better. So where are the areas where we could invest more? Where could we invest less? And then within each

of those channels, how can we do better? So it's really making sure that the money that people spend on

marketing is as effective as possible. And it's mostly from that objective viewpoint. It always sits somewhere between finance and marketing, to say, well, here's the money. Here's how you spent

it. Was that a good idea? Where else can we spend it? Yeah. That's really interesting. This

is such a great one. What are some of the big, maybe, learnings you've had? And you don't have to necessarily name names, but, like, what are some

of the course corrections you've seen at maybe the macro level? Or what are some of the,

suggestions or recommendations that have come out of those, hey, was this spend

valuable or not? Yeah. I think some of it won't come as a surprise to you at all. I think the big piece, and this is not a secret, was at eBay doing geo level targeting to understand what the impact was. The struggle that you

have with attribution is that it's so nice and convenient, and it gives you exact numbers. And you can look at them every

week, and you can draw charts, and you can present your reports. But the

underlying question of, well, what is this really showing us is the one that I think testing

is the answer to. And I think, as you know, testing is key to understanding the true incremental impact of everything. And I do think geo testing at eBay was a really big starting point. And I think at the time, it was very new for the

industry as well, the idea of looking at something that had been traditionally measured through last click

and pretty much last click alone, and taking it from an entirely separate angle, which essentially

ignores all of that information and says, well, what about these geos versus these geos? What's the difference

there? So I think that was a really big piece in understanding that things could be done in a different way from a measurement perspective, but then also, obviously, in an operational way. I think it was the acid test of whether or not spend was as effective as we thought it was, and that was a great learning there. And I know it's in the public domain now, the geo level tests. Steven Tadelis and team published that. So it's one that I think I do see on occasion still being brought up. But I think the whole industry has moved on from that now. That was the great first step, and everything else has been building on it since then. But I think that was the first real eye opener. Yeah. You were kind of part of that pioneering wave, to be candid. And I think it's a testament to you and a lot of your teammates and the contributions you've made. If you don't mind, like, for folks that are maybe a couple steps removed from some of these incrementality tests and the

measurement of marketing and seeing if it is valuable or not. Is it safe to say

that the geo level test is a type of holdout test? Is that correct?

It's a type of holdout test. And the nice thing about it is that it's also just a campaign that

happens to be on in a certain geo. So in that sense, it's very transparent and

intuitive for people. And it's a great way to introduce testing, because

everyone understands this idea of, well, if this geo is on and this geo is off, I should expect to

see an impact at the geo level. So, yes, it's kind of like a user level holdout, if the users were geos rather than people. And then, obviously, that comes with its drawbacks as well, because there's a lot more noise when you only have the two hundred or so geos that you can be using in the US, which is a lot less than the millions of users that you might have. So it loses something in terms of readability, but it gains a lot in terms of intuitiveness and

also just being able to operate against it. You can't actually always operate against the user level because

you don't always know exactly who everyone is and who's seeing what. You usually do know where they

are, and so it does make it easier to kind of execute. So as a rule, I would always prefer to do a user level holdout test, because you control for the most things there. But in practice, geo level tests are really helpful and often the only option for some of these things. But, yes, to answer your question, it's a holdout test. The holdout is an area rather than a set of people. Well, it is a set of people, but it's a set of people in its own

area. And for those not aware, it helps evaluate if there was

a measurable lift in the geo that received the ad versus not. And

so, therefore, the brand can go, well, this worked or didn't, generally. Correct?

Exactly. So let's say you're using the US. You target a certain portion of the US, and maybe it's at state level, maybe it's at DMA level, a DMA (designated market area) being kind of the smallest level at which you can execute a TV campaign. And maybe it's at those levels where you split them up and you say, these people will see it, these people won't, and you expect to see a lift in the area that did see it. So it's a great way of doing a test on the sly, because you could just call it a geo campaign.

The difference between it and an actual geo campaign is that you want to, as much as possible, randomly select your geos rather than selecting New York and San Francisco and LA because they're the big ones, because that's going to skew it. But, yeah, it's a nice way of working with a team, a marketing team in particular, to ensure measurement, but not make it measurement first, let's say.
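
To make that concrete, here's a minimal sketch of how a geo-holdout lift read like the one described here might be computed, written in Python. The data, geo names, and numbers are all hypothetical, not from the episode; a real analysis would also use pre-period baselines and a significance test, since a couple hundred geos is a noisy sample.

```python
import pandas as pd

# Hypothetical geo-level results: one row per DMA, with total
# conversions during the campaign window and a test/holdout flag.
df = pd.DataFrame({
    "dma": ["A", "B", "C", "D", "E", "F"],
    "group": ["test", "test", "test", "holdout", "holdout", "holdout"],
    "conversions": [1250, 980, 1100, 1010, 870, 990],
    "population": [1.2e6, 0.9e6, 1.0e6, 1.1e6, 0.8e6, 1.0e6],
})

# Compare per-capita conversion rates so big geos don't dominate.
df["rate"] = df["conversions"] / df["population"]
test_rate = df.loc[df["group"] == "test", "rate"].mean()
holdout_rate = df.loc[df["group"] == "holdout", "rate"].mean()

lift = (test_rate - holdout_rate) / holdout_rate
print(f"Estimated lift: {lift:.1%}")
```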

Interesting. That makes sense. And then, am I correct to assume you're kind of selecting geos that have characteristic similarities? How do you make sure those aren't skewed, to your New York point earlier? Yeah. There are different ways. So in

theory, you've got two hundred DMAs, let's say, and you want to randomly split them in half and use them. In practice, that doesn't really work, because there are so few of them that you rarely get a great match that way. So you have to do a bit of what's called stratified sampling. You can say, well, here's a group of kind of tier one geos and here's tier two and here's tier three, and then kind of randomly sample within that to make sure that you do get a good selection. You're often working with the agency, if you are working with one, or with the buyers, to say, what can we actually do? It's not always possible. And that can sometimes, I'm thinking more from a TV perspective right now, but that can sometimes have an impact, where they say, well, we've only got inventory in these five places. Is this gonna work? And

there's always, I think, a line between doing something because it's a practical thing to do and doing something for the purposes of measurement later. And part of my role, I think, is to make sure that we're not just forcing it into a measurement structure that actually makes it less effective, but that we can find that happy medium between the two, where we're investing wisely from a how-big-an-impact-can-we-have point of view, but we're also doing it in a way that's measurable.
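
As a rough illustration of the stratified sampling described above, here's a sketch that randomizes DMAs into test and holdout within size tiers, so the big markets don't all land on one side. The tiers and DMA names are hypothetical.

```python
import random

# Hypothetical DMAs bucketed into tiers by size; randomizing within
# each tier keeps the test and holdout groups comparable.
tiers = {
    "tier1": ["New York", "Los Angeles", "Chicago", "Dallas"],
    "tier2": ["Austin", "Denver", "Portland", "Nashville"],
    "tier3": ["Boise", "Tulsa", "Madison", "Chattanooga"],
}

random.seed(42)  # fixed seed so the assignment is reproducible
test, holdout = [], []
for tier, dmas in tiers.items():
    shuffled = random.sample(dmas, len(dmas))  # shuffled copy
    half = len(shuffled) // 2
    test.extend(shuffled[:half])
    holdout.extend(shuffled[half:])

print("Test:", test)
print("Holdout:", holdout)
```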

The geos are a big question there, because a national campaign, on TV again, can cost about the same as covering seventy percent of the country. So if you're doing a thirty percent holdout, you're just losing thirty percent of your reach. And so you have to get that balance right: do I need to measure this, or do I want it to have as big a bang as possible? It's picking the right path, and you can't walk both paths simultaneously. You can't have as big an impact as possible and as measurable an impact as possible at the same time. So how do you structure it to strike the right balance? I love that sentiment because I think oftentimes brands and marketers

get really ahead of their skis and excited about Mhmm. We're gonna go really

ten out of ten on attribution and incrementality, and they don't always

recognize, like, what is the cost of that in media, in hours, in time, in limiting that reach a little bit. Right. And then to your

point, is that investment actually giving them the long

term and short term return that they're asking and wanting? For example, if it's a product

launch, do you need a holdout? Are you going to do this ever again? Do you need to understand

and replicate? So you might not wanna hold out at all in that situation. But, conversely, if you're doing something where you think, I'm gonna keep doing this every three months for the next three years, I should know what it's really doing, and I should have a better read of how much return I'm getting. So I think those strategic questions need to be answered first before you then design, how do

I actually execute this? I love that. You kinda reference TV, you know, a fair

amount in that, you know, example. Do you find that when you're

kind of counseling performance marketing teams, that maybe other channels are similar in their ability to do those types of holdouts and lift tests? And I'd be curious to hear what channels you like to do that with and

which ones might be more difficult. Other channels are usually better. The reason I keep mentioning TV is that geo is pretty much your only option there. If you are doing TV and you want to hold out, geo is pretty much your only option. I think with other channels, you get more flexibility in terms of what you can do. Every digital channel generally gives you the option to have geo targeting and therefore a geo holdout. And often, there's no real cost in terms of lost efficiency if you do geo target. So the seventy percent I mentioned before is really for TV. If you do that in digital targeting, it tends not to be an issue. So, yeah, I think other

channels lend themselves even more to this type of measurement. Mhmm. There can be a struggle

in every channel with individual user level targeted holdouts,

because you need to have a very clear, consistent view of who is who.

And that's easier to do if it's which geo is which geo versus which individual is which individual.

It is possible, obviously, but it's just harder. So it depends on the channel, but I would say every digital channel lends itself to geo testing, and a lot of them lend themselves to user level testing really well as well. That's awesome.

I have to bring it back to the fifth grade kinda macro case,

here. That would be your son in this case, you mentioned? You do have a Yeah. fifth grader? He's at school right now, so I can't bring him in to give his view of what I do. But, yeah. On the next episode, I think. Yeah. I don't know. He'll be taking

people's jobs soon, the next generation. Hopefully, that's one way

traffic economically right now, so I hope you get to know.

So from the fifth grade perspective, does the question ever come up, does marketing work? Obviously, it's a very blanket question, and it's one that, you know, you're kind of at the core asking yourself and your colleagues. But most often, does it seem to be working, or are you kind of like, don't do it?

It definitely works. If "works" means do people buy things more because you have marketed to them, no doubt that is true. Does it work well enough to justify the investment? That's the real question. Right? That's the crux of the question, and that varies a lot. There's the classic quote, right, that half of my marketing dollars work and I don't know which half. I think now we understand which half better than we ever did before, or at least we're certainly building out that understanding a lot better. I

would say that a lot of marketing spend is not driving as much response as the cost of that spend. I think that's fair. I think there are broader impacts of marketing as well. So in terms of brand building, there is that kind of nebulous impact that could be there for the future too. But I do think people could almost invariably be more efficient with their marketing spend. And I think there's a real push to make sure that happens. And I think it's certainly growing in terms of maturity; people's ability to spend the dollar in the place that does give them the right return is improving. Yeah. I love that, and I would say that we share

that enthusiasm. There are very few brand advertiser campaigns in our world of performance marketing where I'm not really excited to look under the hood with my colleagues and go, how can we save you money? How can we think about ways to spend the same or less and get the same result? Mhmm. Or conversely, if you're in growth

mode, how do we scale up to spend thirty percent more

and get eighty percent more value? So I think that's the fun of

what we get to do. And while there's creativity involved in that, obviously, there's a lot of data and analytical work that you and your team are very steeped in, and that is at the core of what you do. Maybe as a transition from that does-marketing-work question:

Like, what are some of the myths that you find in analytics in your field and

that, you know, maybe even, like, your trained colleagues might come up against every once in a while? What are

some of the things that you wanna debunk for the audience today? I think there are

certain hand-wavy pieces that always end up with us erring on the side of being generous. And by that, I mean, people will say, well, a rising tide lifts all boats, so maybe this has impacts on other things, which I just mentioned as well. I think it is genuine, but at the same time, I think we have to quantify that, because otherwise we have a tendency to just use it as this theoretical extra value that's never been captured but somehow makes everything justified. I think you see that

particularly with brand campaigns as well where we say, well, look. We didn't see it. We didn't see the sales right now,

but maybe there is this long term impact further down the road. Maybe there is, but we shouldn't assume that there is. Right? We should get to a place where we understand how that works. And I get that it's hard, obviously; the longer an impact takes to manifest, the harder it will be to measure. But we can't just make the assumption that it did. Right? Then you always err on the side of overinvestment if that's your assumption. And so you essentially skew it so you're always getting it wrong. That kinda reminds me of the old

saying that hope is not a strategy. Right. It's a good reminder. But the thing is, I don't think it's unfair to say, well, this model doesn't capture the longer term impacts. I don't think that's untrue. But we can't assume that that kind of mystery extra bonus impact is somehow always enough to justify the investment. So I think that's the point: there is a tendency, because you as the individual, depending on your role, buy media from someone. You want that media to have been as effective as possible. You inevitably lean toward this generous view of the response. And that mindset is, I think, the problem, and changing that mindset, shall we say, is the opportunity.

Because if you go out there and say, I don't want these results to show me that everything I

did was great. Because if I do, where do I go from there? I want them to show me that these three things didn't work

for me, because then I can improve. Then I can find opportunities to invest elsewhere,

then I can find more efficiencies. But your kind of natural inclination as a human being is to say, I just spent all this money. It must have been effective. Let's hope it's effective. And so if you can create a framework where you're actually looking to find places where you spent in the wrong place, or ineffectively, and you're happy to find those things, those are the opportunities. If you can switch to that mindset, then I think that's where you're gonna get the growth and the efficiency gains. I love that. It's like there's the psychological

concept of growth mindset, but you're essentially saying if you apply those

principles in an analytical way to your view of

analytics and strategy and whether marketing is performing, you're kind of welcoming in that healthy scientific criticism. And I think that sounds like a really awesome way to be thinking about your marketing. There's always something to do. There's always something to improve upon. Even if the greater good is there, it could probably be improved upon, to your point, and I really love that. Yeah.

And I do think, to some extent, we are naturally inclined

away from that. Let's say you have three parties. You have someone who's selling you media. You have someone who's buying media. You have someone, maybe an agency, in between. All three of you want that media to have been effective. Right? So all three of you are absolutely aligned in that. You will take the most generous view. Intuitively, you will take a more generous view of what that response was. Well, I don't know if the media seller is ever going to take the view of, actually, it didn't do that well. But if you do take that view as, let's say, an agency and a buyer, that, well, I don't just want it to have done well, and I'm not going to bend my mind to believe that it has done well. I'm gonna be as objective as possible and look for those areas of opportunity, and to some extent be congratulatory when we do find those pockets of, this didn't work. Okay.

That's fine. Now we have an opportunity. Because if my results come back and say, well, you are the best in the world at this job, everything you just did is perfect, you've got nowhere to go from there, and you can't possibly gain efficiency. So what you should be looking for is those opportunities, and you should be open to them. And I know that it's easy for

me to say, because I am sitting between finance and marketing, and so it's a lot easier for me to take this very objective viewpoint. If I was on the marketing team, I think inevitably my brain would start switching a little bit towards, well, perhaps we missed this or perhaps we missed that. But, yes, when I've worked with companies, it's when that mindset's in place that you've seen the most kind of efficiency gains. What's that dynamic like? I've had some really good

interest there. I've lived through it, worked on it, but also lately talked to some good people in finance and in marketing. You're kind of that in-between, it sounds like, in some instances. Yeah. What is that dynamic like, and how do you kind of

help set that up for success? Your traditional stereotypical view

is finance wants to stop spending and marketing wants to spend more, and then you sit in between and balance

the books. I think that's an old-fashioned view of it now. I definitely think everyone can be aligned, but you have a marketing team that's great at making their marketing better, and they're focused on the day to day of making their marketing better.

But they're not necessarily the most objective source of what did that marketing just do and where should we be investing. And then you've got a finance team that's looking, as objectively as possible, for where the best investments can be made, but doesn't understand the practicalities of every single investment decision that can be made. Right? Because they can't have that same depth of knowledge as the operating teams do. So I think it's that area in between

where we try and help as much as possible, by taking the objectivity of finance and the kind of operational awareness that the marketing teams have. And we have to borrow that knowledge from them; I know that we don't have it on our own. But we can get close to that and say, okay, I understand. In theory, we would spend ten percent more on this channel. In practice, we can't, because either this channel is at a hundred percent or it's at zero percent. There are those situations that come up, and I think it's understanding those nuances as well. This may be presumptuous, but do you think that the

best marketers and the best finance folks are able to kind of put on

the Dominic hat a little bit and kind of view it that way while collaborating with

you? Yeah. I think so. Hopefully, the Dominic hat is just objectivity. I think that's the thing that I am trying to bring, as much as kind of methodological techniques and things like that. But I do think it's that objectivity. And, yeah, increasingly, you know, marketing is very numbers driven. Right? If you go back twenty years, it all looked very different. And so, yeah, I think most marketers now have a really keen eye on, well, what is my cost per lead, what should it be, and what is the marginal cost per lead? All of these types of things, I think, are very easy for a marketing person to access now. And I do think,

from a finance side, that's true as well. And then I think it just goes to the next level of how

can we optimize, what changes can we make. That's awesome. What were some of

the experiments and tests and conversations that were

maybe most exciting or most impactful to you or the brands that you were working on?

Aside from the eBay one that we mentioned earlier, I think while I was at Facebook, we worked with a lot of advertisers, not necessarily smaller clients. And Mhmm. User level testing on media was just such a new thing at that point. We're talking ten years ago, because I'm old. It was a while back, and it was such a new idea, kind of one step beyond what we'd done at eBay. We would have liked to have done the user level testing there, but we couldn't identify the individuals well enough. But Facebook could. And so they have that user level test tool, which now everyone can use. But back then, it was new, and the questions were, why would I do this? I have an MTA model or a last-click model. I don't need this. And so a lot of

it was that conversation. But they were really good conversations to have, because you're challenging the orthodoxy of using this model, which had built an entire industry up to that point, but challenging it in a way that I think was positive and has kinda helped the industry. I think testing is such a, not a buzzword, but it's, like, central to the industry now. People do test a lot. And I think being part of that early on was really good, because you had these conversations with clients, and you could see people's eyes open to this idea of, oh, I could do this. Right? I get it now. So they moved from, why would I, to, okay, where else can I do this? And it was good to see the industry move in that direction as well. The other one I would just tag onto that is, if you ever work

on models, and they're not tests per se, obviously, but there's just a degree of magic in modeling. You take all of this uncertainty and, if it works well, you can create a model that explains so much. That's amazing. Yeah. I think they can be really powerful as well. It's a pain; I've worked on a lot of them and they've never been easy, but they're certainly getting easier now just because people have great data. I don't know if you've talked to anyone regarding Robyn, the open source MMM (marketing mix model) that Facebook created; that's a really good starting point for anyone as well.

That's super helpful. We've run into it a little bit, and we've done some lift testing through there. But I love that call-out, and it's a really great one for folks to home in on and look at as well. And for those that are not as familiar with it, how would you describe it for folks new to it, and how is it maybe better than other views of incrementality?

Yeah. It does attempt to tease out that specific question of incrementality. So you

have all of your media channels as inputs into a big regression model, and let's

say you're trying to predict sales, and you can put in what I spent on TV.

Again, it really was born out of TV. It's such an old methodology Mhmm. only because TV was so hard to measure. But now you can put all of your other channels in there too. So you could look at spending on Google. You could look at spending on Facebook. You could look at even things like outdoor advertising.
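
As a bare-bones sketch of the regression being described, here's a hypothetical version in Python, including the seasonality and holiday controls that come up next. Every number is simulated; real MMMs, Robyn included, also layer adstock (carryover) and saturation transforms on top of this.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulate two years of weekly data: sales plus spend by channel,
# with "everything else" controls (seasonality, holidays).
rng = np.random.default_rng(0)
n = 104
df = pd.DataFrame({
    "tv": rng.uniform(0, 100, n),
    "search": rng.uniform(0, 50, n),
    "social": rng.uniform(0, 30, n),
    "holiday": (rng.uniform(size=n) < 0.1).astype(int),
})
df["season"] = np.sin(2 * np.pi * np.arange(n) / 52)  # crude proxy
df["sales"] = (1000 + 2.0 * df["tv"] + 4.0 * df["search"]
               + 1.5 * df["social"] + 300 * df["holiday"]
               + 200 * df["season"] + rng.normal(0, 50, n))

# Regress sales on spend and controls; coefficients read roughly as
# incremental sales per unit of spend, independent of any click path.
X = sm.add_constant(df[["tv", "search", "social", "holiday", "season"]])
print(sm.OLS(df["sales"], X).fit().params)
```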

Because what you're really building is a model that says, I spent this at this time; how did my sales respond to that? So that's the kind of theoretical part of it. In order to understand what your media did, you have to understand what everything else did as well. That's the hard part. You have to understand seasonality, and you have to understand if weather affects your business, which it does for a lot of businesses. And, you know, holidays, all of these other kind of pieces that fit together to determine your sales on a given day. You can't just put your ad spend in there. You have to put all of those things in there to build it. The great thing

about it is that it's agnostic to the user level path. So it ignores the last

click model. It ignores the impression model. So you're losing information there, but you're gaining something

by having an independent view of it. So it's a great way of kind of calibrating your MTA, because it's not biased by your MTA. If your MTA says, I spent this and there were these clicks and this much came back, and your model says the same thing, they've arrived at that conclusion from different places. So it's really good as a way of benchmarking. And then the other thing, obviously: if you have channels that aren't covered by MTA, or aren't fully covered by MTA, it's a great way of measuring them. I think the other thing is just that if privacy rules change the kind of information that you have on path to purchase, if you don't know whether people saw it or clicked on it, and I suspect increasingly in the future that might be the case, then this model doesn't need that. So it's also a kind of future-proof

model. Huge. Yeah. And I think if you went back, you know, a few years, these models were seen as outmoded and old, because you have MTA models, so why do you need this? And then I think now there's an increasing recognition that, oh, this is actually a great way of covering the future and not just the past. That really kind of dovetails perfectly into my next question, and it's so exciting. I can envision a world

where it sounds like you're saying for those listening that, hey.

Privacy concerns go up. MTA visibility goes down. Generally speaking, the growth and importance of MMM is even more underscored. Is that safe to say? Yeah. I think so. And I think it

underscores this idea as well, that I know you and I are probably aware of, but just generally: we don't need to know the individual person's conversion. We don't necessarily care that it's this person. We only use that information to draw a kind of path and a map to clicks and impressions and things like that. But we don't really need to know it at that level. And I guess I say that just so, if people are worried when I talk about privacy: I believe in personal privacy, and I don't

want to know what you bought. I want to know that x sales happened because this

happened. I don't need to know anything about any individual person ever, and I don't wanna know that. And there's too many of them. I'm just trying to draw that path. And I do think drawing that path does clash with personal privacy sometimes, because as an individual, maybe I don't want people to know that I saw this ad and I clicked on this ad and I bought this thing. But because the model ignores all of that layer of information and just builds it out at the aggregate level, it will always be immune to any changes to privacy that we have in the future. So, yes, that's the benefit I

see. I love that. There's another theme I'm thinking of, and it does kind of relate to, like, past versus future, and maybe you can guide me here. So is it looking back at past performance to kind of assess if marketing worked, or can it be predictive of the future? Does that make sense? Yes. It does.

If it can never be predictive or prescriptive, it's of little value. I mean, I guess there's some value; it helps you understand where your investments were, and maybe you can make big kind of changes based off that. But, no, ideally, what you get out of that model is also the kind of scenario planner piece of it, where you say, I have diminishing returns up to this point, so I could spend up to here, and then I could be spending here in these different channels. So, yes, ideally, it does help predict the future in terms of how much you should be spending. Also, I mentioned that you have to take into account seasonality and all of these other pieces, and the open source model, Robyn, for example, does that

really well to the extent that you can actually use it for forecasting as well. So even if you weren't

working with marketing data and you just want to know how many sales am I likely to see if these

things continue, it works for that too. So it has a lot of really helpful side

effects that come out from building this model. And that's why I mentioned there's a bit of magic in there, because you go in wanting to work out what my TV spend did and what my Google spend did, and you come out understanding seasonality and whether you have a long term trend that's going up or down when you take all the noise out of all of these pieces. So you do get a lot of useful predictive pieces from there too. That's amazing. Yeah. Quite a testament

to the power of that model, and the fact that it's open source, I would venture, really makes it more valuable. Is that accurate? Yeah. For sure. I have a bias toward wanting to build things in house, so that you can fit all of your in-house pieces together and you can understand everything, and you can see inside it. And so, yeah, I like the open source model for that reason. It takes effort, but at the same time, I think it's worth it to have something internally

that you truly do understand. The outputs of a black box model are useful, especially if they do give you something very prescriptive: go and spend this here at this time, and this will happen. They're useful in that sense, but there's no substitute for really understanding what's going on underneath. And so you can know, well, yeah, this may be true, but the margin of error is really wide, or, I'm really confident that this will happen and the model's always going to keep giving me that. I think you get that extra layer of understanding if you build it in house.
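
To illustrate the scenario-planner idea from a moment ago, here's a toy diminishing-returns curve of the sort a fitted model would expose. The Hill-style shape and all parameters are invented for illustration; in practice the model estimates them from your data.

```python
# Toy Hill-style saturation curve: incremental sales as a function of
# spend, flattening out as the channel saturates. Parameters are made up.
def response(spend: float, top=10_000, half_sat=50_000, shape=1.5) -> float:
    return top * spend**shape / (half_sat**shape + spend**shape)

for spend in (25_000, 50_000, 100_000, 200_000):
    marginal = response(spend + 1_000) - response(spend)
    print(f"spend ${spend:,}: incremental sales {response(spend):,.0f}, "
          f"marginal per extra $1k: {marginal:,.0f}")
```

Reading the marginal column top to bottom shows the "I could spend up to here" point: once the marginal return per extra thousand dollars drops below its cost, the next dollar is better spent in another channel.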

Yeah. No. I love that. It's really fascinating, because it sounds like there's a lot to be built on top of, and I imagine a number of businesses are doing that, probably a mix of in house and out of the box, I'm guessing. I think so. You have little choice sometimes but to go with third party things. Right? Depending on what you're trying to measure, sometimes you just don't have that layer of data to do things. So I think there are occasions where you inevitably will lean on something like Facebook lift testing, for example. You can't run that same test internally. You have to rely on Facebook, and therefore you get less information back from it. So I think you're always balancing those pieces,

but, yes, third party tools are definitely useful. Sometimes they're essential because

you can't do it without them. But if I have the ability to build it in house, my bias and my preference is always to try and do it internally, so that that level of data exists to feed so many different pieces as well. Yeah.

I love that. And are there kind of spending sizes where you kinda say, okay. I'm kinda jumping ahead a little here, thinking through how you're counseling clients. Primarily, aside from Facebook, correct me if I'm wrong, you're really representing one brand and getting a lot of those questions from finance and marketing. Right? Right. But thinking through, like, a hypothetical scenario: a brand that's making a hundred million plus a year, maybe spending, you know, a minimum of five million a year on marketing, which isn't a lot by your standards. Mhmm. Where do you counsel them? Let's say they're on five or six performance marketing channels, maybe a little bit of TV, maybe a little bit of audio. Is there a spend level where it's like, a lot of this doesn't make sense? How do you guide those brands that are in those situations, that might be working their way up to being, you know, a FanDuel? It's a tough question, because the answer is it depends, and I know that's not a great answer.

But, basically, it depends less on the budget and more on the impact. So if you

are working at a company where the impact of marketing is fairly small compared to the natural baseline that that company has. I don't wanna presuppose anything about other companies, but let's say Amazon. Right? Obviously, Amazon's baseline is really high. If Amazon stopped spending on marketing, what would happen? They wouldn't disappear overnight. They would drop by something, and I don't know what that something is. But if there are smaller companies where marketing drives a bigger share of the impact, then it's very easy to measure, and it doesn't matter so much how much you're spending. It's what that share is. Now, that's actually the answer to the question rather than the question. Right? Because you want to understand how much is being driven by marketing, and you don't necessarily know beforehand. But I would say if that signal is strong, and I've worked at places where

the spend is not necessarily that high, but the signal is strong, then it's a very easy model to

build. Conversely, though, if you're moving the kinda top line by one or two percent, I don't know, Coca-Cola, for example. I'm sure that their advertising has a massive impact, but I'm sure it's also very hard to measure, because there's so much of a baseline that you're building it on.
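
A toy illustration of that signal-versus-baseline point: the same read that easily detects a channel driving twenty percent of sales can struggle at five percent. All numbers here are made up.

```python
import numpy as np

# Weekly sales with a noisy baseline; marketing drives some share of it.
rng = np.random.default_rng(1)
weeks, baseline, noise_sd = 52, 100_000, 5_000

for share in (0.05, 0.20):  # marketing drives 5% vs 20% of sales
    effect = baseline * share
    on = baseline + effect + rng.normal(0, noise_sd, weeks)   # spend on
    off = baseline + rng.normal(0, noise_sd, weeks)           # spend off
    # Crude signal-to-noise: effect size against weekly baseline noise.
    print(f"share={share:.0%}: observed lift ~ {on.mean() - off.mean():,.0f}, "
          f"signal-to-noise ~ {effect / noise_sd:.1f}")
```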

So it depends, in that sense, less on how much am I spending and more on how much of my sales, shall we say, are being driven by marketing. And if it's less than, say, five percent, it can be really noisy, and it can be really difficult to pick that up, especially if you break down into the smaller channels. But if it's twenty percent, yeah, you could build a model very quickly that kind of detects that. That's interesting. And then,

if you're in that hypothetical scenario, you kinda have a suite of cards that you can hand out to say, okay, we're gonna run this type of test or we're gonna run that type of test. Not to say that the number of options is the key, but just out of curiosity, do you have, like, five options to choose from? I know it's a little overly simplistic for your world. Or is it kind of like, hey, these are the two that I go to currently? If

you can do a user level test, there's never really a great cost to that in terms of opportunity loss. So for example, if you're sending out an email campaign, you can keep back ten percent of your user base pretty easily. You arguably lose ten percent of your total impact because you did that, but then you also measure that impact, and then you can optimize it in the future. So going back to the scenario we talked about earlier, if it was a launch campaign, I probably wouldn't do that. If it's a campaign that's going to go out once a month, then I really would want to know how effective it was and how I could impact the effectiveness of it. And so I would keep a holdout. So I don't think there's a real cost there, and I would put in a user level test every chance I could. And I don't think that changes how you spend, and I don't think it changes the response that much. Facebook, for example: if I'm doing a Facebook campaign, I would just put in a user level test, and I think you can pretty much set it up by default and get the results every time. So there's really no reason not to. So, yes, I think those really easy, nice, low touch ones, I would do every time.
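
Here's a minimal sketch of that kind of low-touch, user-level holdout, using a hypothetical hashed assignment so each user lands in the same group on every send:

```python
import hashlib

def assignment(user_id: str, holdout_pct: float = 0.10) -> str:
    """Deterministically assign a user to holdout or treatment.

    Hashing the ID means the split is stable across sends without
    storing any state; ten percent land in the holdout.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "holdout" if bucket < holdout_pct * 100 else "treatment"

for user in ["user_1", "user_2", "user_3", "user_4", "user_5"]:
    print(user, assignment(user))
```

After the send, comparing conversion rates between the two groups gives the lift read, which is exactly the optimize-it-next-time loop described above.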

When it comes to things like geo testing, or pulse spend testing as well, where you're trying to build a model and then turn your spend on and off, it's slightly disruptive from an operating point of view, but it does help you read into a model. So those are slightly more disruptive operating things, the geo testing and pulse testing. Those are ones I would do if I really wanted to measure. And so I think a user level test is really kind of a no brainer; it's very easy. Pulse testing is easy to operate. It's not that easy to measure, but it gives you more than the absence of those things. And then a geo level test, yeah, it's slightly harder to operate against, but it gives you a nice read. That's great. So I would do those things, but only if I really wanted to measure them. And I think I'm

not unique, but one of the things I try and push as an analytics person is that I don't want to shoehorn everything into my measurement solution. I don't want you to do these things just because I want to be able to measure them. I need to know if it needs to be measured first, to make sure that we have that right path of why are we doing this. We're not just doing this because I want to do it. That's not the right reason to do it. Let's do it because we want to replicate it. And if we don't want to replicate it, then do we need to do these things? So it's getting that right balance between making it measurable and making it as impactful as possible. Are there consequences of going through a test? And do you think there's a percentage

of folks that do that? I'm biased towards doing tests if we can, naturally, because otherwise I wouldn't have a job, ultimately. But, yeah, I'm biased towards doing the test. I do think it can create conflict when we're saying, actually, we just lessened the possible impact of this because we wanted to test it, and we're not really using those results for anything. Those types of scenarios are the ones where I think it creates conflict. But with any kind of test you do, you can lay out beforehand almost this pseudo-code logic where you say, if this happens, then we'll do this, and if that happens, then we'll do that. If you can't articulate that before a test, then there's not that much point doing a test. If you say, well, I'm gonna hold back ten percent of my email campaign, and what will you do if it doesn't pass a certain threshold? Nothing, I'll do exactly the same thing. Okay, you don't need to do that then. It's not going to change anything.
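
That pseudo-code logic can literally be written down before the test runs; the thresholds and actions below are hypothetical placeholders:

```python
# Hypothetical pre-registered decision rules: if you can't fill this
# table in before the test, the test probably isn't worth running.
DECISION_RULES = [
    (lambda lift: lift >= 0.10, "scale the campaign up"),
    (lambda lift: 0.02 <= lift < 0.10, "keep running, iterate on creative"),
    (lambda lift: lift < 0.02, "cut the campaign and reallocate budget"),
]

def decide(measured_lift: float) -> str:
    for condition, action in DECISION_RULES:
        if condition(measured_lift):
            return action
    return "no rule matched"

print(decide(0.04))  # -> "keep running, iterate on creative"
```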

So it's that point that I try and make with the team. What a great reminder for so many. I think that's

super, super helpful. We talk a lot about, like, data literacy in our

organization, and training and understanding data. How have you done that in your career? How do you help kind of demand that, and collaborate with your teams to make sure that data literacy is there? I think I've been

fortunate in that, working with most marketing teams now, especially the digital marketing teams, there's usually very good data literacy as standard. I think the nature of the business has certainly made that almost a prerequisite for a lot of the roles. I think the other piece is having a company that does just have the data and

has kept the data in the right place. Most of my experience has been with bigger companies or later stage startups, where they've kind of got that piece already and that's been done. So I've been fortunate in that sense, because I do think if that's not there, it's hard to get everyone aligned to a certain metric. I think if you've built on that bedrock of just having the data

available, and people also being confident that that data is right. So when they look at a number being unusual, the question is not, well, what's wrong here? Did we forget to, like, run a data pull, or whatever? I think if you get to that point where they're confident that this is representative of a real thing that's just happened, then the question is, what actions do we take based off that? I think that's a good kind of level to be working at day to day. You do need to get to that point. And I think for smaller companies, it's harder, just because there's more volatility in the data. But, yes, once you're confident that the thing you're looking at reflects real life and not some artifact of the data, then I think you're in a good spot. I love it, Dominic. We

talked a little bit about football, UK soccer. Maybe you can share with the audience a little bit about your team and Okay. who you're going for.

So for some people, this is going to be gobbledygook, but I support Nottingham Forest, who are now in the Premier League, and have been for the last season and a bit. And it's been so exciting, because they were out of the top division for so long, for

twenty years, and they just got into the top division. They only just stayed up last season, but now

they're looking good. So it's very exciting for me, because I get to watch them on regular TV all the time now. And it's such a step change from where it was before. It's

really good. So I have no complaints, because they are not expected to win every game. I'm not disappointed when they don't win. I'm not even that disappointed when they lose. And when they do win, I'm excited for an entire weekend. So I'm really fortunate right

now. That's really cool. How did you become a fan of that particular team?

Oh, it was just my local team, so I kind of got into it by default. I was born in Nottingham. Love it. And any particular players to follow, or that you keep an eye on? They have a lot of good players now. There's a guy called Ibrahim Sangare who is, I think, our most expensive signing

now. Morgan Gibbs White, who is on the cusp of the England team, although that's a tough

midfield to break into. And then there's a new center back that we signed from

Brazil called Murillo, and he's only played two games, and he's so good. And so maybe if we replay this in a year, people will be talking about Murillo like, oh, yeah, obviously we know who he is. He's, like, the best center back in the world. We could be doing that, or people may well not know who he is, but I think he has potential to be a big name. I love it.

I did just watch, relatedly, England versus Italy. Jude Bellingham was playing. He doesn't play for Forest, he plays for Real Madrid, but I actually think he's probably the best player in the world

right now. And he plays for England, so that's nice. Absolutely. I love it. You guys gonna win another

World Cup? Maybe the Euros. England just qualified for the Euros in Germany next year. So that's a realistic one, I think. We came second last time, so it's not out of the question. Yeah. Yeah. Dominic, it's been a

pleasure, man. You shared so many great insights. I'm really grateful for the time you shared with the audience, and your knowledge and expertise. It means a lot. For folks that wanna follow you and maybe learn more about you and your story and what you're working on, where can people find you, if you'd like them to? Reach out to me on

LinkedIn. It's Dominic Williamson. There are a few Dominic Williamsons. There's not that many. Look for the one who was at eBay and Facebook. That'll probably find you the right one. There was a cricketer called Dominic Williamson. That's not me. He's almost the same age as me. He played cricket in England at the same time, for a local club. So he is the slightly more famous Dominic Williamson, I would say. But unless you're interested in cricket, I wouldn't bother him. How are your cricket skills? Oh, awful. Awful. It's,

it's the worst sport for me, I think. I don't know. Me too. We're in good

company, Dominic. It's been a pleasure. All the best, man. Talk to you soon. Thank

you. Cheers. Bye.