Always Be Testing

Guiding you through the world of growth, performance marketing, and partner marketing.
We sit down with growth and marketing leaders to share tests and lessons learned in business and in life.

Host: Tye DeGrange
Guest: Tomas Saulsbury-Hunter
Hype man & Announcer: John Potito

Timestamps:
00:00 Introduction to Tomas
02:39 Overview of Commission Junction and its services
05:23 Importance of implementing a test and learn function
08:01 Setting parameters and budgets for testing
09:59 Allocating budget for testing new publisher models
12:30 Allowing publishers freedom to allocate funds
15:23 Importance of not relying solely on organic growth
18:12 Understanding the holistic downstream value of marketing
21:31 Bringing unexpected elements together for success
24:21 Importance of considering long-term customer value
27:01 Affiliate marketing as the "channel of channels"
29:37 Exploring interesting developments with browser extensions

What is Always Be Testing?

Your guided tour of the world of growth, performance marketing, customer acquisition, paid media, and affiliate marketing.

We talk with industry experts and discuss experiments and their learnings in growth, marketing, and life.

Time to nerd out, check your biases at the door, and have some fun talking about data-driven growth and lessons learned!

Welcome to another edition of the Always Be Testing podcast with your

host, Tye DeGrange. Get a guided tour of the world of growth, performance

marketing, customer acquisition, paid media, and affiliate marketing.

We talk with industry experts and discuss experiments and their learnings in growth,

marketing, and life. Time to nerd out, check your biases at the door, and

have some fun talking about data-driven growth and lessons learned.

Hello, and welcome to the Always Be Testing podcast. I'm your host, Tye DeGrange. With me today is Tom Saulsbury-Hunter from Commission

Junction. We're super excited to get going. Welcome, Tom. Thanks for having me.

Absolutely. We're excited to have you. Always Be Testing pod, where we talk about growth, performance marketing, and testing experiments in business and in life. Super excited about this episode to dive in with Tom Saulsbury-Hunter, the vice president of client services at Commission Junction. Welcome again, and ready to dive

into it with you. Yeah. Me too. Been looking forward to this. Heck yeah.

So give a little bit of background on kinda just the basics of what you do high level

for the audience. We have a lot of performance marketers, growth people, affiliate people.

But for those who don't know, I'd love to get a sense of what you do. What's Commission Junction? Give us a bit of a breakdown. Yeah. So CJ is one of the world's largest affiliate networks. We work with a huge multitude of brands, but, primarily, my team is working with blue-chip enterprise brands to provide affiliate marketing strategy

and tracking, publisher payment, and all that good stuff for their affiliate and partnership programs. CJ also runs some lead generation lines, and CJ also runs CJ Influence, so we do influencer and we do social. Those both sit in my remit as well, but primarily it's core affiliate. Awesome. And what kind of, like, brands are you

kind of working with maybe throughout your career or also throughout, you know, your management

of clients at Commission Junction? Yeah. So I've been with CJ for about ten years. I started

off as an account manager. I come from London, which is why I speak like this. In our

London office, I was working with Argos and TUI, who are a huge retailer and a

huge travel brand, out of the UK and Europe respectively.

But over time, I've developed, and I have touched, I would say, probably in excess of a hundred to a hundred and twenty different brands. Now, being VP, it's my teams underneath me who look after the vast volume. But everybody from

Expedia, who've been a long time client, Disney, who've been a very long time client, to Nike, to CIT Bank, USAA, Experian, TurboTax, Intuit. The list kind of goes on and on. I love it. We tend to, again, focus on enterprise brands, but I do also have a large number of mid-market and kind of mid-to-large brands in the mix. That's awesome. What are some of the things you think brands

need to be thinking about in the affiliate marketing space? What do you think are some of the things they might be missing out on or not thinking about? I'd love to learn more about some of your observations and

learnings working with all these great brands. Yeah. I can kinda meta answer that question.

There are a lot of different things that brands are missing out on, but one of the reasons that I was excited to be

featured on this podcast is I think the biggest thing that most brands miss out on is working out what they are

missing. It's instituting some kind of testing function, learning

function, something along those lines within their affiliate program so that they can start saying, okay.

We don't do x currently, and maybe we don't have the data that we need or the expertise that we need on our own side to forecast what x would do from an ROI perspective, but we are still gonna test that. We are still gonna work out whether

it works. Test and learn functionality is something that we try to instill with the majority of the

brands that we work with. Test and learn functionality, I think, is probably a thing I see

most regularly missed within the affiliate industry or misimplemented within the

affiliate industry. I love that. Obviously, it's the namesake of the pod. It's really central to how my team and I think, and we see that so many companies that tap into experimentation are the ones that are really capturing more value, capturing more demand. They're on that kind of bleeding edge of the curve. So I think it's such an interesting topic. When you

think about an experiment or a test, maybe just explain for the audience, like, what is sort of your definition of a proper test that a brand you might manage should be thinking about running? And how do you kinda just define it?

Yeah. From a methodology standpoint, it's typically pretty simple, actually. The majority of the time, the reason that I see the need for a test is we don't have enough visibility into whether this will definitely return, you know, the effective CPA or the ROAS that we need on a large scale. And so I would advocate for putting aside a smaller amount of budget that's not gonna affect your overall ROAS, your overall program, just to flat out test whether it is gonna return an acceptable CPA for you. Now the actual output is gonna depend on the client.

It may be that you just want straight up last click CPA through your affiliate

network, through, you know, your chosen attribution provider or wherever else. It may be that the action that you're looking for in, you know, in the CPA is a lead. It may be that it's a volume of impressions. But, really, it's putting money down. It's working out the amount of money that you're willing to spend on x to try to work out whether it will work for you. It's agreeing on what the KPIs you're looking for as an output from that would be.

And then it's running a test in good faith, which I think pretty much everybody that ends up doing

this does, where you are making the best possible effort to allow that publisher, that strategy, that partner to achieve whatever that CPA looks like. I love that. And what do you find to be some of the pitfalls

for clients and brands that are kind of going through that test process? Like where do

they kind of falter? Where do they go wrong? How do you kind of coach them on that? Yeah. So the

pitfalls themselves, I see, I guess, three different pitfalls that are pretty regular. One of them would be going into the test with a bias, going into the test assuming this is how this is gonna act, and so, to the point that I made a second ago, kind of not throwing everything that you can at it, not giving the

partner the best possible chance for success. If you go into somebody and you say, I'm only gonna invest

ten bucks in this, and I need you to return absolutely everything that I've asked for, there's

no way that partner's gonna do that. You need to go in. You need to say I'm willing to fund this to the extent that you

tell me you need to drive results. After that, you can work out what scaling

back looks like. You can't go in and say, I'm only gonna give you, you know, a little bit of money. I'm only gonna give you

a little bit of investment or time or assets or whatever else it requires for success. And then be surprised

if they come back to you and say, well, I didn't have enough to drive, you know, what my full potential was. So the kind of bias, or the caution, I guess, as well,

is problematic. On the other side of that, I have seen problems where people haven't had enough caution

with the overall structure of their test and learn programs, and they've been willing to just kind of throw money at

anything that comes across their plate. It's a really interesting thing to manage as a strategist and as an agency slash network, because we're effectively going to people and saying stop doing stuff, so you have to have parameters in place that ensure that. Stop spending money. Right. Exactly. Stop, like, stop paying me money, stop paying everybody money.

Affiliate network structures, and I'm sure agency structures as well, have changed a little bit, so that it's not always dependent on the amount of money that comes through us, but it still feels odd to say to people, don't do as much as you are at the moment. But it is super important to discuss, as part of the setup of the test and learn program, whether the parameters that you have, whether the spend that you have assigned, are at the right level, so they're not gonna be

detrimental to your wider program. While I will advocate all day for testing, all

day for testing, I don't want somebody to run a budget or to risk a

budget that is gonna overall affect the veracity of the channel, the integrity of the channel,

their ability to have good conversations with their boss. So one would be, make sure you aren't too cautious.

Two would be make sure you're cautious enough. And then three would be ensuring that your

test and learn budget is rolling rather than something that you put aside. So I've seen some

brands say, okay, we have a twenty thousand dollar test and learn budget for this month.

They will find five thousand dollars of things that work, they will find another five thousand dollars the next

month, do that for four months, and say, right, our entire test and learn budget is working, but it's taken up. You kinda need to graduate partners that are in a testing function into your main program when they start to work. Otherwise, you know, in that example, all you're gonna do is find four partners.
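To make that budget arithmetic concrete, here's a minimal, purely illustrative sketch (hypothetical numbers and publisher names, not CJ tooling) of a rolling test-and-learn budget where partners that hit their KPI graduate into the core program, freeing the ring-fenced spend for new tests:

```python
# Illustrative sketch only: hypothetical numbers and publisher names, not CJ tooling.
# Partners that hit their KPI "graduate" into the core program, so the ring-fenced
# test budget rolls over to new experiments instead of being used up.

TEST_BUDGET = 20_000  # monthly ring-fenced test spend

def plan_month(active_tests, core_program, candidates):
    """Graduate proven tests, then refill the test budget with new candidates."""
    still_testing = []
    for partner, spend, hit_kpi in active_tests:
        if hit_kpi:
            core_program.append(partner)  # funded from the main program from now on
        else:
            still_testing.append((partner, spend, hit_kpi))

    committed = sum(spend for _, spend, _ in still_testing)
    for partner, spend in candidates:
        if committed + spend <= TEST_BUDGET:
            still_testing.append((partner, spend, False))
            committed += spend
    return still_testing, core_program

# Month 1: one $5,000 test hits its KPI, graduates, and frees up its $5,000.
tests, core = plan_month(
    active_tests=[("publisher_a", 5_000, True), ("publisher_b", 5_000, False)],
    core_program=["established_publisher"],
    candidates=[("publisher_c", 5_000), ("publisher_d", 5_000)],
)
print(core)   # ['established_publisher', 'publisher_a']
print(tests)  # publisher_b keeps testing; c and d are added within the $20k cap
```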

Yep. And so, like, without naming a brand, what's kind of the best

structured test you've seen? Or what's like a textbook example where,

man, this brand really set it up well. They had the right expectations going in,

kind of played it perfectly. Like, what would be an example in your mind, without saying the brand's name? I'd love to hear kind of, like, what happened. What was the result? Yeah. So about nine months ago now, actually, because it was just before Q4. Don't test in Q4, folks. Or at least don't make big bets in Q4. Fair. About nine months ago, we had

a brand that put aside a pretty significant budget, in kind of objective terms (there are bigger brands that are able to afford it), to test a completely new publisher model. It was something that they and their competitors had not really made any headway into. The conversation around the misalignment between the potential of that publisher vertical and the achievement of that client and their competitors was rife. We had an

enormous conversation about, you know, how can this be a six, seven, eight figure

potential publisher, and yet we are only making x amount of money. And the publisher

came to us and said, look. The reason for it is there is effectively a startup cost to this. It's not an

integration cost. It is you know, you need to be spending x amount of money in order to engage with our

audience. The temptation at the beginning of that conversation for that brand and actually for a

couple of their competitors that I happen to know about was to say, okay. We will put aside

this significant budget. We will give this to you. We are going to dictate to you

exactly how you should spend it. And it was a really cool thing. It's kind of sad to say, but the really cool thing I saw throughout the process was the brand's realization that that is not the best way to set a publisher up for success.

That if we're going to judge whether a publisher is capable of making money, what we need to do is

say to them, here's the money. You tell me how you're gonna spend it. I just need you to tell

me at the end of it, I put my best foot forward. There can't be any conversation at the end where you say you hampered us, you changed, you know, how we would have liked to work. We have to know what you can do if the gloves come off. Obviously, there have to be branding restrictions. Obviously, there have to be,

you know, fraud and compliance and everything else restrictions, and there's zero

question about that. But, really, the control was handed over. It was here is the amount of money that we

have. Give me the most amount of money back within, you know, these few parameters that we have. And it was a wild success. That publisher is now about six percent of the brand's total program. It

did graduate. What type of publisher was it? It sounds like there was a lot of conversation around, like, controversy, for lack of a better term, about partnering with this affiliate, this partner. Can you share more about, like, the type of partner it was or, like, maybe some of

the why behind that analysis going into the test? Yeah. I don't wanna name

exactly what kind of partner it is because it's gonna betray the exact partner. There are very few people in the space,

but it was a really emerging publisher model, to the extent that there are probably only two or three publishers that are in that space. The reason for the hesitation, I think, is really the reason that we need testing budgets within the affiliate industry at large: there was zero data to

forecast the CPA from. There was a lot of data that indicated that there was audience

alignment. There was a hell of a lot of data that indicated that there was not much,

actual customer overlap. So you had a perfect audience that we hadn't really spoken to. We know that that audience engages with brands that are different to us, but shops in the same way, you know, the same kind of demographic, same kind of income levels. But there was

nothing to say, okay. If we try to execute this, this is what this is going to cost. The

controversy came purely from the fact that affiliate is a CPA channel, and I think our

temptation as affiliate managers, affiliate marketers, clients, agencies, everybody

is to always be able to say, this is the amount of money that we're gonna make from this test, or at

least this is, with high confidence, the range that we're gonna make. Because of

the CPA structure of affiliate? Because of the CPA structure of affiliate and because it's the

paradigm within which we engage on a day to day basis. Right? When we're talking about established

partners, we're always talking about how much they made, what their CPA was, what we think their

CPA is gonna be next month, what we think the ROAS of this newsletter will be, you know, if we run it

on this day rather than that day. We always have that backstop of

data that allows us to reach a, hopefully, very accurate ROAS

range. And when we lack that data, I think the affiliate channel's kind of unique in

that it causes caution, because, understandably, nobody wants to put money up front for a risk when, you know, there are twenty other conversations in the wings about publishers that are already established, where you could spend that money with some kind of guaranteed range.

What interesting concepts. What I'm hearing you say is, like, brands need to come into affiliate marketing with a very test and learn mentality. Right? Brands need to come into affiliate marketing and kind of check their biases at the door, as much as they can recognize them. Right? And you're saying, hey, let's have a small percentage of a budget allocation go towards willingness to spend to learn, not necessarily spend to get a return. And then, hey, once that signal is met, we're gonna pause that test and move that success, that winning test, into the greater budget allocation that, you said, is just evergreen and running and expected to perform. Yeah. Then you've got this other testing budget that's smaller that can keep firing, keep going after those new, emerging, counterintuitive things. Right? It sounds like that's kind of your point there,

which I think is really, really spot on. Yeah. That's exactly it. I think there was a lot of talk

in tech and marketing in general five years ago, maybe even ten years ago, about

failing fast. Yeah. And I don't think you wanna try and fail fast with your entire

program. I don't think you wanna risk all of your budget. I think ring fencing a small amount of budget

to learn how to fail fast, to learn what successes look like, also to learn, you know, how we

run this and maybe we need to adjust x y z and run it again is

deeply needed. Otherwise, what you end up relying on is kind of just organic growth. It's, you know,

how can the established players in the game, your RetailMeNots, your Ebates, your Wirecutters even, how can they drive me growth? And expecting that you're gonna see something significantly different from those partners that you've been working with for five, ten, fifteen, twenty years without, again, without testing and learning with them, without trying new strategies with them, I don't think it makes any sense to do that. There is no partner that all of a sudden is gonna drive you ten, twenty, thirty, forty percent growth without changing how you work with

them. And there are very few partners who you can change how you work with without putting some kind

of investment in upfront. Yeah. For sure. You kind of touched on something interesting before about setting aside that experimentation budget and counseling clients and brands to say, it's okay if this loses money. Can you share more about that? Like, how do people react when you tell them that initially, and how does that

conversation go? Yeah. I get a lot of wide eyes when I talk about it. The concise way that I will put that usually is: if your test and learn budget is consistently delivering you a positive ROAS, you are doing it wrong. Fundamentally, that will mean you're not taking enough risks, or you're psychic. And if you're psychic, great, I wanna work with you. But if your test and learn budget is consistently ROAS positive, or at least meeting your program ROAS targets, it probably means that you're not taking enough risks and that that spend should have just sat within your evergreen program. Or it

means that you're coming up against what I spoke about earlier, which is, you know, you've set a test and learn

budget, but you found four things that work, and you're continuing to call them tests for far too

long. You should graduate things that work into your core program.

And so by definition, that experimentation budget, that experimentation spend that you have ring-fenced, should always be riskier, below program ROAS, below program CPA, in order to source those positive-ROAS new opportunities. Very cool. I love that, Tom. The

brands you've worked with are insanely awesome. You've seen a lot, you know, hundreds of complex, challenging topics and issues in affiliate marketing. What do you think are some of the emblems or signals of a really healthy, fantastic program? We've talked about testing, but kind of when you, like, look under the hood and see what's available from a strategy and tactics mix perspective, I'm excited to just get your perspective on what you've

seen work really well for some of the best brands. Yeah. I think that's a relatively easy one, because I definitely have a favorite way to make up a program. So unanimously, the brands that I see who are most exciting to work with in the industry, not just for me but, you know, where I see publishers being enthused, I see teams being enthused, I see even the client contacts themselves being enthused, are the ones where they understand that there are different measures of value for different partners in the space. Linking back to something I said earlier on, I think we can

get really focused on last click CPA, and on the last click CPA of an individual publisher in an individual journey. What I see with really exciting programs, partners, clients, even measurement tactics, is where they start to understand, okay, what is the full holistic downstream value of each element of

my plan? Is it somebody that drives the last click? Is it somebody that

grabs somebody who's nearly there and gets them to be there at a better rate than I can do

that myself? And if so, you know, understanding the value of that, because I definitely hear some people that argue with the value of that and will say, if they're nearly there, I don't wanna interact with them. The other end

of the spectrum, though, it's understanding, okay, there are a load of affiliate partners out there. There is

a load of media out there that is only available through the affiliate channel and the partnerships channel. Mhmm. That

is not necessarily gonna convert one hundred percent on the last click, but that I understand is gonna contribute, maybe just to their partnership ecosystem or maybe to their overall site ecosystem.

Again, not just understanding that that is true, but truly understanding,

measuring, and analyzing what that looks like and being able to have an open conversation

internally about why you are running a strategy like that if you're CPA based or, on the other end of the program, why you're running, you know, effectively bottom-of-funnel strategies that help conversion. Those open conversations, that ability to measure, that

ability to understand every element of the funnel or the messy middle

or whatever terminology we're using this year tends to create, one, a healthy program, because

you have a great mix of publishers. Two, healthy growth, because you are dipping into every possible pool that you can for growth, and you are pulling the levers for growth at both ends of the spectrum.

Like, when we're talking about awesome, I kind of wanna include that as well. It allows you to play a little bit of jazz. Because you have that measurement, because you have that understanding, it allows you to play outside of the parameters of just, hey, if we press this button, this number goes up, and start to say, okay, what happens if I mix these ingredients? What happens if I take this thing and this thing that you never would have expected to work together and bring them together to drive success?

I love that. Play a little jazz. That's such a great mantra for, I think,

many things. You kinda touched on briefly, like, last click. Can you

kinda describe, you know, some of the attribution models maybe that you've

seen work well? Obviously, to your point earlier, it's very unique to the brand, but what's

your take on attribution, generally speaking? Where do people kind of not get it right? Going back to the core question of what that best in class affiliate program looks like, what do you see often when it comes to attribution? Yeah. I would kick

off by saying, for most clients maybe, or at least for a large number of business types, last click isn't an attribution model at all. Last click is a payment model. Another one of my mantras, outside of play a little jazz, which is actually one of my directors' mantras, which I love, is it doesn't matter whether you pay your partners via

carrier pigeon. It doesn't. What matters is the amount that you pay them

and whether that aligns to what you're getting from them. Whether you're paying last click,

first click, fractional, carrier pigeon, placement, whatever

else. At the end of the day, partners are looking at what they're getting in their bank account, and they are

prioritizing your business based on that. And not necessarily against your competitors either, just, you

know, in the wider context of their own business and who they're making money from. But equally for

clients, for advertisers, for networks as well, the fact that you pay on the last click doesn't

mean that your attribution has to be on the last click CPA. It is just a

payment model, and it is, I think, a pretty effective payment model. If you have, you

know, somebody looking at your program who has, any form of experience in the industry, they're gonna be

able to adjust the downstream impacts of that payment model to make sure that each partner is being paid the right amount

even if you're measuring on the world's most complex attribution system. On actual attribution,

the cool stuff that I see tends to be focused around, I'm a massive data nerd, I'm a massive data structure nerd, and it tends to be focused around maybe some game theory attribution. I see some great stuff with econometrics and models that take that into account. It tends to ask questions about lifetime value, or at least long-term value, rather than, you know, this individual initial transaction, which I think can be a really myopic view of customers. I run a load of brands who effectively do their

customer acquisition through the channel because they're a subscription brand, and then they get a long term

value. And I think every retailer, every travel brand, every finance brand, especially

finance brands, could do really well looking at things in that way. In what way? Can you elaborate on that more? Yeah. Starting to think, okay, I'm gonna make ten bucks

on this person right now, and they might cost me five bucks to acquire. But I know

that this is the kind of customer that's gonna engage with my brand for the next five years, ten years, twenty

years, and accepting that that is a very different type of customer

to the other type of customer that legitimately does come in through the affiliate channel, which is, you know, the kind that just comes in once and then never buys again. That latter form, those are the kind of customers where you wanna say, right, is my immediate ROAS on this one sale positive? If it's not, it makes zero sense to make the sale. But if you're bringing in a customer with a slightly higher cost of entry that is gonna continue to buy from you, maybe from within the affiliate channel, maybe from within the partner ecosystem, or maybe, you know, direct in the future, I see a load of value in attribution models or measurement models that take that into consideration and allow you to chase down the things that are best for your business long term.
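As a purely illustrative worked example (hypothetical figures, not from the conversation), here's how the same five-dollar acquisition can look merely fine on the first sale yet clearly valuable once long-term value is considered:

```python
# Hypothetical illustration: the same $5 acquisition cost judged two ways.
cpa = 5.00                  # cost to acquire the customer
first_order_margin = 10.00  # the "ten bucks right now"
repeat_margin_per_year = 10.00
expected_years = 5          # subscription-style customer who keeps buying

immediate_roas = first_order_margin / cpa  # 2.0x on the first sale alone
lifetime_value = first_order_margin + repeat_margin_per_year * expected_years
ltv_to_cac = lifetime_value / cpa          # 12.0x over the relationship

print(f"immediate ROAS: {immediate_roas:.1f}x")
print(f"LTV / CAC:      {ltv_to_cac:.1f}x")
# A one-and-done bargain hunter with the same $5 CPA stops at 2.0x, so a slightly
# higher CPA can still make sense for the customer who keeps coming back.
```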

Yep. Okay. Hot topic: is affiliate marketing a channel? No. Yes.

What's your view? What do you think? I like that you're answering it that way. I probably won't stop calling it the channel, because people kind of want an easy way to mentally conceptualize it vis-a-vis paid search, paid social, organic. And I think that because it can touch all of those things and involves all of those things, it's not technically a channel. It sounds like you're in agreement with that. Yeah. I'm in full agreement. Should I just leave it at a no and we can move on? No.

I want a debate. I'm about to go on the record being super praiseful of my mentor and our head of strategy. And if anybody listening knows me, they're gonna know that I'm pretty British and dry, and I don't often go all out and say, yeah, I'm behind the company line on this. But my mentor, certainly since I moved to the US, Summer Arrears, is our head of strategy for CJ now. And I think she's absolutely nailed it with the description of what affiliate is. She calls it the channel of channels, and you'll probably see, in any CJ marketing we've done over the past few months and something we're doing going forwards, the channel of channels, or the channel for channels, featured within that. And what she means by that is it

is a structure. Affiliate is a structure which tends to be focused around a

performance marketing payout structure, but, really, it spans all of

your channels. And that's not just to say, you know, affiliate can do paid search or affiliate can do

display. It can. It absolutely can. The reason I said channel of channels and the

reason she says channel of channels, or channel for channels, is it enables

those channels to amplify their own success. If you have a great affiliate program,

you are able to use that to amplify the results from your paid search channel, to extend the

results from your paid search channel, extend the reach of it. Same for display. The same for social

and influencer. Very, very much with social and influencer. It gives you the ability to reach a much wider set of keywords in paid search, because you can do that at a guaranteed CPA, and a much wider audience of influencers, because you don't need to get a one-on-one agreement in place with them in your social channels and have a payment system upfront. You can say, look, this is a CPA structure; this is what we're all used to talking about. Really, when you think about even your site, your site amplification, your site conversion, when you think about any part of your business, there is a partner within the affiliate space that can work with that part to amplify or replicate its results. Love it. Good segue. What are some emerging affiliate

partners that you're super excited about? I am all in

on a few different publisher models, from a philosophical standpoint, that I'm seeing coming out at the moment. I have no idea whether these are gonna be, you know, the biggest publishing models in the world or whether they're not. And if I did, I'd be making a lot of money off of that ability to predict. But interestingly, as I was thinking about this podcast, these were the people that I was thinking about when I was considering how beneficial test and learn budgets and experimentation budgets are in establishing them. So, a few different emerging models that I really like at the moment. One is video media, or connected TV. I'm starting to see a couple of publishers within the affiliate

space who are able to kind of do what I was just describing with paid search, where they say, look. We

understand that you have your own assets. We understand that you have your own channel. We understand that you're

not covering every possible customer that you can because you have to do some risk management.

So let us take those assets or maybe even let us work with you to create assets

so that we can cover those gaps on the CPA. Love that. Bullish on connected TV

as well. So exciting, because it's worked so well for brands that I've seen do it, you know, take the proper precautions but execute it, within paid search.

And I've seen it to some extent with retargeting as well, although it's a little bit less in vogue. I'm

also seeing some really interesting stuff happening with, browser extensions. I

know, people can get cagey at that term, but, one, I don't think they

should be. You know, when we're talking about testing, I think it's, super important to test your hypotheses.

We can talk about this later on, but I have some interesting data on existing browser extensions. But what I'm seeing happening with browser extensions at the moment is we're getting these

emerging partners who recognize that the benefit of a browser extension is really

to increase consumer convenience or to serve an immediate consumer need. And they're starting to

ask the question, okay. What consumer needs exist outside of

coupons, price comparison, cashback, which really tends to be the browser extension kind of

mantra right now. Mhmm. We can name-check these publishers: there's an extension called Benny whose publisher model I'm a huge fan of. And what

they do is they help customers discover pre-loved, secondhand, refurbished items, particularly focused within the fashion space, at least for now, as they're browsing a customer's, sorry, a client's site. So I know that they're working with Patagonia at the moment, hence the Patagonia vest. So, you know, you're looking at this Patagonia vest. Repping the brand. And they flag up, hey, Patagonia have their own pre-loved program; you could buy this pre-loved for x amount of money. And that engagement in the circular economy, but also that recognition that some consumers have a need to save money, and that that need doesn't always have to be in the form of a straight-up discount on a product, I think, is awesome. Absolutely.

Very cool. Very interesting stuff. Alright. You're from the UK, came over

when? Twenty seventeen, twenty eighteen? Twenty eighteen, I think.

K. What's better, football or rugby? I went to college in

Wales, so I have to say rugby. Plus there's more blood, which I think is always what you're looking for in a sport. Right? Yeah. That's the idea with that whole genre.

How about baseball or cricket? What's better? Come on. Oh, if you asked me before I went to an NBA baseball game, sorry, NBA, okay, that's my British, I would've said cricket. I've seen four baseball games in my life and I've

loved every one of them. And I've seen five cricket games live in my life, and I've hated every minute

of those experiences. Maybe we need to cut that bit out. So I think baseball might

be winning me over. It might be the best. That's the most insightful answer, that's, hey, you know, a little something for the British audience, a little something for the American

audience. Everybody's happy. You nailed it. We talked about emerging partners. You're talking about

testing, talking about best in class programs. You have a little sneak

peek of what's coming. Behind the curtain, you're working on some really interesting data around, you

know, how incremental are some of these browser extensions and plug-ins. So, Tom, I'm super excited about all the stuff you're thinking about around browser extensions and toolbars, coming from that place of testing, learning, objectivity. There's obviously a lot of controversy, and people kind of rile at the thought of them. How are you thinking about them? How are you approaching it? What are some of the learnings that you're working on on that topic? Yeah. That's definitely controversial.

I came into the industry ten, twelve years ago, depending on what you consider my entry, and I think it was right when browser extensions were just starting to enter the market. And so I've been there from the beginning, when I was, and still remain to some extent, a massive skeptic of anything that comes into the market to engage with the customer that's already deep in the sales process. And

for at least seven years, I've been having,

head on conversations with clients about how we test that, about how we measure

that. I spoke earlier on about kind of overcoming biases, and I have been challenging clients to

test their biases, to test their theories about browser extensions. I hear a lot of people using a

common sense standpoint where they'll say this person's already on my site, and so they're gonna buy anyway. And

I've been desperate to test that over time. I legitimately think we've just hit on a

methodology that allows us to do that. So, this isn't public yet, but we can talk about some preliminary results on the podcast. We identified a tech partnership with somebody who sits on a large number of client sites and can see when a browser extension fires; we have, I think, about seventy million journeys that we've looked at. They can see

if somebody has a browser extension installed, and they can

determine the difference in behavior between people who have an

extension installed and did not see a message and people who have an extension installed and did see a message. We might need to cut this bit out or follow up on this afterwards, but that is while also determining whether

that message was proactive or reactive. So they're able to say, look. This person would not have

received a browser extension message unless the extension chose to fire.

And when we look at that en masse, when we look at that across seventy million journeys, we see

a significant uplift in revenue per session from customers who received

a message from a browser extension versus those that didn't. Interesting.

Yeah. When we're saying, I think this person was on my site, so they were gonna buy anyway, my knee-jerk response to that was always: site conversion rates are about three to four percent on a great day, so ninety-six percent of them weren't gonna convert anyway. And what we're seeing from this data is that the extensions, the people who interact with your customers who are already deep in the journey, are able to significantly uplift revenue for the customers that they interact with.
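A minimal sketch of the comparison being described, with assumed field names and toy numbers rather than the actual dataset: restrict to journeys where an extension is installed, split by whether it actually showed a message, and compare revenue per session:

```python
# Toy sketch of the exposed-vs-unexposed comparison described above.
# Field names and numbers are assumptions for illustration, not the real dataset.
journeys = [
    {"extension_installed": True,  "message_shown": True,  "revenue": 12.0},
    {"extension_installed": True,  "message_shown": False, "revenue": 0.0},
    {"extension_installed": True,  "message_shown": True,  "revenue": 0.0},
    {"extension_installed": True,  "message_shown": False, "revenue": 4.0},
    {"extension_installed": False, "message_shown": False, "revenue": 3.0},
]

def revenue_per_session(rows):
    return sum(r["revenue"] for r in rows) / len(rows) if rows else 0.0

# Only compare users who have an extension installed, so the groups differ
# mainly in whether the extension chose to fire a message on that journey.
installed = [j for j in journeys if j["extension_installed"]]
exposed   = [j for j in installed if j["message_shown"]]
unexposed = [j for j in installed if not j["message_shown"]]

rps_exposed = revenue_per_session(exposed)      # 6.0 in this toy data
rps_unexposed = revenue_per_session(unexposed)  # 2.0 in this toy data
print(f"uplift: {rps_exposed / rps_unexposed:.1f}x revenue per session")
```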

And really interestingly, on the flip side, we were able to see how Amazon Assistant, which purely exists to drag people away from your site into Amazon, affects people. And it shows the inverse, in the way that makes the most possible sense: if Amazon's assistant says, hey, you shouldn't buy from these people, you should buy elsewhere, we saw a massive decline in revenue per journey, because we're seeing the exact inverse happen. The overall

indication was consumers seem to listen and respond to messages from browser

extensions in a way that genuinely, genuinely surprised me. Like, I went into this

without bias. I went into this wanting to test hypotheses. Obviously, from a network

perspective, I hope that it gave us, you know, the answer that, yeah, what we've been doing for five, ten years isn't completely without worth. But I think the extent to which we saw an uplift, and we'll have a white paper coming out at some point in the near future that really delves into this, was surprising to absolutely everybody involved. I love it.

Excited to read it. Excited to dig into it more. I think it might be, time for

a mini part three just to delve into that topic in itself, who knows? Maybe with white paper in hand. Yeah. But it's been a pleasure. Loved chatting

about all these topics with you, Tom, and really appreciate your time today. It's been awesome. Yep. It's

been fantastic. I loved doing it. Thanks, man.