Future-proofing the humans behind the tech. Follow Jon and Phil on their mission to help marketers level up and have successful careers in the constantly evolving world of martech.
Phil Gamache: [00:00:00] What's up everyone. Today we have the pleasure of sitting down with Justin Norris, director of marketing operations at 360Learning and the creator and host of RevOps FM, a new podcast about RevOps, but also marketing ops, martech, and a bunch of other stuff. Justin, I've followed you on LinkedIn for quite a bit of time now.
It was fun being on your podcast, so we're changing roles here a little bit. So for the folks: Justin kicked off his career at CGA Canada as a marketing strategist and later joined a startup in Toronto called ClearFit as their third employee and their first marketing hire, where he wore all of the marketing hats. He then joined Perkuto, the esteemed marketing operations agency, as their Marketo solutions architect, and he was later promoted to senior director of solutions architecture at Perkuto. He's currently the director of marketing operations at 360Learning. Like I said, they make an LMS. And late last year, Justin also launched the RevOps FM podcast, a [00:01:00] weekly masterclass on becoming a better revenue operator.
Justin, thanks so much for your time today. Pumped to chat.
Justin Norris: Excited to be here, guys. It's a dream to sit down with you both. So thank you.
Jon Taylor: Yeah, well, thanks for joining the show. Look, it's impossible to ignore AI right now, so let's just deal with [00:02:00] it and talk about it. One of the things that I think a lot of folks in marketing technology are dealing with is this fear of missing out with AI. Up until early last year, I really didn't actually play with AI too much.
I used it a little bit here and there as I saw things come out on LinkedIn or Reddit and just kind of tinkered. You know, a year later, after the launch of GPT-4, it's a daily part of my consulting practice. I'm using it with my clients. I use it more than Google for finding information. This is an SEO saying this.
I think there's a lot of people who are sitting on the sidelines or kind of looking from the outside or feeling a little bit overwhelmed by all this. You go on LinkedIn or social media and it's like prompt guides and prompt engineering and you're like, what? What can I do to make sense of all of this?
I saw you publish something really interesting on LinkedIn around marketing use cases for AI. My question here is, do you think people should have a fear of missing out? And what is the delta between the reality and [00:03:00] the expectations of AI as we stand here today?
Justin Norris: Yeah, I mean, yes, I think people should have a fear of missing out, a fear of falling behind. I think there's also a legitimate fear of AI itself, so just the FO part of the FOMO, socially, ethically, like there's that dimension. And then there's even just professionally: what does it mean if, you know, all of a sudden all of our emails, all of our LinkedIn posts are written by AI? Is that a good thing? So I think those are discussions that could be had. I'm a bit like you in the sense that I'm maybe six months behind you in terms of adoption, I think. Certainly way behind Phil in terms of, you know, adopting it for the image generation and all that sort of stuff.
I have this weird contradictory impulse of being a technologist who really loves technology, and then I'm also really a technology hipster, where if something is the bandwagon, I really don't want to be on that bandwagon. I'm like, ah, these people with AI. So I kind of take a skeptical eye.
I've seen enough of a yearly hype cycle to know some things stick around and [00:04:00] some things don't. And I think it's very clear that this will stick around. And so when I see something that reaches that point, I then launch into, well, I want to bring order and structure to this, cause I don't like just sensationalistic things.
So a little bit of a grumpy old man approach where it's like, all right, if we're going to do AI, we're going to do it properly and, you know, map out the universe, figure it out. And that's what kind of led me to that approach that you mentioned, of publishing that list of use cases.
Cause I don't like adding noise to a room. I want to see, like, where can I add value? And I think you both take this approach with your podcast too, from what I've seen; you really like to break things apart into their pieces and cover every angle. And so I thought, there's a lot of tools, and I've subscribed to newsletters and I'm seeing all these prompts, but no one's really looking at it from the business point of view, saying, here's all the things we need to do; where can AI help? So here's where I can add value, and I have the sort of tenacity and stamina to literally go through and map that out. And that's what I [00:05:00] did, and I've gotten some good feedback on that. So I hope that's helpful. And then based on that, you know, I've started to play around and try to incorporate that into my practice as a marketer and a marketing ops professional.
Phil Gamache: Very cool. Yeah, I remember seeing that resource, and I think you had 36 on your list there, and maybe you've added a few over time. But one thing that jumped out at me, that I'm particularly excited about seeing more adoption of, is this next best action kind of idea, like recommendations, and also the audience selection or propensity modeling idea.
You should potentially add Iterable, or there are a lot of other tools in there, Optimove as well, to that list of tools that are playing around with next best action. The tooling is great in theory, but it's the frameworks that are hard, right?
Telling a team that's using rule-based automation to now start sending emails based on propensity models and likeliness to buy or likeliness to churn, it's almost like starting from scratch and rethinking how you're [00:06:00] approaching all these automated nurture campaigns. How, in your mind, can we help marketers navigate this change from rule-based automation to propensity models?
And what do you see in the future of quality assurance, like QA systems? What do those look like when we get closer to letting AI take the wheel when it comes to email and messages?
Justin Norris: So there's a lot in there to break down, and I know this next best action thing, I think it's been like the holy grail that people have been chasing for a long time. Like, for years we've been hearing about it, and we're sort of at a place where it's possible. And I actually have spoken to two different founders in the last few weeks alone who are working on this problem in a B2B context, and my feedback was the same in both cases, which is: it's super interesting, but we have to subordinate the technology to what actually works as a marketer. And I think it's really easy to think about, oh, what could I do and how could this work, and lose sight of the [00:07:00] customer context and what would actually make sense to us as consumers. Would it actually work? I think where next best action easily makes a ton of sense is a B2C use case, or maybe a really low-value PLG motion, where it's, you know, I use Canva kind of on their free plan, and then they give me a discount and I'll go to the pro plan and then I'll go back.
It's perfect for that. They can probably tell when I'm on the edge, or if I'm going to buy a pair of socks, you know, they're going to discount-ladder me, and we've talked a bit about that in an earlier discussion, Phil. But when we think about a $50,000, I don't know, ABM platform, or if I think about the product that I'm working on day to day, which is a learning management system, it's complicated.
People have existing platforms. They're in contracts. There's decision committees. There's multiple people. So it's not like we can just, oh, send Susie in HR an email and it's going to tip her over the line and, you know, she's going to sign the DocuSign today. I think there is an interesting discussion to be had where, if we can [00:08:00] identify those moments: if I think about the sales cycles that I've been in, where I'm comparing different products, there are certainly moments where somebody could say, hey, just tell me a bit more about this feature, because I'm hesitant here, or tell me a bit more about this particular use case, because I don't actually think you're strong, and maybe you are. So maybe it's where we tie in the Gong calls, or we tie in, you know, the digital body language, the things that they're doing on the website, the things that they seem to be poking around in. But I think it has to get really good at that. If it's just, we detect that this ebook seems to work well, you know, across 10,000 people,
I don't know if it's actually going to work. I think the propensity part is different, because, you know, we've been doing predictive scoring for eight, nine years. I think that's pretty well established, in terms of fit, in terms of behavior, at that level. From the QA point of view, like, it's really hard.
I shared something, almost as a joke, the other day about an outbound email that I'd gotten that seemed pretty clearly AI-generated to me, because it was like, hey, we're looking to reach out to [00:09:00] people in Mississauga, and I hope I didn't disturb your run around Lake Aquitaine, which is a trail evidently near Mississauga, which is like three hours from where I live.
So it's hyper-specific, hyper-precise, hyper-inaccurate. Okay, this is what the future of AI will look like without QA. So to your point, I think right now there needs to be a human in the loop. It's too much of a brand risk to not have that. How do we thread that needle? How do we do that at scale?
I don't know. But I think I currently would be judicious in approaching it with anything that could be really evidently wrong or create that sort of negative brand impression.
Jon Taylor: I kind of have a follow-up question, a little bit off the script, but something I'm seeing in my network is more and more people coming out and talking about the pain, the emotional pain, of this forced adoption, right? AI is in such a hype cycle where everyone is, thinking back to my earlier question, in that fear of missing out.
But what you highlighted there is that these systems, especially in my own experience of using them daily, is [00:10:00] they are powerful, but they don't always hit the mark. In fact, sometimes if you're relying on them too heavily, it actually diminishes the human element so much that it becomes plainly obvious what's going on.
So in terms of the emotion of a technology, I think this is unique in this space. And maybe you agree or disagree, but there's this sense of forced adoption, like everybody's got to use this or I'm going to be left behind, versus the actual reality of bringing this technology in. I actually see a lot of wisdom in the approach that you outlined earlier of this kind of concentrated, slow rollout.
What is your advice for listeners who are thinking through this themselves and trying to find the right balance of, you know, being early in a curve that seems to be moving quickly versus actually getting results out of these tools?
Justin Norris: Yeah, that's such a good question, cause I think there is a big delta between what it can actually deliver and what people are talking about. In the list of use cases that I put [00:11:00] together, I tried to capture that with a feasibility score and then an impact score. So there's some things that are highly feasible today.
Like, you can write your LinkedIn posts with AI. I don't think that you should. I don't think that's what people want to hear. But then in another case, you can create images with AI today, and I think that you can and you should, because you can produce some amazing work. And so what I'm doing, at least, is evaluating it all, looking for where I really could use some help. One of the things that we'll talk about, you know, is processing unstructured text at scale; that's really boring to do as a human. So where you actually feel like, oh, this could actually help me, I like that. Or where I'm seeing opportunities, like, I've done more with AI imagery because of the work that you guys do, where I'm like, wow, there's some really cool things that are possible with this. It's way better than going to Shutterstock. Sorry, Shutterstock. But, you know, so I'm going to do that, cause it's to my benefit, versus like, all right, everybody, take your medicine, [00:12:00] let's do AI, cause we got to.
So I'm just kinda letting that need drive it organically.
Phil Gamache: Yeah, I think that's great advice. I want to pull on that thread about converting unstructured data into different systems. You made a recent post about using different AI tools to pull out qualitative insights from your customer calls. I think Gong's a good example of recorded calls, but there are a lot of different tools that do similar things out there, and I thought it was a very interesting use case.
You call out right away that quantitative insights have been the focus for marketers for, you know, a decade plus, and I'm definitely guilty of over-indexing on quantitative myself. But you gave this great breakdown of how AI could be used to pull out qualitative insights at scale. As a content creator, I often wonder about the future of AI and creativity.
Do you think that these AI applications accelerate, enhance, or, even better, humanize content that comes from humans?
Justin Norris: I'm thinking about this a lot right now, and it's hard not [00:13:00] to. I think we start from the place that the thing that makes creative work excellent is that root insight. And I do think that is a uniquely human thing to perceive. We haven't gone too far into that whole debate about, is AI sentient, does it have rights? That seems to have died down, thankfully.
And I am of the mind that there is something unique and special about human consciousness that doesn't exist, and is never going to exist, in a machine-generated neural network. So where I think about AI is, it's like a paintbrush, except if you could give the paintbrush verbal instructions and it would interpret them and do the painting for you.
And so where does that leave us? I think it can certainly accelerate and automate and make scalable human behaviors that are tedious and manual and repetitive. So instead of me going through each Gong call and looking it up and, okay, putting it in a spreadsheet and doing it again and again, AI is going to scale that out.
So that's super useful: scaling [00:14:00] it, like looking over a big corpus of text that would take me time on its own. It could even analyze those insights and summarize them. I still think there's work to do for me, as a marketer, as a creator, to process that and come to conclusions. And I had on my show one of my old bosses, Mitch Solway, who has a lot of skills, but really messaging and strategy is his thing. And he talks about how he arrives at a message through interviewing customers, and there's this magic and spark and fire in his eyes when he talks about it and how he forges connections, and it's almost intuitive or subconscious, however you want to think about it. I don't think an AI is going to replace that, but it can scale a lot of the work that goes into that, and then it can be a tool. And I don't know why I feel differently about imagery and about writing. Like, I feel a real resistance, not to summarizing something, but I never want AI to write my LinkedIn posts. I want that to be me. Why do I feel that way? And why do I feel okay using it as a tool to create images?
I haven't fully squared [00:15:00] that in my head, but somehow I do feel that they're different. I don't know if you guys think that too.
Phil Gamache: I agree. I think it's capability right now. Like, you can tell when people are using ChatGPT to comment on your posts. If you use it enough, you can spot the things that just repeat themselves, like "in conclusion," "in summary," "I do believe." There's just a bunch of stuff where it's like, you didn't write that. There's robotics in the word choices there, and it's lacking humanity, I dunno. The way I like to write posts is, when we do a launch for this episode, I try to think of how I would message my friends or my family about this episode. And I literally write down my posts on my phone and I'll message that out. And I want it to sound authentic. Yeah, in the early days of ChatGPT, I played around with it, like, help me write this post based on this summary, and it's garbage. I just don't like the output at all.
I don't think the capability's there. I'm sure, maybe JT, you can touch [00:16:00] on the custom stuff you're building and how you can tweak some of your prompts, and, you know, you can eventually get to something better. But I agree, Justin, I think the imagery is just better. And I think that DALL-E is still quite a bit away from Midjourney. We're talking a lot about comparing those two tools, but yeah, there's just something special about being able to prompt Midjourney and see the output and just be inspired and engaged by the image. And yeah, I think the capability on imagery is just far ahead of writing LinkedIn posts anyways.
Jon Taylor: I'm gonna come in with a little bit of a different angle. I will share the discomfort I have around it, but I think I actually somewhat have that discomfort sometimes with the imagery. And I have to say this as somebody who uses Midjourney extensively. Phil's hooked me on it. He's got me paying way too much money for Midjourney every month to generate images, as stress relief sometimes, even.
But I think sometimes it comes back to the heart of AI itself: it's the inputs. And so what I've played [00:17:00] with, okay, I have an unfinished novel like every other writer in the world, and so I prompted it with a ton of my writing. And what freaks me out is that I use voice transcription and say, now write me a section.
It's actually pretty good. So I think that the inputs have a lot to do with what you get as an output. So if you're putting in boilerplate, you're going to get GPT boilerplate out of it, and that we can spot a mile away. But I don't think it's very long until the default styles get better, and I think that's what Phil's saying about Midjourney versus DALL-E: the default style of Midjourney is beautiful. Like, it is artist-quality content. But when we see GPT writing, it still reads like GPT boilerplate. And, you know, that kind of dovetails me into this question I have around the Gong process that you had. As I was reading through and listening to your last response, I was like, all software, including AI, is derivative of spreadsheets.
So I see this [00:18:00] spreadsheet-like process of going through these Gong calls and analyzing sentiment and telling the machine, this is what I think this means, you know, words like "I hate." And you made a really good post on LinkedIn about this: how to leverage insights and pull these out of the qualitative conversations.
Maybe you can talk a little bit about this process, from your perspective, of how to input, tune, and manage these AI processes to actually deliver real value to organizations.
Justin Norris: Yeah, it's something I'm actively working on, like automating. So on a one-by-one level, you can take a transcript, or portions of it, and paste it into ChatGPT and ask it for something, but that's not super valuable beyond just reading it. What I was working on, the problem I was trying to solve when I wrote the post that you're mentioning, was: we were doing outbound, and I was experimenting with writing some emails. And with outbound, it's all about breaking through, right? Like, when an email shows up in your inbox, there's an immediate relevancy gate that your mind applies: can I delete this? Because I'm [00:19:00] busy and I want to delete it, I want to make this go away so I can move on. Or do I have to worry about this, or should I think about it? And outbound is like, I'm gonna shoot my shot, and if by some miracle I'm relevant enough, I might just pass through that gate and get a second of your attention, and maybe a second more and a second more. In order to do that, you have to know their pains. You know, I follow Josh Braun; he's a big cold outbound, cold calling, sales email thought leader, and he writes a lot about just really knowing your customer there. So I was looking through Gong, looking at pains, looking for their language. What are the things they're talking about? It's, oh, I hate this, this is such a drag, or this is the worst; the things, if we were speaking to other marketers and marketing ops pros, we could just know through our own experience. So it was super useful for that. Now, to operationalize that, I was just playing with this last week. I've got it to a place with Zapier, which is just the workflow tool that I have, where I can, you know, create a tracker in Gong for the thing that you care about, whether that's how-did-you-hear-about-us information,
whether that [00:20:00] is pains, like a description of your customers' pains, or the competitors, whatever. Save that as a search based on that tracker, and you can use that as a trigger for a Zap; that's built in so far. The next part, which isn't built in, is you need to go and retrieve that transcript. So for that, you can use an API call,
or a webhook, rather, in Zapier, to go and ping the Gong transcript API, and it will return that transcript to you. So I am here in the journey; that's as far as I got. The next step is to actually feed that transcript into an AI in an automated way with a prompt, get back a response that summarizes it, and then do something with that.
Whether that's pushing it into a Salesforce field or HubSpot field, or pushing it into a spreadsheet to analyze. And I'm actually just blocked, literally at the moment, by the fact that I don't have access to the subscription credentials to plug into my Zapier account, so I'm just working on navigating that internally. But the built-in AI-by-Zapier action seemed a bit rudimentary, and I just may not know how to use it properly yet. But you can certainly, and I've seen it, [00:21:00] have a ChatGPT discussion within that flow and return things back. And so that's what I'm aiming to do. I think it's totally achievable, but I'm still kind of working through that.
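The pipeline Justin describes could be sketched roughly as below. This is a hypothetical illustration only: the nested JSON shape, field names (`callTranscripts`, `sentences`, `speakerId`), and the prompt text are assumptions for the sketch, not his actual implementation or Gong's exact schema. The network steps (the Zapier webhook call and the LLM request) are noted in comments rather than implemented.

```python
# Sketch of the transcript-to-pains step. The payload shape below is an
# assumed, simplified version of what a call-transcript API might return.
SAMPLE_TRANSCRIPT = {
    "callTranscripts": [{
        "transcript": [
            {"speakerId": "prospect-1", "sentences": [
                {"text": "I hate exporting this report by hand."},
                {"text": "It's such a drag every quarter."},
            ]},
            {"speakerId": "rep-1", "sentences": [
                {"text": "How long does that usually take you?"},
            ]},
        ]
    }]
}

def flatten_transcript(payload):
    """Join the nested sentences into one plain-text block for the LLM."""
    lines = []
    for call in payload["callTranscripts"]:
        for turn in call["transcript"]:
            for sentence in turn["sentences"]:
                lines.append(f'{turn["speakerId"]}: {sentence["text"]}')
    return "\n".join(lines)

def build_pain_prompt(transcript_text):
    """Prompt asking the model to extract customer pains in their own words."""
    return (
        "Below is a sales call transcript. List each pain point the "
        "prospect describes, quoting their exact language where possible. "
        "Return one pain per line.\n\n" + transcript_text
    )

# The automated version would POST build_pain_prompt(...) to an LLM API from
# the Zapier step (or a small script), then write the response into a
# Salesforce/HubSpot field or a spreadsheet row for analysis.
prompt = build_pain_prompt(flatten_transcript(SAMPLE_TRANSCRIPT))
```

The useful property of structuring it this way is that the pure text-shaping functions can be tested locally before wiring up the live Gong trigger and LLM call.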
Phil Gamache: I'm looking forward to the LinkedIn post when you get to finishing that process. And yeah, maybe we can package that up and use it for other use cases, cause I think it's super powerful, this idea of being able to spot specific things that you know are valuable to you from a messaging standpoint. But yeah, I definitely agree with this idea that APIs are opening the door on a lot of automated use cases for ChatGPT. And using tools like Zapier almost brings me to this idea of, how can we get to this composable world where we're using Zapier to hook up different tools, including our own unique custom data sets, to ChatGPT and instructing it with better information?
Right. You wrote about this suites versus best-of-[00:22:00]breed discussion. We talked about this on your podcast too. I like to call it platform versus composable. Being a Marketo guy yourself, I was actually pretty surprised to see that your conclusion there was that you actually favor best-of-breed tools, but you accurately called out the challenge of integrations with composability and best-of-breed. What's your solution for tackling the age-old problem of endless lines of API integration overload?
Justin Norris: Yeah. So it's really funny that you say that, in a way, and I can see where you're coming from, but I've always considered myself a best-of-breed guy, for as long as I've been doing this. But I think the meaning of that has changed. And we talked a little bit about this before, but it used to be that a suite meant you were going to buy all SAP or all Adobe or all Salesforce. So you're going to use Salesforce and Pardot instead of Marketo and Salesforce. So, best of breed because you're using two tools from two different vendors because you like their functionality. And I think what's changed [00:23:00] is just that the ability to break that functionality down into smaller and smaller point solutions and microservices, and have them talk to each other, and give the average martech team, let's say the slightly above-average martech team, the ability to, you know, connect those dots together on their own, has increased so greatly. The thing that I think about there is, there's no perfect mops leader, there's no perfect CRO or CMO considered in the abstract; there's just a perfect person for a specific job in a specific company at a specific time and place. And it's the same way with tech. I think there's no perfect tech stack. Everything is about context. Everything is a cost. Everything is about trade-offs. And so you have to consider: if you're a company of 10 and, you know, you have a quarter of an FTE working on martech, I don't think you're going to go and build a big composable stack cause it's "the best"; it wouldn't make sense in that business context. And in other contexts, of course it would. And you've lived that, I know, Phil, from our past [00:24:00] discussion. So I think, to me, ops is a game of skillfully managing those trade-offs to achieve an objective. And you have kind of one eye on the present and you have one eye on the future.
And I feel that the rise of easily accessible APIs, the rise of microservices, and most importantly, the rise of workflow automation tools has changed that game and made it easier than ever. So, a long-winded way to answer your question about APIs, but I think having a centralized place to manage all of those connections means you don't necessarily need to have your marketing automation platform be your workflow source of truth, or your CRM be your workflow source of truth.
You can have an independent workflow layer. I have the most experience using Workato; it's my tool of choice for that. But that could actually be your place where you have all your logic about data value updates and who goes where and who does what, and all the if-this-then-that stuff. And then you have your systems of execution that are sending emails or sending text [00:25:00] messages or booking calendar invites, or the interfaces where your sales team is working to do their stuff. And I think that can be managed quite readily, with error handling and troubleshooting, and without making you go crazy, you know, with a lot of custom development. So I think we're there. And if I was building a stack today, I probably would look at that as at least the first place, you know, depending on the context that I'm in.
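The independent workflow layer idea can be sketched as a toy example: routing rules live in one central place, while the execution systems stay dumb endpoints. All the field names, thresholds, and action names here are invented for illustration; they don't come from Workato or any specific tool.

```python
# A central rules layer: conditions decide routing; execution systems
# (calendar, marketing automation) just receive instructions.
ROUTING_RULES = [
    # (predicate over a lead record, action name, target system)
    (lambda lead: lead["score"] >= 80 and lead["region"] == "NA",
     "book_meeting", "calendar"),
    (lambda lead: lead["score"] >= 50,
     "send_nurture_email", "marketing_automation"),
    (lambda lead: True,
     "add_to_newsletter", "marketing_automation"),
]

def route(lead):
    """Return the first matching (action, system) pair for a lead."""
    for predicate, action, system in ROUTING_RULES:
        if predicate(lead):
            return action, system
    return None

print(route({"score": 90, "region": "NA"}))  # ('book_meeting', 'calendar')
```

The point of the sketch is the separation: changing who goes where means editing one rules table, not rewiring logic scattered across the marketing automation platform and the CRM.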
Phil Gamache: Yeah, very cool. I think that the rise of web services, and even this idea of warehouse-native martech tools, where we're relying on activating data that is in our warehouse... I think a lot of folks listening will relate to the quarter of a full-time data engineer on the team; that was too real of a statement for me from past gigs.
But yeah, one thing with countless integrations that I struggled with was actually figuring out how we activate the data that lives in the most important places so that we can have it in our marketing tools. This episode is actually brought to you by our friends at Census. Census is a data activation platform.
They're [00:26:00] used by cool companies like Sonos, Canva, Crocs, Notion, and a bunch of other ones. We use them at my current startup, and as a customer, I've experienced the magic of using a reverse ETL tool. They allow me to have this no-code audience hub that's very similar to a lot of marketing automation tools, where you can build segmentations of different customers and different data points based on data coming from your data warehouse.
And you can then create highly personalized journeys in your marketing automation platform or, like, your Google Ads platform. We're actually running a really cool contest with them right now. If you like the Humans of Martech graphics and you want your very own image, we're doing this monthly raffle with Census for a personalized t-shirt designed by us.
So you can enter to win at getcensus.com/humans. But Justin, I'm curious, what are your thoughts on this shift to a warehouse-native-centric approach to martech? Where, I don't know if you remember, CRM was kind of like, all the data needs to live in the CRM, it's the source of truth. I [00:27:00] feel like we're definitely migrating to the warehouse being the source of truth. Like, what lives in Redshift versus your CRM? What are your thoughts there?
Justin Norris: I'm already on that website filling out that raffle application to get my t-shirt. I'm not even listening. I'm just joking. Same answer, obviously, about context, and the same things apply. I think one of the things that shaped my thinking about this, and I haven't really seen this happen in reality yet, but there's a concept called a data mesh, which was popularized by a data engineer named Zhamak Dehghani. And the idea there was that any kind of centralized process can become very fragile, very rigid and breakable, and has a tendency to not scale. And the things that the concept of a data mesh maximizes for are having local data stewardship and kind of universal or ubiquitous data consumption and data [00:28:00] availability. So the notion there being that, all right, a centralized warehouse, I think, is a good step until you reach a certain size. And then at that point, if you think about Spotify, this was the example in the article I read: you know, you have lots of different sources of data at Spotify. You have artists and playlists, and then you have stuff around activity, like how people are engaging with those things, and users' preferences. So rather than trying to store all that in one place, you have the teams that are working on those topics owning their data and making that data available for consumption via an API to the rest of the company. So in a sense you have multiple sources of truth, but those sources of truth are domain-specific and kind of bounded.
And hopefully I'm not mangling Zhamak's concept here, but this is my understanding of it. And that really spoke to me, because what it means is that as a data consumer, you have access to everything via an API that's really standardized. And what it means is that as a data governor or a data steward, [00:29:00] the people that are managing that data are the people that are the closest to it. Now, this presumes an organization of a size and scale where you have a data warehouse. With a centralized data warehouse in particular, where you have one team holding all of your data, you will run into challenges.
You know, in any global platform, you're still going to have very limited data resources. Your data warehouse administrator doesn't necessarily understand the nuances of the data that they're ingesting. And so there has to at least be a really strong partnership with kind of a data steward on the functional side.
So like within marketing or within sales, probably in the ops team working with them on like schema structure. How do we make that data accessible and available? Because otherwise you just, you get. It doesn't live up to its promise. At least that's what I've seen. And I think, I mean, Phil, you've gone further along this composable warehouse first journeys.
You've probably seen more of those those pitfalls, but I've lived those things as well.[00:30:00]
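The data mesh pattern Justin describes, domain teams owning their data and exposing it through a standardized interface, can be sketched in a few lines. Everything below is a toy illustration: the domain names echo the Spotify example, and the registry, the `query` method, and the sample rows are all made up for the sketch, not part of Dehghani's actual specification.

```python
from dataclasses import dataclass
from typing import Protocol

class DataProduct(Protocol):
    """The standardized consumption contract every domain exposes."""
    domain: str
    def query(self, **filters) -> list[dict]: ...

@dataclass
class PlaylistsDomain:
    """Owned by the playlists team; they steward this schema."""
    domain: str = "playlists"
    _rows = [
        {"playlist_id": 1, "name": "Focus", "track_count": 42},
        {"playlist_id": 2, "name": "Workout", "track_count": 18},
    ]
    def query(self, **filters) -> list[dict]:
        return [r for r in self._rows
                if all(r.get(k) == v for k, v in filters.items())]

@dataclass
class ListeningActivityDomain:
    """Owned by the activity team; a separate, bounded source of truth."""
    domain: str = "activity"
    _rows = [
        {"user_id": 7, "playlist_id": 1, "plays": 120},
        {"user_id": 7, "playlist_id": 2, "plays": 3},
    ]
    def query(self, **filters) -> list[dict]:
        return [r for r in self._rows
                if all(r.get(k) == v for k, v in filters.items())]

# A consumer discovers domains through a simple registry and joins
# across them client-side; no single monolithic schema required.
mesh = {p.domain: p for p in (PlaylistsDomain(), ListeningActivityDomain())}

def plays_by_playlist_name(user_id: int) -> dict[str, int]:
    """Join two domain-owned sources of truth through their shared API."""
    out = {}
    for row in mesh["activity"].query(user_id=user_id):
        playlist = mesh["playlists"].query(playlist_id=row["playlist_id"])[0]
        out[playlist["name"]] = row["plays"]
    return out

print(plays_by_playlist_name(7))  # {'Focus': 120, 'Workout': 3}
```

The point of the sketch is the shape, not the storage: each team keeps stewardship of its own rows and schema, while any consumer in the company can read every domain through the same `query` contract.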
Jon Taylor: I kind of have a follow-up question, and it's been building in my brain for a little bit. So this is always a dangerous moment on the podcast, when I have a question like this. Phil's laughing nervously already.
Justin Norris: What's he going to say?
Jon Taylor: So Justin, from my own perspective, I've been out of marketing operations for a little bit, but I'm still very much in marketing technology. Some of the things that you guys are talking about here, like the connection between APIs and the different tools that you can use to connect data points, marketing technology is kind of this bridge between the really technical aspects of what we could do in marketing, you know, programming, data connections, web interactions and so on,
but also this data analytics component. I'm just curious about your perspective. 5-10 years from now, for the people listening to this podcast who are, you know, in school or at the beginning of their career, what type of skill set do you think is going to [00:31:00] be the standard operating procedure for top-tier marketing technologists or top-tier marketing operations professionals? Should folks be learning Python, JavaScript? Should they be using data analytics?
Like, what's your read on the current trends?
Justin Norris: That is a dangerous question. Increasingly, so if we look back to the beginning, I was like a first principles guy, you can probably see the pattern by now, but I like to think about it from the beginning. What was marketing ops? It was demand gen marketers with tools who sort of knew how to use them. And then they're like, oh, I like this systems stuff, it's more fun than posting ads on Google, so I'm going to just do that more. That was kind of my journey: this is actually really cool, I'm just going to keep doing it. And then it all kind of got wrapped up together. And I think where it's taken us is to a definition of marketing ops
that's still very tech centric, where tech is almost synonymous. Like, we have martech, marketing ops, what's the difference? It's very different in [00:32:00] sales ops. Even in my company, we have a sales ops team and then we have a Salesforce team, and they report into sales ops, but they are distinct. And I think what we'll see increasingly, and my friend Paul Wilson, who I know you folks know, he's been on the show, he predicted this to me a few years ago, and I think he's absolutely right, is that we're going to increasingly see a split between marketing ops as a business-oriented function that runs the business,
that also manages performance management, some aspects of strategy, really managing the business of marketing from an operational point of view, and systems. Whether systems still live within marketing, kind of functionalized and distributed, or whether it's one monolithic business systems team,
I don't know if that matters as much, but I think the insight is that it's hard to do both. It's really hard to be like, I'm the product manager of this instance, taking features and doing this, and I'm doing all of the daily troubleshooting and run work, [00:33:00] and I need to think about KPIs, and I'm challenging my team on performance and helping shift direction,
the things that I think really good ops people should do. It's really hard to do everything. So obviously within smaller companies you will, because that's the nature of being in a small company, and if you like that, you will work in a small company and you'll do that. But in terms of
the definition of the skill sets, I think there will start being multiple paths. Are you going to be a product manager of a set of platforms? Are you going to be a developer? Are you going to be more in performance management ops? And people will be able to choose those paths and maybe float between them.
But the idea that marketing ops is just synonymous with tech, I think that's really dying. So many people are talking about that more and more. And the idea that one person can do it all, I think that will also start to fade away.
Phil Gamache: Yeah. I think the startup folks listening right now are just like, what? How can you split those things with the small team that we have? But if you haven't had a stint [00:34:00] at a bigger company, and I know when you were at Perkuto you got a taste of working with enterprise as well, it's easy to underestimate how convoluted a lot of the systems stuff can get. And yeah, I think Paul is right on the money there on the future split. JT, I know you want to get into some fun topics
here. There was a really fun sci-fi Star Trek
post that Justin posted.
Jon Taylor: Yeah, so I was introduced to Star Trek early. I didn't have a choice about being a Trekkie, so forgive me at the outset, it was indoctrinated in me. But you made a fun post about this, right? Star Trek predicting so many technological leaps. And I think Star Trek, if you've watched it, obviously got so much right, from the AI on board and, yeah, just so much from there.
I think that in most science fiction, however, we don't get to see these transitional moments, right? Like, we never get to see them go from not having, you know, holograms, or the ability to transport instantly, to having them. What are the effects on people, the normalization of the technology? [00:35:00] It's always, oh, this is so normal.
Obviously we're back to this AI question we can't escape in this moment, but how do you think we prepare ourselves for these kinds of quantum leaps that we can see and forecast? I think, regardless of AI's ability to execute today, it's probably pretty fair to assume that it's going to be a game changer in the long term, and we're going to have AI integrated into our future tech stacks.
So what do you think folks should be doing to prepare themselves? Kind of a follow-on to my last question as well.
Justin Norris: It's funny you mention those transitional moments, because I haven't watched all the Star Trek series, but The Next Generation was my intro point to Star Trek. And then I watched Discovery, and then I watched Deep Space Nine, which I really loved, and then Deep Space Nine ended and I sort of felt a vacuum of, what am I going to do? I'd never watched the original series, and I kind of started it, I just haven't actually been able to fully get into it yet. But just the other day I started watching Enterprise, which I hadn't seen [00:36:00] before. And it's funny, because it sort of covers this transitional moment.
It's about the first Enterprise, where, you know, I think they're like, we're going to go warp 4.5 or something, and that was a big thing. For the
non-Trekkies in the audience, later ships are at warp nine, so this is a very early thing, one of the first real interstellar spaceships for that society. And so it was kind of cool seeing them grapple with coming out of their infancy as a star-faring civilization and moving into taking a seat at the grownup table with all of the other cultures that are out there, the Vulcans and the Klingons. And maybe we're, I mean, that is a much more monumental kind of shift than what we're going through with AI, but there could be something similar.
I think we're probably grossly unprepared as a society, and will, you know, continue to be. I think as individuals we can probably prepare ourselves, but ultimately all of the technology is a reflection of [00:37:00] the people who use it, and collectively the society that uses it. And we can see that with all the tech that we have that can do good, that can do harm, to people, to places. So I suppose this is a kind of dour and pessimistic answer to your question, Jon, but I did a solo episode on AI, and I ended it with this comparison of how this technology is used in Star Trek and how it's used in WALL-E. In Star Trek, it's a tool, it enables people. They're like, computer, give me a diagram, so they're using the AI, but the human is still very much solving the problem. The computer is just taking them to the next level as a thinker. And I really love that. And then there's WALL-E, where you're like an adult baby sitting in your floating hover chair and the AI is just keeping you in this state of inactivity and mindlessness. And I could see us getting there. That's what worries me a little bit about some of the GPT, write me [00:38:00] a blog post about XYZ and create all the socials and just do all my work for me. If we stop thinking, we're going to lose that. So I suppose there's a certain vigilance that I personally am maintaining, and that I hope other people maintain, about let's not get lazy. And I think market forces will to a certain extent be self-correcting there, because I don't think those things will work very well. If they start working well, if you can finish your novel with AI, that actually kind of scares me,
to be honest.
Jon Taylor: Well, just from my own experience, it's not finishing the novel for me, but prompting me through the most difficult bits, the parts that I got stuck on. You know, something just jumped into my mind as you were speaking. The paradox that we have in our society for almost all technology is that our ability to use technology is almost completely abstracted from our ability to understand it.
And I think it's like this: you don't have to be a decent writer or understand the concept of storytelling to go to GPT, follow some influencer's prompt, and create a decent post. Phil and I have been joking about this prompt I found on LinkedIn, and, [00:39:00] I don't know, it reads like a lot of LinkedIn posts. It reads pretty decent.
It's still GPT boilerplate, but I think that ultimately, just to wrap it up, it's up to us, right? Human beings get to choose how we adopt this technology and put it out on the market. And I think individuals like us actually have a role to play in how we apply this technology and move it forward.
Justin Norris: And evangelizing, I mean, to the extent that anyone is listening to me, that's what I'm trying to do as well: here's what I think is a good way to use it, here's what I do.
Phil Gamache: I love the Star Trek WALL-E comparison. Admittedly I'm not a big Trekkie myself. I think it's just overwhelming, all the ways you can get into it and start watching, and I tried but got stuck. But I am a huge WALL-E fan. It's one of my all-time favorite movies, and yeah, I think about it a lot for AI, and who's
going to be our future WALL-E to save future lazy generations from [00:40:00] losing the ability to problem-solve, and not dehumanize a lot of our problems there. But I want to stick to the TV show theme for a second, Justin. Your post I got a good laugh from was your suggestion to have a Shark Tank kind of competition when you are shopping for martech tools. I think a lot of the folks listening have gone down that road of, all right, we need a new reverse ETL tool, or we need a new marketing automation platform. You go on websites, you fill out demo forms, and then for the next month or two you're just chatting with sales folks and jumping on different demo calls, and you're comparing notes between tools, and different tools are using different words to really mean the same thing, and different features. But it would be hilarious to see, like you suggested, vendors argue in front of each other, and how drastically similar some of the pitches would be. It almost made me think of the RFP process in government in Canada, and I think it's like that in the States too. Instead of going out to [00:41:00] vendors and saying, hey, I'm kind of interested in this tool, give me a demo of it, the RFP process is, I have a need for a solution to this problem. And then you put out the RFP, and then vendors bid on it, and maybe they do demos and stuff like that. So it would be cool to kind of flip that. But one thing I thought about that I wanted to ask you is, Juan Mendoza, a friend of the show, built something
along those lines a little bit last year with the TMW 100. I had the pleasure of being a judge, and the vendor applicants had to fill out a pretty rigorous set of questions, and as a judge, reading through it, it almost felt kind of like a sales pitch. So maybe you should collab with Juan next year and turn the TMW 100 into a Shark Tank pitch fest.
Justin Norris: That's really cool. It amused me, the extent to which people responded to that post. Some were like, oh, we actually did this, and some people were like, it worked, and some people were like, it didn't. And even salespeople were actually into it too.
Because I think, I mean, I [00:42:00] haven't gone as deeply down the sales process rabbit hole, but I think as a buyer, and then to an extent as an operator that interacts with our sales team, there's just so much work that needs to be done around sales process. Not just optimizing it within the existing framework, but totally new frameworks, totally
disruptive paradigms, because it doesn't work well. It wastes a lot of salesperson time, it wastes a lot of buyer time, and there's so little trust. I don't trust any vendors anymore. I don't trust anything that they tell me, not because they're bad people or because they lie, but because a sales engineer can spin anything.
I have done the job of, these are all my use cases, this is exactly what I want, tell me if you have it, give me links to your documentation. And yet you buy the product and you end up there and you're like, oh, it isn't what I thought it was. So I think ultimately that's where PLG comes in.
That's where pilots and trials come in. You want to get your hands on it, I want to see what it can do. [00:43:00] The Shark Tank thing would be another way potentially around that. And at least it would cut down the back-and-forth time. And you know they're trying to plant landmines and create fear, uncertainty, doubt about each other. So if, say, Demandbase says, well, 6sense can't do XYZ, then at least the 6sense person is there. It's almost an adversarial court process where they could hash it out and maybe get to truth a little bit more quickly. I'd be totally open to that. But no, as a buyer, it sucks.
It just sucks your time, sucks the life out of you.
Something needs to change. I don't know what the right answer is, though.
Jon Taylor: Yeah, I love that idea. And just as a tag-along to that, there's this idea of clarifying what the category actually is. Sometimes you go into these martech conversations like, what the heck am I even buying? Everybody says something totally different. You know, speaking of hands-on experience, one thing I saw is that you joined a live MOps huddle on marketing ops horror stories, and as tempted as I am to turn The Humans of Martech into a weekly confessional with folks, you know, I think that a lot of us have made [00:44:00] mistakes, grievous errors, sending to entire databases, not naming names, Jon.
But I want to ask your advice for people trying to recover from a mistake. I think this is something common in mops, and common in marketing, to make an error. Email is really the typical one, because you can't take back an email that you send. I think there's two parts to this question.
One part is, how do you stop fear from paralyzing you from doing cool things? And second of all, how do you recover from a mistake?
Justin Norris: On the fear question, I think part of it is just, I don't know, part of it is just going with the flow. When I was on the agency side, I would sort of tell new consultants, listen, I've made bigger mistakes than anybody and I'm here and it's fine. And I think if you have enough talent and you get enough wins, then you earn some credits in the bank that cover your mistakes. And normalizing those mistakes means that they're okay. There's a big difference between errors out of carelessness, errors out of lack of foresight, errors out [00:45:00] of simply not caring, not giving a hoot about it. Those are damning errors. And then there's errors like, we didn't know what we didn't know,
it was my first time, etc. And so it's about having a culture and an environment that normalizes that. I mean, at Perkuto, actually, one of the very first things, I think the first day I started, the COO, Yousef, said to me, you know, you're going to take too much time if you try to sweat over the perfect architecture.
So take an hour, think about it, and then decide and move on. It's okay, that's the culture that we're in. That's the perfect environment, and you will make some mistakes, but you have to know that's going to happen. So make that rule for yourself, have a culture like that. If you're in a culture where it's, no mistakes allowed, and I was just reading about this in a book the other day, well, then you're going to get people that don't take any risks and that are very cautious, and you're going to wonder, where's the innovation in my company? Well, it's because you said, I want no mistakes. You can't have one without the other. So I like to work at places where there's that healthy [00:46:00] balance, where innovation and some level of mistakes are okay. Now, when they do happen, the most important thing, and I think this is universal, is just to take responsibility. There's something very disarming about saying, I messed up, it's my fault, here's what happened, here's what I'm going to do about it, and I will keep you up to date. And people are like, oh, okay. It almost not only brings your credibility back up to zero, but even sometimes a little bit higher, because people can see that you're really taking responsibility, and there's that level of ownership. If it's every week on the same thing, we screwed up again,
and I'm here to tell you, okay, that's not good. But every now and then, or on a new thing, that's normal. And that's fine. So avoid the urge to explain it away, to be like, yes, but actually. It doesn't help. I try to just own it, and it's served me well.
Phil Gamache: Yeah, love that answer, Justin. You also talked about this, the field-blocking [00:47:00] moops that you had, on the RGA podcast, probably a couple of years ago. An oldie but
a goodie.
Justin Norris: Oh, don't remind me about that one.
Phil Gamache: But the one thing that I loved about how you unpacked, you know, what happened and what you were doing after, was your rule for always, in the future, having another person do your QA. And folks listening might just be like, yeah, well, obviously
someone else is going to QA the email that I write. But especially at a startup, when you're moving faster, or even at an agency, when you've got four clients on the go, it's so easy to just QA your own stuff. But you've been in there for multiple hours, you're looking at the same workflow, the same smart rules, the same flow steps, and as detail-oriented as you can be, it's so easy to miss something that would be super obvious to someone coming in with a fresh set of eyes, who just got up and is drinking a coffee. But yeah, we would love to hear your take on that. Sorry to bring up all [00:48:00] the bad memories.
Justin Norris: No, that one was terrible. We literally had to build an entirely different software application to reverse all the data changes, and thankfully, we could. I think having a peer do the QA is an ideal, and an ideal is not always achievable, obviously. In an agency environment, it was a rule that worked really well.
And QA was actually a great job for more junior consultants that were getting started, to come in and have this structured environment. You learn so much through QA, you learn so much through troubleshooting. You see how things are done, but you're not responsible for building it. So it's a perfect kind of career and skill progression role. On my team, you know, we try to do that as much as we can, but it's not always feasible. I have a team of three, and the two people on my team work in different areas, and I can't always be a bottleneck going in to QA their work. So we have self-QA, but at least there's the responsibility of doing the QA and documenting the QA, which for me means having a set of unit tests that are documented: here is the expected result, here's what happened, [00:49:00] here's the test lead. Just having some of that rigor. That's what I really liked about consulting, to be honest: there was this feeling of professionalism and rigor in how you do things, documentation, QA, discovery. It felt good, kind of like almost a uniform that you wear. And so I try to bring that in, even at a relatively small scale-up, as much as we can, and have that culture of professionalism.
So I think you can do that even with a small team.
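The documented self-QA Justin describes, a set of checks each recording an expected result, the actual result, and who ran the test, can be sketched roughly like this. Every name and number below is hypothetical, invented for the sketch, and not any real marketing platform's API.

```python
from dataclasses import dataclass, field

@dataclass
class QACheck:
    """One documented check: what was tested, expected vs actual, tester."""
    description: str
    expected: object
    actual: object
    tester: str
    @property
    def passed(self) -> bool:
        return self.expected == self.actual

@dataclass
class QALog:
    """The QA itself becomes an artifact you can attach to the launch."""
    checks: list = field(default_factory=list)
    def record(self, description, expected, actual, tester="self"):
        self.checks.append(QACheck(description, expected, actual, tester))
    def report(self) -> str:
        return "\n".join(
            f"{'PASS' if c.passed else 'FAIL'} | {c.description} "
            f"| expected={c.expected!r} actual={c.actual!r} | {c.tester}"
            for c in self.checks)
    @property
    def all_passed(self) -> bool:
        return all(c.passed for c in self.checks)

# Example: self-QA on a hypothetical email send before launch.
audience_size = 1842        # say, the count pulled from the smart list
suppressed = 1842 - 1790    # seed/internal addresses filtered out

log = QALog()
log.record("Smart list matches target segment count", 1842, audience_size)
log.record("Internal seeds are suppressed", 52, suppressed)
log.record("No merge fields missing a fallback", True, True)
print(log.report())
assert log.all_passed  # gate the launch on a clean log
```

The value is less in the assertions than in the record: a fresh reviewer, or your future self, can see exactly what was verified, against what expectation, and by whom.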
Jon Taylor: Time flies when you're having fun. We're at our last question, the inevitable last question we ask all of our guests. Justin, you're a podcast host, a speaker, a sci-fi fan, a guitar player, owner of three dogs, a homegrown gardening ops manager, a term we just invented, but I love it. How do you find balance between all the things that you're working on?
How do you stay happy, successful and well balanced in your career? What's your approach?
Justin Norris: It's pretty hard. I would say anyone who says that you can balance all those things easily is either amazing, or I don't believe it. The [00:50:00] way I'm thinking about it is that there is a season, you know, for everything. There's a season for career, there's a season to start a family, there's a season to start side projects,
there's a season to pursue your passions. Some of those seasons can overlap, obviously, at various times, but you have to know what you're doing. So right now, like, last fall I started RevOps FM, and you both know what's involved in starting a podcast and trying to make it successful, committing to publishing weekly and promoting it and all of those things. This is a huge thing, so I'm invested in that. I think the key thing is that you have to be doing something that interests you, and that provides a channel for something that you want to express or want to develop or want to grow. For me, I really enjoy doing it. Like, I really do, and obviously you do too, because we wouldn't be doing this
otherwise. You've been at it for three years. But it's awesome to learn. It's awesome to say, hey, you smart person who knows something that I want to know, come on my show, and I'm just going to ask you [00:51:00] questions for an hour and learn whatever I can from you for free. That's amazing. And even if nobody watched, that would be a really cool thing to do.
So I like it, but there are limits. Something I recently did was bring on a producer on contract, someone who edits the show and that I can work with on that. She actually just reached out to me with a cold email, but her email was so spot on, I was like, I think you understand me. For a long while, you know, I'm a perfectionist, and it was like, I don't know if I can trust someone else to edit the show. But just from the first conversation, I was like, oh, this person actually gets it. She had a background in radio. And so I've now gone through a few episodes of doing that.
She edited your last episode, the episode with you, Phil, that I just posted. And it's working out. And yes, not every little thing is exactly like I would do it, but I get some of my weekends back, and I also have time now to do things like develop YouTube as a channel, or just post more on LinkedIn to promote [00:52:00] my stuff.
Generally, you reach a point where you have to bring in help, and know what you're going to outsource, so that you can do other things. And so that was the realization for me: it's actually impossible to do it all well. So figure out what you're not going to do, and keep yourself happy that way.
So that was a step that I took.
Phil Gamache: Awesome advice. Yeah, we'll have to take some of that ourselves. I would like to get some of my weekends back, but yeah, we are outsourcing a bit more, and we'll take that advice to the bank. Appreciate your time, Justin. This was super fun, time flew by. We'd love to see the future of building RevOps.
fm. Really excited to have another podcast in a similar kind of field, but yeah, really appreciate it, man. Thanks for your time.
Justin Norris: We'll chat again, thank you guys so much. It was a lot of fun.