Welcome to The Outpost, where customer marketers trade what works.
This is where your peers are sharing their best customer marketing and advocacy plays –– the ones that get their CEO's attention.
Join us every other week for new episodes. And to join one of our sessions live and unlock the content that will define your next best play, mosey on over to userevidence.com/outpost
[00:00:08] Jillian Hoefer: Hello everyone. Welcome to The Outpost. I am Jillian Hoefer. I run content here at UserEvidence, and boy are you in for a good one today. We do prep calls with our presenters the week before, and I'm always so hyped up after we do the prep just because I get a sneak peek of the presentation. And today is a doozy in the best way possible. So we have Emily Coleman, who, if you're remotely in the customer marketing space on LinkedIn, you know who Emily is. She's the Senior Manager of Customer Marketing and Advocacy over at LaunchDarkly. And I'm just going to skip to this next one real quick to show you the reason that I even … Oh, and I am at the end of my slides again. This is the second time I've done this, guys.
[00:00:52] Jillian Hoefer: When I saw this on LinkedIn a few weeks ago, Emily had posted about revamping their customer evidence library within Highspot and she said, “Hey, I’ve been working on making sure that I can attribute what this project has meant and disseminate it up to leadership.” She went on LinkedIn and she shared the receipts. She shared some incredible stats around this program that she did. And as she said in this post before, usually attribution around customer evidence is this white whale. And so what she was able to do is build this attribution model for this project for customer evidence that she was working on for the library. And she was able to go to the leadership and say, “Hey, with this project, reps were 1.7 times more likely to mention customer proof on sales calls if they viewed it in the library.” Top consumers in their enterprise sales segment were able to discuss proof points 61% more often.
[00:01:45] Jillian Hoefer: The reps who attended training on how to utilize the proof points within Highspot engaged with two times more unique assets. So she cracked the code. She did attribution and created this model for this project she was working on. So I literally slid into her LinkedIn DMs and said, "Hey, can you talk about this project with our audience over here at The Outpost and the community? Can you share maybe not even your formula, but your philosophy around it?" And boy, when I tell you today is going to be a really good one: she almost approaches it like a data analyst. So we'll be talking a lot about the actual tactics of how she built out the data analysis flow and connected the right pieces and tools for this specific use case. But then she's going to take it a little bit higher level for us at the end and give you the philosophy, so that you can take this and build it for whatever programs you're running, regardless of the tools you're using, regardless of what you're trying to report up.
[00:02:37] Jillian Hoefer: This is going to really, really help you, I think, share something that is super valuable and get credit where credit is due for your evidence programs, where attribution has felt like a white whale in the past. So I'm going to bring Emily on stage here in a second, but I am going to make a quick disclaimer. First of all, it's a little bit early where she is. And also, she told me that she had an asthma flare-up this morning. So let's be very, very kind if she needs to take a quick breather or needs to cough or something. She's not nervous. She promised, right? Emily, you're ready for this.
[00:03:12] Emily Coleman: Yep. Just doing yard work and got some mold in the lungs. So dealing with some consumption or dysentery or whatever.
[00:03:22] Jillian Hoefer: Honestly, let's say mold is better than dysentery. Let's keep that as the baseline here. So Emily, I know I told you the reason that I wanted you here was that LinkedIn post you did, and obviously you have so many other things going on, but in that post you referred to attribution as the white whale of customer evidence. Tell me a little bit why you feel that way. Why has that always been a white whale for you?
[00:03:53] Emily Coleman: Yeah, I mean, it’s really difficult because a lot of what’s happening in customer advocacy is creating content and anyone who’s ever been in content marketing understands that it can be really difficult to track one-to-one how people are engaging with content. And I would also say that especially in the post-COVID years in B2B SaaS in particular, a lot of the buying journey is happening in what they call the dark funnel, which means that there are so many opportunities for a customer or a prospect to be doing activities that you can’t see. And I think that in the era of AI, this is only becoming more significant where it’s going to be very difficult to track those specific signals. And anyone who’s bought software or even bought maybe something on Amazon or something you saw in the TikTok shop knows that there’s never a one-to-one of, “I saw this thing and that made me want to buy.” And so there’s a lot of this research that’s happening before they talk to sales and a lot of it you’re just not able to track.
[00:04:59] Emily Coleman: And so when we're trying to chase this precision, this "what is the one asset that's going to bring buyers," I find that a little bit misguided. And so the next slide that I've got here is: instead, what we want to do is think about it in terms of trends. Think about it in terms of what types of behavior am I starting to see? What am I seeing that's different? And this is more thinking about it like a data analyst. The steps for data analysis are essentially the scientific method. My dad has a PhD in molecular cell biology, so I feel like I was brought up in this mode of thinking. But the most important thing, and anyone who's ever heard me present knows it kind of doesn't matter what I'm presenting on, my big thing is you have to understand what the problem is that you're trying to solve.
[00:05:54] Emily Coleman: What is it that you want to know? What's the question you want to ask? In this particular case, I had just started at LaunchDarkly, and I'd interviewed or talked to a lot of different regional sales directors, regional sales leaders, and individual AEs and asked them, "What's the number one thing I could do for you? What's the big thing that seems to be a problem for you, especially when it comes to customer marketing or customer advocacy?" And they told me, "I'm just not sure what proof points we have. I don't know what customer evidence we have. I know kind of what we have on our website." But everyone was using a different method for how they searched. Some of my reps were just straight up Googling to look for proof points, which was interesting. Some of them maybe had a deck that they were pulling from.
[00:06:43] Emily Coleman: Some of them were only looking at our case studies page on the website. And so my hypothesis was: if I can gather all of this customer evidence into one place, I believe that is going to increase the number of people who view it, because it's going to be in a centralized place. Pretty simple. Then I wanted to know what data I need in order to verify whether or not this hypothesis is true. So we use Highspot for collecting all of this customer evidence together, and it has a few different capabilities. Seismic is the same type of tool. At this point, really all I was measuring was engagement by sales. If our reps were using some of the more advanced features like pitches, or if they were connecting it better to the CRM, I might be able to tell a little bit about how customers are interacting with it.
[00:07:36] Emily Coleman: But at this stage, I kind of left that out. It's just how our reps were interacting with it. And because we use single sign-on and everyone had an account, that was pretty easy to track. Highspot will let you track exactly what they're doing, what they're viewing, how long they're viewing it, all of that great stuff. The other things I looked at were Gong, which we use for call tracking, and Salesforce, our CRM. You want to combine those data sets together; just find them wherever you can. It doesn't have to be an API. It doesn't have to be an integration. It can just be CSV exports. You're going to test it and then report on it. And you can do that over and over again and refine, and you're going to get some pretty cool results. So again, my question was: are reps who access our library more likely to mention customer proof on calls?
[00:08:30] Emily Coleman: That's what I wanted to know. And your template would be: are people who do X more likely to do Y? You just need a baseline, and then you need to measure your change over time. So again, you don't necessarily need to go to your data team. Some folks have data teams with a lot of self-serve analytics. If you do, that's awesome. I think many people don't, and many people couldn't even get those permissions if they needed to. So this is where I started: Highspot for rep content engagement, Gong for conversational intelligence. Specifically, I set up a smart tracker in Gong to look for mentions of customer evidence in calls. You can do it with a saved search; the smart tracker is a little bit better. I'll caveat that I would not say this tracker catches every single mention.
[00:09:28] Emily Coleman: And it does, from time to time, have maybe some false positives, but I felt confident enough in the data as I was looking through it. I'm like, "I think this is probably catching maybe 85 to 90% of what I'm looking for," and that was good enough for me. And then Salesforce is obviously our deal outcomes, what's happening within the deals where this is mentioned. So then how do you put this stuff together and actually start to run some reports on it? The number one place to start here is spreadsheets. Don't try to invent something really complicated. Just do some really simple CSV exports and some VLOOKUPs. If you don't know how to do this, things like ChatGPT or Gemini can give you some good ideas for how to build your formulas. The most important part, if you're going to combine data sets, is that you need a key: a piece of data that's a unique identifier for either your deal or your rep or the company or whatever.
[00:10:41] Emily Coleman: In our case, that might be an email address. An email address is a good place to start, a customer ID, an account ID, something that’s unique that you can use across different data sets. That’s what’s going to help you really combine that with a lot of accuracy, and that’s going to give you your combined dataset. And even just with spreadsheets, you can do this and you can run some pretty simple charts, some pretty simple numbers, and it’s a really great place to get started.
[00:11:11] Jillian Hoefer: Emily, I'm going to put myself as the dumbest person in the room for a second, just in case anyone else is like me. Explain it like I'm five: can you explain what a VLOOKUP is, and then maybe give an example prompt that you would use for ChatGPT if you were trying to sort through the data with that method?
[00:11:29] Emily Coleman: Sure. So with VLOOKUPs or XLOOKUPs, essentially what the formula does is it looks at your unique identifier, your key, takes that value, and then searches for it within your other sheets. So if you've got different tabs in your spreadsheet open, you select that key in your first sheet, then go to the next sheet and identify which column would contain that value. And when it finds it, it asks, "Okay, what is the result that you want me to show?" And that's going to be the other piece of data. So a good example would be: if I had a spreadsheet that showed the rep and all my Highspot data, and I had another spreadsheet that had the rep's email address and calls where customer evidence was mentioned in the same row, it would bring in that call identifier, or the place where it was mentioned, or the summary of the call, whatever it is that you wanted, into that first tab.
[00:12:48] Emily Coleman: So it's just bringing in additional pieces of data. If you're like, "I have something that I want here in this bigger data set," then that's how you would do an XLOOKUP. Gemini is built directly into Google Sheets and can do it really simply. This is something that pretty much any LLM can do with very good accuracy. This is not a very difficult formula, and you're not likely to get a lot of hallucinations here. So I feel pretty confident in saying that even with just some basic prompts, you can definitely get this done.
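For anyone who wants to see what that lookup is doing under the hood, here is a minimal sketch of the same join in plain Python. The CSV contents, column names, and numbers are invented for illustration; real Highspot and Gong exports will look different.

```python
import csv
import io

# Hypothetical CSV exports, keyed on the same unique identifier (email).
highspot_csv = """email,library_views
ana@example.com,42
ben@example.com,3
"""

gong_csv = """email,evidence_mentions
ana@example.com,11
ben@example.com,1
cara@example.com,5
"""

def load(text):
    return list(csv.DictReader(io.StringIO(text)))

# Index one export by the shared key. This is exactly what a
# VLOOKUP/XLOOKUP does when it scans the lookup column.
gong_by_email = {row["email"]: row for row in load(gong_csv)}

combined = []
for row in load(highspot_csv):
    match = gong_by_email.get(row["email"], {})  # no match -> default 0
    combined.append({
        "email": row["email"],
        "library_views": int(row["library_views"]),
        "evidence_mentions": int(match.get("evidence_mentions", 0)),
    })

for row in combined:
    print(row)
```

The point is just that the join hinges on the key being identical across both exports; once the rows are combined, any chart or count can run off the merged data.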
[00:13:24] Emily Coleman: So this is the kind of data I was pulling with a spreadsheet. These are the number of views we had in our customer evidence library in 2024: about 30% of users were looking at this specific spot in Highspot where all of our customer evidence was. In 2025, that number went up to about 3,200 unique views, and 71% of users had logged in and looked at evidence within that library. So that was the increase I was looking at. Pretty great data. It shows that once we built it, people did come, which was the whole point of that first step of the hypothesis. However, spreadsheets are only going to take you so far. XLOOKUP and VLOOKUP, like I mentioned, are pretty decent. You can ask Gemini some questions, and you can also ask it within Google Sheets.
[00:14:22] Emily Coleman: I imagine Microsoft is coming out with a similar type of thing with Copilot, which, as I recall, will be using a comparable model, so it's going to be functionally the same. But unless you know SQL, or you have a data team that can run SQL queries and build a database for you, those more complex calculations become pretty difficult, and if you're rebuilding from scratch, you might be missing some more interesting outcomes. So I pretty quickly moved to using the web or desktop version of an LLM. You could use any of them: Claude, ChatGPT, or Gemini. These are the ones where you go to chatgpt.com, type in your question, and upload your spreadsheet, and it'll be a little bit faster at joining your data.
[00:15:18] Emily Coleman: It's better at coding your formulas for you, and it can also write some reports. So if you have a data set that you've joined together, you can ask questions and it can push out reports for you.
[00:15:36] Emily Coleman: One of the big reasons to do this is that you can ask questions in plain language, including more sophisticated statistical questions. Things like the Pearson correlation coefficient. And I don't want to get too deep into what that means. If you don't know, you're going to have these slides, and you can ask ChatGPT; it's kind of a "go ask your parents what this means" situation. But it basically measures how strongly two things move together: is the pattern we're seeing likely to have happened by random chance, or because of what we did? That's the explain-it-like-we're-five version. You can plot some distributions, and you can ask about outliers that might be giving you incorrect or insufficient results.
[00:16:35] Emily Coleman: So one of the things I wanted to know from my data set is: does rep tenure matter when it comes to how often customer evidence is mentioned in calls? Because it might be that it's not that they're visiting the library more; they may just have more experience, and so they know about more customer proof points because they've worked with those customers. Or it might be that there's a difference between enterprise reps and commercial reps, because enterprise reps have very long deal cycles and our commercial or mid-market reps have really short deal cycles. And so you want to segment for some of those things, and that might change how you pull your reports. It might change the types of information you're adding to those CSVs, because if you can get that stuff from your CRM or from Highspot or wherever, you may want to bring it in, because it's going to help you group things together.
[00:17:33] Emily Coleman: Other questions you can ask are like, “What recommendations would you make based on some of these outcomes?” And so this is a little smattering of what I found with using ChatGPT. So we were seeing 46% more proof point conversations with commercial reps, 61% more proof point conversations for enterprise reps who were viewing a lot of this data in Highspot versus those who weren’t. And some of these numbers might look a little different, Jillian, than what you said, because I ran this again really recently. And so obviously the numbers I think I gave you were in January. So if there’s a discrepancy between numbers, just this stuff is maybe a little bit fresher.
[00:18:18] Emily Coleman: So this is all really interesting information, and I had a lot more questions. One of the natural questions you might ask as a result is: if reps mention evidence in a call, does that change any of the deal outcomes? Are they closing deals faster? Are they closing at a higher win percentage, maybe a higher deal size? And the answer is, I don't know. There's no statistical significance to what I'm finding yet. Some of that is just that my Gong tracker may not be super great, so it's hard to make the leap from where I was starting and trust that tracker data enough to take it to the next step. Rep experience is a big one too. Also, just because they're mentioning evidence in a call doesn't mean they're doing it very effectively.
[00:19:14] Emily Coleman: They're mentioning it, but how they're bringing it up may not be awesome. They might just be using the stuff we have in the first-call deck. And so it's good to get very aggressive with how you're asking questions in these LLMs. The other thing I'm going to bring up, especially if you're using the web or desktop app of Claude, Gemini, or ChatGPT, is that you're going to deal with something called context rot. The way these LLMs work is that they can only hold a certain number of tokens, which are little snippets of words, pieces, fragments. And after a while, the model can't keep track of it all. It's very similar to our brains. If I just kept talking and talking and gave you tons of information, you would forget what I said at the beginning or lose it in the middle.
[00:20:09] Emily Coleman: And anyone who's ever worked with ChatGPT has experienced this, where you get maybe five or ten minutes into a conversation and then it starts to get worse and worse, and you're like, "What is happening? Where are you going?" That's context rot. You have to be aware of it, because if you aren't, you're going to get some results where you're like, "Man, this looks great." And then you go back and run it again, and ChatGPT is going to be like, "I don't know where you got that from. That's crazy. This is not in the data." So you have to be really diligent about starting a new window, which is why one of the first things you need to do, regardless of whether you're using a spreadsheet or an LLM, is have one big spreadsheet that you're pulling from.
[00:20:59] Emily Coleman: And when you ask a new question, you’ve got to start a new window. And this is a problem that you just need to be aware of, which is why I kind of recommend, if you can, going to the next level, which I’ll go into here in just a second. But before I do that, Jillian, do you have any questions?
[00:21:16] Jillian Hoefer: Yeah. So what I was going to ask is basically what you just talked about is you have to open a new chat with this CSV. So then I would assume what you do at the beginning of that new chat, Emily, is you just give it that CSV, you upload it as the first thing every time and then give it context for what question you’re asking. Is that correct? Or is there any other historical knowledge that you need to pull over from other chats that you’ve done in the past?
[00:21:38] Emily Coleman: No, I mean, that’s pretty much correct. You can give it some information. You can ask it. What I do a lot is at LaunchDarkly, we have OpenAI models. We also have Gemini models and Claude models. So I might take some of the information that ChatGPT told me and I’ll give it to Gemini along with the spreadsheet and say, “Hey, how valid is this claim?” And let Gemini run it and answer that question, kind of pit them against each other as much as possible. That’s one of the best ways I can tell you to do that, to avoid having really insane hallucinations. The other thing you can do is that I found that NotebookLM is a little bit better at doing this. It’s Gemini’s tool where you can basically upload a bunch of different files in there and then you can start asking a lot of questions.
[00:22:34] Emily Coleman: And NotebookLM is probably where I would go if you wanted to save something and then just continue to ask it questions, because it will refer back to it a little bit better. So that would be my recommendation there. But it's something to be aware of, because it can get you off on the wrong track.
[00:22:53] Jillian Hoefer: When you said, what is it? Context rot, is that what it is? I felt so validated. I was like, "Oh, there's an actual name for it. I'm not just crazy."
[00:23:02] Emily Coleman: That's a real term in the AI world, and it happens to everybody, including people who are coding. Some of the newer models claim they have about a million tokens of context, but the studies are showing that they don't really. They still tend to lose the plot. And so managing your context window is really important. If you didn't know, now you know.
[00:23:27] Jillian Hoefer: Now we know. We've learned many, many things already today. And I think Tanner, in our comments, just gave us a very good transition point. He asked, would using Claude Code solve for that issue?
[00:23:38] Emily Coleman: Yeah, Claude Code does solve for this issue, so I'm glad you asked. So, AI coding tools, Claude Code specifically, and I mean the Claude Code that you run in your terminal or your IDE, not the web app version. An IDE is just a fancy term for a program that programmers use; there are a lot of different ones out there. Google has a great one called Antigravity that's free. VS Code is another very popular one. You don't have to be a programmer to use an IDE, and you don't have to be a programmer to use Claude Code. And you don't need Claude Code specifically: OpenAI has Codex, which is functionally the same, and Gemini has a CLI. All of these are functionally the same thing, where at its core it's kind of a coding platform.
[00:24:42] Emily Coleman: But the really nice thing about being able to do this is you can set up projects. And what I found to be even better than running a spreadsheet is that you can use Claude Code or Codex to create a database. A database is better than a spreadsheet, and there are lots of very lightweight, self-hosted database options: SQLite is one of them, DuckDB is one of them. Building a database is going to sound really intimidating and scary, but Claude Code can help you build it. And the nice thing about a database is that you can run SQL queries. I know enough about SQL queries to know what they are. I'm not going to pretend that I know how to write one; I am not a trained expert in coding SQL queries. But you can get those SQL queries from Claude Code.
[00:25:37] Emily Coleman: It will run them. There’s also a lot of really great extensions and really great skills within Claude Code and Codex that will help you build really beautiful dashboards. It will help you build those data visualizations. And if you have your database built, then you can really start to ask it very cool questions. You can bring in a whole bunch more different points of data. And if you really want to get more sophisticated with it, you can plug in either using MCPs or APIs or some of these other things to have a refreshed set of data. So you can run that pipeline more often and you’re not having to start from scratch every time.
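As a rough idea of the database approach she describes, here is a hedged sketch using SQLite from Python's standard library. The table, column names, and numbers are all made up for illustration; her actual setup used Claude Code and DuckDB against real exports.

```python
import sqlite3

# An in-memory database; pass a filename instead of ":memory:" to
# persist it between sessions.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE rep_activity (
        email TEXT,
        segment TEXT,
        library_views INTEGER,
        evidence_mentions INTEGER
    )
""")
con.executemany(
    "INSERT INTO rep_activity VALUES (?, ?, ?, ?)",
    [
        ("ana@example.com",  "enterprise", 42, 11),
        ("ben@example.com",  "enterprise",  3,  1),
        ("cara@example.com", "commercial", 18,  5),
        ("dev@example.com",  "commercial",  0,  0),
    ],
)

# "Do heavy library users mention evidence more often?" by segment.
# AVG skips the NULLs produced when the CASE condition fails, so each
# average covers only the matching group of reps.
query = """
    SELECT segment,
           AVG(CASE WHEN library_views >= 10 THEN evidence_mentions END)
               AS heavy_user_avg,
           AVG(CASE WHEN library_views < 10 THEN evidence_mentions END)
               AS light_user_avg
    FROM rep_activity
    GROUP BY segment
"""
for row in con.execute(query):
    print(row)
```

Once the data lives in a table like this, each new question is just another query rather than another round of spreadsheet exports, which is the practical advantage over the CSV workflow.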
[00:26:21] Emily Coleman: So using Claude Code, this is what I found. Again, two times more calls from our Gong tracker with reps who are highly engaged with our content. I put a nice little p-value here, which, if anybody failed statistics like me, is the probability that something is happening by random chance rather than because of what you did. And I was able to calculate the p-value and show that Highspot views do not independently predict higher win rates after controlling for tenure and some of these other things. So if you don't know what a p-value is, you can ask Claude or Gemini. We did find some interesting, I would say very early, data after running this data set again: we are seeing maybe a 47% higher win rate for new business deals where customer evidence was mentioned on calls.
[00:27:17] Emily Coleman: However, this doesn’t quite reach statistical significance for us. My tracker is only tracking calls that happened since November, so it’s something that I can kind of continue to run. And at some point when maybe that becomes a little bit more statistically significant or my tracker gets a little better, that could be a really interesting proof point.
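Her point that a large lift can still miss statistical significance is worth seeing concretely. Below is a sketch of a two-proportion z-test with invented win counts chosen to produce a lift in that neighborhood; her real analysis controlled for more variables than this.

```python
from math import sqrt, erfc

# Hypothetical samples: win rate with vs. without evidence mentioned.
wins_a, deals_a = 14, 25   # evidence mentioned on calls
wins_b, deals_b = 9, 24    # no evidence mentioned

p_a, p_b = wins_a / deals_a, wins_b / deals_b

# Pooled proportion and standard error for the two-proportion z-test.
p_pool = (wins_a + wins_b) / (deals_a + deals_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / deals_a + 1 / deals_b))
z = (p_a - p_b) / se

# Two-tailed p-value from the normal distribution.
p_value = erfc(abs(z) / sqrt(2))

print(f"lift: {p_a / p_b - 1:.0%}, z = {z:.2f}, p = {p_value:.3f}")
```

With samples this small, the lift is large but the p-value stays well above 0.05, which matches her conclusion: keep the tracker running and let the sample grow before claiming the proof point.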
[00:27:38] Emily Coleman: And I'm hoping to have some leave-behinds for the group on how I built this in Claude Code specifically: what prompts did I use, which tools did I use, why did I use DuckDB over SQLite, how do you even get Claude Code to build you a DuckDB file or a SQLite file? I would love to share the full dashboard I have, but it has a lot of proprietary data in there, so I might be able to provide a more anonymized version for you to look at. But it can do some very, very cool things and package them in a really neat way. The difference between what I was doing with ChatGPT and Gemini versus what I'm doing with Claude Code, I can't emphasize enough, is worlds different. It's so much better. The look is better, it's faster, and I feel a lot more confident in the data I'm extracting from this database.
[00:28:38] Emily Coleman: If you haven't tried Claude Code yet, if you haven't dipped your toe in, there are a lot of very, very cool things you can do with it. It's well worth exploring.
[00:28:52] Emily Coleman: Any questions about the Claude Code stuff?
[00:28:57] Jillian Hoefer: I'd be shocked if we don't have any questions come through by the end on digging into some of this stuff, but I think we're good to keep going for now, Emily.
[00:29:06] Emily Coleman: Cool. So yeah, to go back to where we started: start with a really simple data set and continue to build. I think it's really easy to get very excited about what's going on with LLMs and ask them a lot of questions, and again, if you're working within the web or desktop apps of these models, you might find you're getting a little frustrated with hallucinations or context rot. So start really small. Start with a spreadsheet where you can, see if you can join those things together, and then continue to build up and think about how you can do it better. But really, it's more about the philosophy: What's the question you want to answer? Where could you find that data? How do I combine it? And then how do I analyze it?
[00:30:00] Emily Coleman: And then if you are in a situation where you do have Claude Code and it's working pretty well, but you want to bring it into some of your other data tools: the other nice thing about doing it this way is that if you can validate that you can get some type of number or output from it, you may have an easier time going to your data team to ask them to build you something that involves the APIs or other data connections you might not have access to, because you've already solved for it at least once. So again, the three things I want to leave you with. First, start with a question rather than trying to fit a dashboard. A lot of these numbers are things I wouldn't see in a dashboard. They're not a dashboard that would be available in Highspot, because I'm bringing in Gong tracker data, Salesforce data, and Highspot data.
[00:30:51] Emily Coleman: Second, use the tools that you already have. You can get a lot further than you think; you just need to be really curious about the questions you're asking. And third, be honest about trend-based reporting.
[00:31:06] Emily Coleman: I'm not going to represent to my leadership that this is exactly what's happening, but I can be very confident it is happening. The numbers are really clear. No matter how I ran it, whether with a CSV in Google Sheets or with ChatGPT or Gemini, we did see across the board that reps who were looking at our evidence library were mentioning customer evidence in more calls. And the other really interesting thing we found is that it wasn't just the volume they were looking at; it was whether they were looking at unique assets. The more unique assets they looked at, the more likely they were to bring up customer evidence in a call. And I think that in and of itself is a really interesting and cool finding, and something that really surprised my leadership.
[00:32:03] Emily Coleman: And they were very excited about that information. It gives me a lot more leverage to say we want to do more direct trainings with reps on how to use this library, because we know that if they know it exists and know how to use it, they are going to mention customer evidence in more calls. And hopefully that's going to help me then begin to prove that if they mention evidence more often in calls, that leads to better deal outcomes for them. And that really gets everybody bought into what you're building.
[00:32:35] Jillian Hoefer: Thank you so much for your time today, Emily. It was so incredible having you. You are just brilliant, and we're lucky to be in your presence.
[00:32:45] Emily Coleman: Thanks for having me. Yeah, feel free to connect with me on LinkedIn and ask any questions. I’m always happy to talk shop.
[00:32:52] Jillian Hoefer: Awesome. Okay. Well, thank you all so much and we hope to see you back here in two weeks at The Outpost. Yeehaw, have a good one, everyone. Thanks for joining us at The Outpost. If you enjoyed today’s session, mosey on over to userevidence.com/outpost, where your peers are sharing their very best customer marketing and advocacy plays every other week. And to learn more about how UserEvidence helps B2B marketing teams manage evidence, advocates, and references all in one place, check out userevidence.com.