How do product teams decide what to build and what not to? Each episode explores how you can use experimentation, A/B testing, and evidence-based decision-making to ship better features, reduce risk, and drive measurable business impact. Hosted by Ashley Stirrup, CMO at GrowthBook, the show goes beyond theory to unpack real decisions, real experiments, and real outcomes so you can learn how modern product organizations turn hypotheses into results.
Ashley Stirrup (00:33)
Hello and welcome to today's show. Today we have Nafis Shaikh who is the Director of Product Management at Chess.com. Nafis, welcome to the show.
Nafis Shaikh (00:42)
Ashley, thank you for having me. I'm excited to talk about experimentation.
Ashley Stirrup (00:46)
Yeah, I'm excited too, and particularly glad to have your perspective on experimentation from the product manager's point of view. To kick things off, maybe you could tell us a little bit about Chess.com, the business, and where it sits in its experimentation journey.
Nafis Shaikh (01:06)
So Chess.com has been around for many years now. It's almost a 20-year-old company; I believe we had our 20th anniversary last year. Experimentation at Chess.com is a relatively new concept, though. I think we ran our first test in 2023, so much later than when the company started. This company has been built off of passion, by people who love chess and want to build for other chess players. And so over the years the need for experimentation felt less pressing, because we were just building all of the things that users want to have; well, we're the users ourselves. Fast forward to 2023, and we saw that we probably needed to start investing a little bit more in better understanding our different users.
Our user population has skyrocketed over the course of the last five years. Right from the beginning of COVID onwards, we've seen massive waves of growth. And with this incoming population, the dynamics of who our users actually are have shifted a lot. We have users who have never played chess before, who don't know how to move the pieces, all the way to "I'm a rated FIDE player, I play competitively." So how do we balance the breadth of our product so it suits very different styles of users with an incredible variety in skill level? Experimentation is something we've had to use to make sure we're building the right things for our users and not alienating parts of the population on either side.
Last year we had a pretty successful year. We ran about 400 tests, which is pretty impressive given where we've come from. And this year we've set a goal of running a thousand tests, and we're certainly on our way: in Q1 of this year we ran 195 tests. Our goal is to move incredibly fast. We want to test out as many different ideas as possible. We want to ship fast, we want to learn fast, and we want to repeat the cycle over and over again, because ultimately that's what's going to help us grow as a product.
Ashley Stirrup (03:29)
Thanks for that, that was a great introduction. You have a long history of experimentation at companies like Zynga and Prodigy. When you joined Chess.com, did you feel like you were kind of an evangelist for ramping up the experimentation being done there?
Nafis Shaikh (03:36)
Mm-hmm.
Yeah, absolutely. When I first joined, we were well on our way with testing. It wasn't a new concept by any stretch of the imagination. But the rigor that goes along with testing was still at a pretty basic level. The goal was a simple A/B test: is this better than that? And I think that certainly served its purpose for a long time. But now we're in a position where, hey, let's think really deeply about how we connect back to user problems, and how we construct hypotheses in a way that allows us to answer exactly why the things we're building actually work. So we're going from rudimentary tests to more evolved, better-constructed ones that ultimately allow us to generate better insights about our users. Our goal is to deeply understand the psychology of user behavior and understand why users do the things that they do. So we put them in these controlled environments to best understand: if we did this, what ultimately happens? And if that happens, what are the side effects? What are the weird things that happen in scenarios you would never even think of? So the complexity and depth of understanding of our tests is something we've really tried to improve, because it's one thing to run a test; it's another thing to collect as much information as possible about what that test actually says.
Ashley Stirrup (05:17)
Yeah.
Yeah, I love it when our guests talk about going from just "is A better than B" to "how do we learn about our customers." And I can't help but think, as you were telling that story, of the spectrum from really basic people who don't even know how to move a chess piece, which, by the way, I'm a huge fan of chess myself, so this is a topic near and dear to my heart, all the way to the very advanced. In order to design experiments that help you learn, you must really have to think through who your user is and how to design a test for those different types of users. Is that true?
Nafis Shaikh (05:54)
Yeah, absolutely. Chess.com as a company has always been about building the best chess product. That's our goal. Our number one goal is to make all of our users happy and build products that they ultimately want to use deeply. That is always at the heart of whatever we build. That being said, like I said, our user groups have shifted a lot. Many different types of users have entered the ecosystem, and they have different desires, different needs, and different wants. So how do you best understand how a feature that's built for everybody evolves, shifts, and is different for different players? We have many different features, like puzzles and our bot system, and a whole coach feature. Well, how the coach talks to you when you're a rated player is very different from how it should talk to a new player. You can get into really advanced concepts with a high-rated player, but to a new player those concepts don't mean anything. In fact, it actually makes things worse: throwing difficult concepts at them confuses users more than it helps. So how do we understand where that line is? How do I figure out what cadence of speech and what vocabulary is best for users
Ashley Stirrup (06:58)
Yeah.
Nafis Shaikh (07:17)
on the lower rating scale, and how do I make sure it scales properly for users on the other end of the spectrum? These are all questions we have to ask, because the reality is that one size fits all doesn't actually end up fitting anybody.
Ashley Stirrup (07:31)
Right, right. And as a product manager, do you have certain metrics that you're held accountable to from the business side?
Nafis Shaikh (07:39)
Yeah, absolutely. I generally think of metrics across four dimensions. You have your inflows: how effective are you at acquiring users into your product? You have your engagement bucket: how well do users do the core things in your product? For us that's, are they actually playing chess? Are they doing puzzles? The next bucket is, after they engage, how likely are they to return to your product over a set period of time? Usually you measure day one, day seven, day 30. We have a whole WAU model that focuses on different segments of the population, so you have your new users, current users, and returning users. They all have different needs and wants, and we try to measure their retention rates separately. And then of course we care deeply about monetization: how likely is somebody to start a free trial with us, and how likely are they to end up paying for our subscription product as well?
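To make those retention buckets concrete, here is a minimal sketch of how D1/D7/D30 retention can be computed from raw activity logs. The event shape and the sample numbers are illustrative, not Chess.com's actual pipeline:

```python
# Minimal D1/D7/D30 retention sketch. Event shape is illustrative only.
from datetime import date, timedelta

def retained(first_seen: dict[str, date],
             active_days: set[tuple[str, date]],
             day_n: int) -> float:
    """Share of users who were active exactly N days after their first session."""
    hits = sum((user, d0 + timedelta(days=day_n)) in active_days
               for user, d0 in first_seen.items())
    return hits / len(first_seen)

# Toy data: three users and their active days (made-up).
first_seen = {"a": date(2025, 1, 1), "b": date(2025, 1, 1), "c": date(2025, 1, 2)}
active = {("a", date(2025, 1, 2)), ("c", date(2025, 1, 3)), ("a", date(2025, 1, 8))}

print(f"D1: {retained(first_seen, active, 1):.0%}")  # D1: 67%
print(f"D7: {retained(first_seen, active, 7):.0%}")  # D7: 33%
```

Segmenting this the way Nafis describes (new vs. current vs. returning users) would just mean running the same computation per cohort rather than over the whole population.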
Ashley Stirrup (08:38)
Yeah, and so when it comes to experimentation, is monetization your North Star or is it more about engagement? How do you decide what's the most important metric to optimize?
Nafis Shaikh (08:49)
Yeah, I think we're spread across different functions here, so it's different on a per-team basis. We have a monetization group whose sole purpose is to figure out how to improve our monetization metrics. But then the other parts of our product are about just making a good chess game. We have a team dedicated to gameplay, which covers every single part of the chess experience: make it better, from how the pieces move to the cadence of play, the push notifications associated with it, the UI attached to it, how effective the feedback from the coach is in any scenario. We care deeply about the experience and we focus a lot on that, because ultimately we know that if we do a great job on the experience, people will engage more deeply, they'll retain better, and they'll monetize.
Every product I've ever worked on operates in that chain. It's rare that you just get to monetize immediately. You actually have to give people a really solid product that they find value in. They'll come back and use the product more often, and when that tipping point hits, they're more likely to pay for your product because now they've found the value in it.
Ashley Stirrup (09:41)
Yeah.
Yeah. And so what's been the driver for this desire to really ramp up the number of experiments you're running per year?
Nafis Shaikh (10:10)
Yeah, I think we have a really effective team, and we have a lot of different focus areas. Experimentation serves many different purposes. One is that it's actually a measure of how effective we are at just getting through work. Our goal is to ship. We don't want to waste too much time simply mulling over data and talking about different feature ideas. We actually want to go and do them. We actually want to go build them, learn,
Ashley Stirrup (10:28)
Mm-hmm.
Nafis Shaikh (10:38)
rinse, and repeat. Our goal of a thousand experiments is actually partly about determining how effective we are at just getting through work. So one corner of this is that the number of experiments we run is a measure of how effective we are as a group at getting features out. Second is our
Ashley Stirrup (11:02)
Yeah.
Nafis Shaikh (11:03)
desire to understand our users more deeply, beyond just "this experiment generated these results; we had this KPI lift on this particular metric." Great. All that stuff's awesome, and we need it to evaluate our tests. But we often come to the position of, so what? What does that actually say about a user's psychology at that point in time? Does it tell us that they're deeply invested in this part of the product? Does it tell us that this speech the coach gave was really inspiring? We try to go beyond the basic KPI results into "tell me more about what this says about our users."
Ashley Stirrup (11:48)
Yeah, yeah, that's really important. You've mentioned the AI tutoring quite a bit, so I assume that's an LLM-powered chatbot tutor type of thing. When it comes to A/B testing something like that, are you measuring different metrics than you would be with a traditional A/B test?
Nafis Shaikh (11:59)
Mm-hmm. Yeah.
Yeah, this one's actually really difficult to test, because it's a very large feature. It's called our coach system. You can go play a game against the coach, and the coach will guide you while you're playing. They'll give you tips and tricks on how to proceed, and they'll tell you, yeah, this was a good move, or this is why this was maybe not the best move. And we have a speech element to that: the coach vocalizes its feedback.
It's actually really difficult to measure whether a user has their volume on. We can control that at the app level, but at the OS level they can have their volume down, and we have no idea if they're listening to it. So we're trying to figure out better ways to measure the efficacy of the coaching itself across that specific body of work. We know that...
Ashley Stirrup (12:43)
Mmm.
Nafis Shaikh (13:00)
If people come back and play the coach, that's obviously a really great thing: if they're finding value in the system, the coach is providing good insights into how they're playing and helping them get better. Theoretically, what we should see is, well, your rating should improve because you're getting better coaching. And two, you're coming back to the system to get even better at the game than you were previously.
So yeah, it's a hard one to test, but we're working on ways to better understand it. One of the ideas we had recently: the coach is animated, so its mouth starts moving. Again, how do we tell if the user can actually hear that? Okay, well, maybe we can put some sound UI coming out of the coach's mouth; if users can click it, we get a better sense of whether or not they're actually hearing the audio, because they've given us the indication that they clicked on it. So again, a hard one to test, but at its core, if people are coming back to the system, that tells us a lot.
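As a rough illustration of that idea, here is a tiny sketch of the instrumentation it implies: log a tap on the sound indicator and treat the tap rate as an audio-confirmation proxy per coach session. The event names and sample data are hypothetical, not Chess.com's real schema:

```python
# Hypothetical "did they hear the coach?" proxy: sessions where the coach spoke
# and the user tapped the sound indicator, divided by all speaking sessions.
from collections import defaultdict

events = [  # (user_id, session_id, event_name) -- illustrative sample
    ("u1", "s1", "coach_speech_played"),
    ("u1", "s1", "audio_indicator_tapped"),
    ("u2", "s2", "coach_speech_played"),
]

session_events: dict[str, set[str]] = defaultdict(set)
for _, session, name in events:
    session_events[session].add(name)

with_speech = [s for s, names in session_events.items() if "coach_speech_played" in names]
confirmed = [s for s in with_speech if "audio_indicator_tapped" in session_events[s]]
print(f"audio-confirmed coach sessions: {len(confirmed) / len(with_speech):.0%}")  # 50%
```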
Ashley Stirrup (14:07)
Yeah.
Yeah, yeah. I was talking to one of our customers, Khan Academy, and they recently published a study where they were trying to figure out how to look at the questions their AI-powered tutor was asking,
to determine whether this is an engaged learner, an active learner, or a passive learner, so that they could try to make that tutor a lot smarter. So they were actually using AI to analyze the prompts and then score them. They basically created their own metric around engaged learning: okay, if we use this prompt, how much engaged learning do we get versus this other prompt?
Nafis Shaikh (14:45)
Okay.
Ashley Stirrup (14:56)
And so it sounds like you have similar types of challenges, and maybe there are opportunities to create similar types of metrics.
Nafis Shaikh (15:03)
Yeah, right. The way I see it, it comes back to feedback: can we get the users to tell us that they're invested in this? And for a system like this, the heart of it is: did you come back? That's enough for us.
Ashley Stirrup (15:09)
Yeah.
Yeah.
Yeah, right, right.
And how would you say Chess.com is doing at embracing humility and, you know, expecting 80% or 90% of experiments to fail?
Nafis Shaikh (15:34)
Yeah, pretty well, truthfully. I think we've gone from a world where everything is true to, you know what, this may not necessarily be true. And again, our population shift is a real consideration for us. The population that played this game 10 years ago is not the same one that plays today. So we know that not everything works for everybody. And in that world, I could
Ashley Stirrup (15:58)
Yeah.
Nafis Shaikh (16:00)
be wrong, right? If I'm building for one end of the spectrum, I could be wrong. And I think everyone's generally on board with that, and humbled enough not to push too hard, because, yeah, we could easily be wrong. We don't know what the ancillary outcomes of a test could be: great for one set of users, maybe terrible for another. So we try to validate that as deeply as we can.
Ashley Stirrup (16:23)
Yeah.
Yeah, because I feel like that's one of the big unlocks for a company as it goes from less mature in experimentation to more mature: once people embrace that we're not as smart as we thought we were, it turns into, I want information, I want data on everything we're doing. And I would imagine you, coming in with your background and past experience, particularly at Zynga, would have been a champion for that kind of humility, helping more people embrace it. Because I would imagine at a company like Chess.com, everybody's like, we know what makes a good app, right? Because they're all players.
Nafis Shaikh (17:04)
Yeah, look, this doesn't happen without the support of our leadership team. Our CEOs have been great at building this product; they are deeply invested in the ecosystem, and they understand at a very deep and intimate level what works for chess players. So for them, giving us the space to do this and help validate these ideas, I imagine, was probably not the easiest thing in the world to do. But they fully accept the fact that they're not 100% right all the time. And when we test, we actually learn deeply about the truths of our product. At the end of the day, that's what's most important: we're on the search for truth, and experimentation is the best way to find those truths.
Ashley Stirrup (17:45)
Yeah.
Yeah, yeah. And what I've heard is that at a number of companies, once they started moving to LLM-powered features, that's when the switch flipped, because people knew they couldn't predict what the experience was going to be like, especially with a wide variety of different types of users.
Nafis Shaikh (18:07)
Yeah, we haven't fully moved to AI-generated experiments yet, but we have every desire to move into that world. One of our mandates moving forward is: we want to have a conversation in Slack about a particular feature, build out all of its requirements through that conversation, and hand it off to an agent. The agent will go build the feature, set up the experiment, run the experiment, and then report back to that same Slack thread with the results of the test. That's the world we're moving into, and that's the world we're certainly trying to build our infrastructure around. We're not quite there just yet, but it's something for us to really push towards.
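In the spirit of that vision, here is a hypothetical sketch of what such a Slack-to-agent loop could look like. Every function below is a stub stand-in; Chess.com's actual tooling is not public, and the names, metric, and results are invented for illustration:

```python
# Hypothetical Slack -> agent -> experiment -> Slack loop. All stubs.
from dataclasses import dataclass

@dataclass
class ExperimentSpec:
    feature: str
    hypothesis: str
    primary_metric: str

def spec_from_thread(thread: list[str]) -> ExperimentSpec:
    """An agent would distill requirements out of the Slack conversation."""
    return ExperimentSpec("coach-intro", thread[-1], "d7_retention")

def build_and_launch(spec: ExperimentSpec) -> dict:
    """Stub: the agent writes the code, flags it, and runs the A/B test."""
    return {"feature": spec.feature, "lift": 0.03, "p_value": 0.02}  # fake result

def post_back(thread: list[str], results: dict) -> None:
    """Report results to the same thread the idea came from."""
    thread.append(f"Results for {results['feature']}: "
                  f"{results['lift']:+.0%} on the primary metric (p={results['p_value']}).")

thread = ["Idea: warmer coach intro copy", "Hypothesis: warmer copy lifts D7 retention"]
post_back(thread, build_and_launch(spec_from_thread(thread)))
print(thread[-1])
```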
Ashley Stirrup (18:48)
Yes.
Yeah, that's definitely the dream. And to me, there are kind of two categories. One is AI-powered experimentation: you're using AI to code, to insert a feature flag, and then to create the experiment and analyze it. The other is testing an AI-powered tutor or chatbot or whatever it is that's actually powered by an LLM. Because then you get into the whole, which model should I use? Do I want a faster experience or a smarter experience? All those different trade-offs, and which system prompt performs better. That's kind of a whole other category, where your need to do experimentation is even greater, and there are probably new metrics you need and a new strategy to really understand success. Yeah.
Nafis Shaikh (19:19)
Sure.
Yeah, absolutely. We're not quite there just yet. It's certainly part of our future, part of where we want to get to. Right now we're really trying to do experiments at a really high level
Ashley Stirrup (19:42)
Yeah.
Nafis Shaikh (19:53)
but also do them fast. And that's often a trade-off: if you want really deep insights and a really well-run test, well, you have to slow it down. No, we actually want both. We want fast shipping, fast learning, rinse and repeat. So we're on a trajectory where we're certainly moving in that direction, but not quite yet.
Ashley Stirrup (19:54)
Yeah.
Yeah.
Yeah.
Right. Is there an example of an experiment where you and the team were surprised by the results?
Nafis Shaikh (20:23)
Yeah, so on Chess.com we have this feature called game review. Game review is basically a post-game analysis of how you did: the coach will take you through each individual move and say, yeah, this was a good move, or hey, you know what, you should try something different. At game over, we used to present a modal with all of your misses, your blunders and mistakes in the game.
And we always thought, yeah, this is game review; most people want to do game review after a loss, because you want to work on the things you didn't do quite well. Then we did a whole bunch of analysis, and it turns out about 80% of users run game review on wins. So our hypothesis was completely off. People just want to feel good about themselves; they don't necessarily want to focus on the things they did poorly. So what we did was flip the entry point from the things you did poorly to the things you did well: these are all the good moves you made. It was kind of a nothing hypothesis; I didn't really think anything was going to come of it. We actually saw incredible results: a 25% increase in people starting a game review. And game review is a freemium option, so you get one free a day, and everything after that requires our paid subscription. So we saw a huge uptick in people starting a game review, and then a huge uptick in people subscribing to our product as a result, simply because we shifted from a negative to a positive. And, you know, I'm somewhat dumbfounded by this, because there's this lesson across all the different games I've worked on over the years: people just want to feel good. Focus on the things that make people feel better about themselves. The world's a hard place, people have difficult lives, and when you come to play a game that's supposed to be enjoyable, focus on the things that are enjoyable, that make you feel good about what you did in the game.
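For a lift like that, a standard sanity check is a two-proportion z-test on the share of users starting a game review in each arm. Here is a minimal, dependency-free sketch; the sample sizes and conversion counts below are made up for illustration, not Chess.com's real numbers:

```python
# Two-sided z-test for a difference in conversion rates (pure stdlib).
import math

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_b / p_a - 1, z, p_value                  # relative lift, z, p

# Illustrative: 4% of control vs 5% of treatment start a game review.
lift, z, p = two_proportion_ztest(conv_a=4000, n_a=100_000, conv_b=5000, n_b=100_000)
print(f"lift={lift:.1%}, z={z:.2f}, p={p:.3g}")       # lift=25.0%, z≈10.8, tiny p
```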
Ashley Stirrup (22:25)
Yeah, that's a great story. So out of curiosity, I would imagine there's a segment of your user base that really wants to do that on the losses. But I'm sure you just make that available to those folks as well. Maybe they have to go look a little harder or something. Yeah.
Nafis Shaikh (22:38)
Yeah.
Yeah. Well, the feature doesn't change. It's just how we position the entry point, right? So instead of focusing on the negatives, we just focus on the positives. You can still go into the feature just the same way.
Ashley Stirrup (22:45)
Right. I see. Got it.
Got it. And then you can kind of figure out where to go down your path. Because I know I'm pretty competitive, so I want to know: what did I do wrong? When did I make the first mistake that then led to the bigger mistakes, type of thing? Yeah. That's right, that's right. It comes from being competitive. Well, terrific. So...
Nafis Shaikh (22:54)
Exactly. Yeah, exactly.
Yeah, well, you're part of the 20%, then.
There you go.
Ashley Stirrup (23:13)
Let's say you were talking to somebody who was brand new in a product management role and they wanted to bring experimentation into an organization. How would you recommend they evangelize that internally?
Nafis Shaikh (23:30)
Yeah, with this one, like everything, showing value first, actually doing, is the best way to communicate anything. We could talk about the ins and outs of experiments and try to explain to people, you should do tests this way, and we should set up an infrastructure, and all of these things. But nothing works better than having something tangible that you did yourself that you can show back to the decision makers within your organization. We talk about testing and these fancy sets of tools and fancy products that help us analyze data, but it doesn't necessarily need to start there. You could start in a much simpler place: hey, you know what, I think if we change this, this will happen. You could do pre/post tests without needing any experimentation tooling at all, assuming you picked the right thing to test on. Previously, users behaved this way; the day we launched this particular feature, this happened. Let's compare the two weeks prior to the test starting and the two weeks after. Again, it's not the cleanest test in the world, but it at least allows us to start talking about, hey, this is the difference that this particular feature made.
And then, how do you communicate that to a set of people? I've often seen wildly varying experiment write-ups, going back from Zynga to Prodigy to Chess.com. Some people will write a one-line statement: yeah, this seems to improve retention. Great. Well, take the time to actually write out what happened in the test. We launched this feature on such-and-such date. We observed the key moments. We saw this percentage lift in this part of the funnel that followed all the way through to another success point. The users who participated in this part of the test saw their KPIs improve. That tells us something tangible about how they behave. You can start in a very simple fashion and just start talking about how your work is improving the KPIs that ultimately matter: I did this set of work; how do I connect it back to our KPIs? And once you can start showing that value, then it's, okay, great, we can actually do these tests way better. We can go even deeper in this understanding. We can control for all of the potential happenings in our product at any given time. But we're talking about a really simple start: you could do it with a simple pre/post test as well.
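Here is a minimal sketch of that pre/post comparison: average a daily metric over the two weeks before a launch and the two weeks after, then report the lift. The metric and numbers are made up, and as Nafis notes, this doesn't control for seasonality or concurrent launches:

```python
# Pre/post comparison without any experimentation tooling.
from datetime import date, timedelta
from statistics import mean

def pre_post_lift(daily_metric: dict[date, float], launch: date, window_days: int = 14):
    """Average the metric over the window before vs. after launch; return lift."""
    pre = [v for d, v in daily_metric.items()
           if launch - timedelta(days=window_days) <= d < launch]
    post = [v for d, v in daily_metric.items()
            if launch <= d < launch + timedelta(days=window_days)]
    pre_avg, post_avg = mean(pre), mean(post)
    return pre_avg, post_avg, post_avg / pre_avg - 1

# e.g. daily game-review starts per 1,000 active users (synthetic series
# with a step up at launch, plus a little day-to-day noise).
launch = date(2025, 3, 1)
series = {launch + timedelta(days=i): 40 + (6 if i >= 0 else 0) + (i % 3)
          for i in range(-14, 14)}

pre, post, lift = pre_post_lift(series, launch)
print(f"pre={pre:.1f}, post={post:.1f}, lift={lift:.1%}")
```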
Ashley Stirrup (25:52)
Yeah.
Yeah, I think that's a really good point, because it's almost like you're just starting people down the journey of becoming more and more data-driven. And hopefully it becomes a bit like a drug, where they just want more and more of it once they start to get it, especially if it becomes a regular thing.
Nafis Shaikh (26:22)
Absolutely.
Yeah, I mean, if you could attach
top-line KPI growth to the things that you're doing, chances are you're already in a good spot, right? And that's the hardest part for a lot of organizations: we did this work, okay, well, what happened? Can we attribute everything that happened back to this specific thing? Maybe, if you're an organization that ships one thing at a time, you can do that. But the reality is we want to do lots of things. We want to ship lots of different products. How do we
Ashley Stirrup (26:38)
Yes.
Nafis Shaikh (26:59)
attribute value back to the specific things we've built and released to our users?
Ashley Stirrup (27:06)
Yeah, that's really when experimentation becomes so important because you can control for everything else, at least in a piece of the supply chain, right? Maybe not the whole thing end to end.
Nafis Shaikh (27:14)
Yeah, Chess.com is a huge product. We have an incredible amount of functionality in the product, and a 10-million-person daily active user base. There's a lot going on; for us to control for one small thing amid the breadth of everything that's happening would be near impossible. So we have to go down this experimentation path. It's the cleanest way for us to connect back to KPI growth. But not everybody is the size of Chess.com, and not everybody has as many bits of functionality as we do. If you're starting experimentation, you can start simply. It doesn't need to be super intense.
Ashley Stirrup (27:56)
Yeah. Now, we had another guest on who talked about how each time he introduced a new feature in the onboarding process, his users didn't care about it. It was very consistent: launch it, it wouldn't work. But if he was passionate that a feature should work, he would just iterate on it and iterate on it. And that's really how he learned about his users: what they really cared about, what language mattered, all of that. So that kind of became his MO over time. Do you have any similar examples at Chess.com?
Nafis Shaikh (28:30)
I mean, the onboarding problem is a regular one for us. We have this debate in our organization. It's not much of a debate for me; it's more one for other people. I'm of the opinion that the more things you put into your onboarding, the worse off it's going to be: your onboarding completion rates are actually going to decline the more you put in there. Whereas
Ashley Stirrup (28:48)
Right.
Nafis Shaikh (28:54)
other parts of our organization believe that if we make sure it's focused and specific to getting users from one point to another, and we're providing value across the onboarding, then it's okay to have many steps in the onboarding. I've run many onboarding tests in my time, maybe that's an exaggeration, and the one thing that's been consistent is: you want your onboarding to be relatively short. You want to get people to the core function as fast as possible, and for them to find the value at the core of your product as quickly as possible. Chances are, if you're able to do that, they're going to be invested. If we pull away from that and start adding, well, which tiles do you want, which themes do you want, which chess pieces do you want, set up your profile, these are all kind of good things, but I think it ultimately takes you away from that core value. I don't know if I answered your question, but that's where I land.
Ashley Stirrup (29:56)
No, no, I definitely think you did.
Yeah, yeah, and that's a great thing. The good news is you can test that. So, yeah.
Nafis Shaikh (30:03)
That's exactly what we're doing.
Our onboarding is actually one of the places where we're running the greatest number of tests; it has the most currently running tests. We have so many different hypotheses about, again, based on what skill you select when you enter the game, what your experience of the product should be. If you choose, I'm a new user, well, "I know how to move the pieces" and "I don't know how to move the pieces" are two distinct types of users. They probably require different experiences. So we're trying to deeply hone the experience based on what users tell us they are. And based on that, what should the first thing you do be? If you don't know how to move the pieces, should we send you to the coach, or should we send you to our lessons product that will teach you how to play the game of chess? Or is there something even different than that? We're trying to deeply understand that space.
Ashley Stirrup (30:39)
Yeah.
Nafis Shaikh (30:55)
We want to make sure we're connecting our users to the right experience in session zero.
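To make that branching concrete, here is a minimal sketch of skill-based session-zero routing. The skill values, destinations, and experiment arm are hypothetical, not Chess.com's actual implementation; in practice a feature-flag or experimentation SDK such as GrowthBook would assign the variant per user:

```python
# Hypothetical session-zero routing based on self-reported skill.
from enum import Enum

class Skill(Enum):
    NEW_CANT_MOVE = "new_cant_move"  # brand new, doesn't know how the pieces move
    NEW_CAN_MOVE = "new_can_move"    # knows the rules, new to the site
    RATED = "rated"                  # experienced / rated player

def onboarding_destination(skill: Skill, variant: str) -> str:
    """Route session zero by skill and by which experiment arm the user is in."""
    if skill is Skill.NEW_CANT_MOVE:
        # The open question Nafis poses: coach first, or lessons first?
        return "lessons" if variant == "lessons_first" else "coach"
    if skill is Skill.NEW_CAN_MOVE:
        return "coach"
    return "play"  # rated players go straight to a game

print(onboarding_destination(Skill.NEW_CANT_MOVE, variant="lessons_first"))  # lessons
```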
Ashley Stirrup (30:58)
Yeah.
Yeah, that makes a ton of sense. That basically sounds like the entry point to a whole personalization strategy.
Nafis Shaikh (31:07)
This is exactly what we're running here.
Ashley Stirrup (31:10)
Yeah, yeah. Well, thank you so much for joining us today. We're out of time, but I want to thank you, Nafis, for being a great guest and sharing lots of great wisdom with us today.
Nafis Shaikh (31:20)
Thank you, Ashley. Appreciate the time. Alrighty.
Ashley Stirrup (31:23)
Thank you.