Testing your ideas against reality can be challenging. Not everything will go as planned. It’s about keeping an open mind, having a clear hypothesis and running multiple tests to see if you have enough directional evidence to keep going.
This is the How I Tested That Podcast, where David J Bland connects with entrepreneurs and innovators who had the courage to test their ideas with real people, in the market, with sometimes surprising results.
Join us as we explore the ups and downs of experimentation… together.
David J Bland (0:1.205)
Welcome to the podcast, Victoria.
Victoria (0:3.376)
Great to be here, thanks for having me.
David J Bland (0:5.248)
Yeah, I was really excited to have you on. I remember, you know, I spent a lot of time on LinkedIn, probably too much. And I was promoting sort of the podcast and I was looking for more really interesting stories. And I saw what you were up to with Wonder. And I thought, wow, I really need to get her on here to talk about some of the details because it's not very often that we have somebody talk about how they're using these principles to help entrepreneurs who are also trying to navigate all this. So I just want to thank you so much for joining us.
Victoria (0:33.956)
Yeah, it's been an interesting vantage point to be where I am now, having been...
at consultancies where you're helping these big companies, which is still very much our focus here as well, but then more the gamut of smaller companies, even solopreneurs, and trying to figure out how can we still serve all of these people and get them the information they need and help them innovate the way they're trying to, but with a little bit of the academic perspective that we tend to have, being Wonder. We're serving clients, so it's been an interesting journey.
David J Bland (1:4.275)
Yeah, so maybe you can give our listeners a little background on yourself and kind of like what led you to where you're at now with Wonder.
Victoria (1:11.856)
Sure, well, I started my career at Kantar. So I mentioned kind of a big firm working with big clients. I was on, effectively, a brand strategy consulting team there, Kantar Consulting. We worked with the research engine of Kantar, but the recommendations and kind of the deliverables took it a few steps further than just here's your data set or here's how you're doing against benchmarks. It was sort of, so what, now what, and how might you think about evolving your brand strategy or your positioning, or figuring out
how to serve a different white space, you know, kind of bridging where research would end and where strategy consulting might pick up without that whole depth of insights and, you know, research and data behind it. From there I went to Morning Consult, which was kind of similar, still in the primary research space, but thought a lot more about what wasn't meeting people's needs when it came to primary research, having come from basically a direct competitor and now being basically a disruptor, to figure out how can Morning Consult more agilely and
creatively and differently serve more of these audiences. And to your point, you go from these really big Fortune 500 companies, who have multi-million, sometimes more, research budgets, to the Airbnbs of the world before they're Airbnb, who still need research and insights but can't afford the six months and the six figures that some of these big, you know, institutional firms tend to cost. Morning Consult was trying to make it a bit more accessible. And then, you know, since then, that was 2020, it's been insane to watch the rise
of this long tail of all these different tech solutions, now AI solutions, helping to make data, research, insights, all these feedback loops way more accessible for all sizes of companies. I did a quick stint at a, effectively, B2B SaaS company building a subscription content product. And then here at Wonder, we're on the desk research side. So this problem space being: we as professionals, whether you're a solopreneur or owning a small business or, you know,
sitting in a big team at Unilever, you're probably doing a lot of Googling, or now spending a lot of time on LLMs, but having to cross-reference across LLMs and then check your sources and then make sure this is from 2024 and not 2020 or whatever the latest cutoff window was. So our whole premise historically has been: just come to us, ask the question, we'll go do a scan of everything that's publicly available out there for you and save you all the tabs and the time. Then March, April 2023, AI comes onto the scene, and we sort of
Victoria (3:41.550)
faced this moment of, the world that we've been living in and the solution that we've been offering, is this the end of it? Do we not need humans anymore? Are we all just gonna go the way of the dodo, or are we gonna have to totally rethink what we offer? So even in that, you know, something that we witnessed and we're supporting our clients to do, which is bring themselves into this post-AI world, we had to do for ourselves as things were unfolding in real time. So that's kind of the journey, but right now we're still squarely in the desk research space, have nicely incorporated
AI into our solution, and still very much believe in the power and the differentiation of humans.
David J Bland (4:16.962)
That's a great journey. I think the only way I keep my sanity from working with big companies and startups is that with early stage ideas, a lot of it's very similar. I mean, you still need to get to the jobs, pains and gains of the customer. You need to figure out, okay, is there any observable evidence of people having this problem and seeking a solution to it? And so while the practices might look a little different, and obviously brand is a big thing (like, if you're working with Unilever or somebody, you can't
damage the brand in any way, right, versus a startup who doesn't really have a brand yet, but they'd love to), I think why I keep my sanity is, at the core of it, it's still very similar work of: does this exist? Can we solve for this? Is it painful? Is it urgent? And what you brought up there about the sort of almost expiration date of the research, in a sense. You know, I've run into situations with teams where
they want to place a really big bet off of a five-year-old market research report. And I'm horrified by that. And they're totally comfortable doing that. So maybe explain a little bit more why, you know, this stuff tends to have an expiration date and how recent some of the research should be for folks.
Victoria (5:26.916)
I think it's especially important in this current state of things. Things always moved quickly, changes were always happening, consumer dynamics were always evolving. But especially in this world, I mean, forget about the economy and the political environment or whatever, some of those classic PESTLE things, there's technology that's very much changing how people are interacting with brands and companies and the solutions that they want. It's insane how quickly, again, these different solutions are proliferating. You can't stay on top of all your competitors
in your competitive landscape if you're only thinking of somebody who does the exact definition of what you do, which for us theoretically isn't anyone else, but there are 150, if not hundreds, of companies who do different versions of what we do. As simple as Google and Perplexity, as niche as all these desk research tools, half of which are free because they're just AI-powered and someone had an idea over a weekend and cranked it out. So that's kind of where, when you think about relevance and recency, even if you're selling chips,
how people are buying, and the different solutions that are coming out, and things like searching on an LLM now instead of Google for ideas on, like, healthy chip brands, or help me plan a birthday party and what might I bring, and all these components. How you even show up and get in the minds of your different consumers and audience members is changing so quickly. So there's something to be said here: do you always need to ship new research that's hot off the press from last Monday? No, but also, research from four
or five months ago is probably going to be old at this point, let alone years ago. So we talk a lot about all the different trade-offs and sort of the level of fidelity or the confidence interval that you might need for different choices. To your point around early stage ideas, if you're just in the divergent stage, thinking of a mix of options that could take you down a few different paths, you're just trying to see is there a there there, and not actually figure out what the there, you know, the end-all be-all solution, is going to be,
you might not need perfect and totally recent and even hyper, hyper relevant to your exact consumer profile or company information. As you move down that continuum from the early initial stages, to great, I think there's an opportunity space here, to great, how might we solve it, what are we uniquely able to do, is it feasible, is it viable, et cetera, then we start to get to insane levels of precision, and then we want to test concepts in a primary fashion versus rough white space ideas in a secondary fashion. So it kind of
Victoria (7:56.540)
depends on where you are in your stage, and also what level of integrity you feel like you need in the data you're building your decisions off of. Things are only becoming outdated faster now, and so it's important to stay as relevant as you can.
David J Bland (8:9.922)
I agree, and I think the pushback I get sometimes is people want to get it right, and they want to get it right early. And some of the concepts I'm trying to explain now have been more around: you really don't need 100% certainty to make a decision at this point in your journey. You just need some directional evidence to go down a path. You don't need to get everything right. But I think it's kind of drilled into us.
I write about this in the Testing Business Ideas book, about how we learn through school and how we're kind of conditioned to find the single right answer. I think we maybe underestimate how that mindset sticks with us into business. And so how are you helping people navigate that, or maybe explain some of the challenges you've seen where it's really early stage but people don't want to be wrong? How are you helping people approach that?
Victoria (9:4.174)
It's a big kind of drumbeat campaign for us, both in the vision and philosophical sense of the company, as well as the very literal, how we help people live and breathe this in the product. For us, curiosity is at the core of it. And to your point, the need to be right, the pressure to be smart and knowledgeable and show up on, you know, the first day of your job knowing all the answers, because that's what you're paid to do, conflicts with this idea of, maybe there are things that we've
been asked before that no one knows the answer to, and that's okay; we just need to be smart about how we find the answer. So a big piece of what we advocate for is: so many companies are just reactive. I get a question, and therefore I either pretend I know the answer or go and quickly find the answer, and then I'm off to the next question that I get. Versus: I'm a question machine, thinking and provoking and exploring and drawing connections, connecting my internal data with something I just read in an article and figuring out where the opportunity might be in that.
Now, you can obviously boil the ocean and be too much of an idea machine, but there is another end of the spectrum versus, all I have to do is react to the questions I'm getting and have the right answer, and then I've done a good job at the office and I've earned my paycheck for the day. So this idea of, what haven't I thought about and what other angles could I explore here, is very much in that philosophy of start divergent and then converge from there. Even our chatbot that we start the conversation with, when somebody comes to us with a research question,
it's not like, great, thanks, you want a competitive landscape, we'll be back to you in an hour. It's like, have you thought about these geos? Have you thought about these competitive alternatives? What are you looking to accomplish here? And is there a different layer of question or insight you might actually need? Are you interested in insights from reviews of these different competitors? Or are you looking for a scrape of their 10-K, or 16 different ways that you can slice and dice the information you might need? And it gets people thinking, like, oh, I am so focused on either having the right answer or the most efficient way to
get an answer, even if it's not the right one. Perfect case in point: you Google something, and are we really going to sit here through 55 tabs and check which ones are BuzzFeed and which ones are actually reputable authors? No, we're just going to be like, okay, this came up five times, I think that's my answer, close them all out, move on to the next thing. So all to say, there's an element of curiosity here: if we can build that muscle, and it becomes easier to get answers, we are then going to be more likely to just ask the questions. Instead of, I just need an answer, I need to, you know, defend my job, and instead of saying,
Victoria (11:33.778)
yep, I've got an answer to your question, it becomes: are we asking the right question? Here are three other ideas for you, and I have a more efficient way to get either those three answers or the answer to the one better question.
David J Bland (11:45.600)
Yeah. Working with GPT, I've spent a lot more time this year with just OpenAI stuff and ChatGPT. And one of the use cases I think is pretty helpful is, let's say I have an interview script and I'm looking at it and asking, is there a question I'm missing, or a different way to ask this? And being able to paste that or upload that into
ChatGPT and say, what question should I have asked here, or is there something I'm missing? And sometimes I'm not looking for it to write the whole script for me, but it can give me suggestions on, oh, maybe I should ask this this way. Or maybe there's this other question I can weave into the script that might help me learn what I need to learn. I think we're still just scratching the surface of some of that technology.
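A minimal sketch of the script-review workflow David describes, assuming the OpenAI Python SDK with an API key in the environment; the model name, prompt wording, and sample script are illustrative, not details from the episode.

```python
# Ask an LLM to critique an interview script instead of rewriting it,
# mirroring the "what question am I missing?" use case described above.
# Assumes: pip install openai, and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

script = """\
1. Tell me about the last time you researched a competitor.
2. What tools did you use, and why?
3. What was the hardest part of that research?
"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are a customer-discovery coach. Do not rewrite the "
                "script. Instead, point out questions that are missing, "
                "questions that are leading, and alternative phrasings."
            ),
        },
        {"role": "user", "content": f"Here is my interview script:\n{script}"},
    ],
)

# Print the critique rather than a rewritten script.
print(response.choices[0].message.content)
```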
Victoria (12:29.230)
Are you finding that LLMs are proactively helping you think through some of these almost "yes, and" types of questions, or are you having to go to them and say, what am I missing, ask me what you need to know, or what are six different ways of asking or looking at this? Or are they actively helping you kind of expand your thinking?
David J Bland (12:51.264)
I find it to be pretty prescriptive. But for me, the way I'm using it, at least in my training and coaching right now, is I can create something that's almost like an extra team member or an extra little coach, you know, like a guide on the side kind of thing. For right now I'm using it to generate assumptions. So if I come across a student or early stage entrepreneur who has an idea, but they don't even know where to begin with their assumptions, usually it's like, well, will people buy it,
and do they want it, and can I build it? But they don't go a level deeper than that. And so I use it to generate, you know, desirable, viable, feasible assumptions for people. And I have to spend a lot of time, either with a custom GPT or really working through prompts, to get it to structure things in a way where it can take, let's say, a sentence that says, oh, here's my idea, here's my target customer, here's the price I'm willing to sell it for: what kind of assumptions should I have?
I have to give it a lot of instructions to start extracting those. Without that, it doesn't seem to be useful in generating things that I think I can take and do something with.
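A rough sketch of the kind of structured prompt David describes for turning a one-sentence idea into desirable, viable, and feasible assumptions; the instructions below are an illustrative stand-in for his custom GPT, not its actual contents, and the example idea is made up.

```python
# Generate desirable / viable / feasible assumptions from a one-line idea,
# with explicit structure in the system prompt, as described above.
# Assumes: pip install openai, and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

idea = (
    "Idea: a curated desk-research subscription for product teams. "
    "Target customer: product managers at mid-sized SaaS companies. "
    "Price: $500 per month."
)

instructions = (
    "From the business idea below, extract testable assumptions in three "
    "groups:\n"
    "1. Desirability: customer jobs, pains, and gains that must be real.\n"
    "2. Viability: willingness to pay and costs that must hold.\n"
    "3. Feasibility: what we must be able to build or deliver.\n"
    "Phrase each assumption as 'We believe that...' so it can be tested, "
    "and flag the riskiest assumption in each group."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": instructions},
        {"role": "user", "content": idea},
    ],
)

print(response.choices[0].message.content)
```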
Victoria (13:58.297)
Yeah, I think we're not yet at the point where...
A constant analogy, maybe overused, is the intern versus, like, the MBA intern, where it is still very much, here's what I'm needing from you, and I have to tell you what I need from you in order for you to even ask me those better questions. And I imagine we will start to see all of these tools move towards being a thought partner. We're hearing about SearchGPT and some of these different tools coming out from Google that are trying to, proactive might be the wrong word, but be provocative and provoke the thinking. I imagine it'll start to move in that direction.
David J Bland (14:32.246)
Yeah, I made a post the other day about how useless Google has become, and how I have to put "plus Reddit" on all my search queries. And I'm not the only one doing that. So there are some hacks people are using because the results are so bad. And I do think we're at a really interesting inflection point with search in general. So maybe tell us a little bit more about Wonder, sort of what are some of the risks you all are navigating, and maybe help educate our listeners more about what you do.
Victoria (14:38.286)
Right? Nope.
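A trivial sketch of the search hack David mentions, for anyone who wants to script it; he describes literally appending "reddit" to queries, and the site: operator used below is a common variant of the same idea, so the helper is illustrative.

```python
# Build a Google search URL restricted to reddit.com, approximating the
# "plus Reddit" hack mentioned above. The site: operator is standard
# Google query syntax; the example query is made up.
from urllib.parse import quote_plus

def reddit_search_url(query: str) -> str:
    # Restrict results to reddit.com, which often surfaces first-hand answers.
    return "https://www.google.com/search?q=" + quote_plus(f"{query} site:reddit.com")

print(reddit_search_url("best desk research tools"))
```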
Victoria (15:1.434)
Sure, there are a couple of main areas that we've been...
navigating, rethinking, kind of spending a lot of our time on, especially in the last year or so. The first is really going down to the bare bones, as I may have mentioned, the first principles around what do we offer, where do we play, and how are we going to win in this post-AI world, whereas we were historically a network of humans. Uber is the same way, but it wasn't like all of a sudden every car everywhere was a self-driving car and "what do we do with the humans" was the question. That was very much the reality for our business: if
all of these LLMs exist, do we even need human researchers to help with desk research? The second bucket was then very tactical, within the experience of our product and our solution, once we went through the exercise to say, if this is our supply chain, so to speak, here's what happens from the moment somebody comes to us to submit their research question through when they get the deliverable back with their answer, and there's X number of steps within that. What is the actual experience for different users, and how
do they feel? Like, what do they want from it? You can break it down as simple as the ingestion process, where we take in their question. Do they want to be provoked with questions? Does it feel annoying if they're getting 15 questions from a chatbot? Whereas if you were to hop on a 30-minute call with a human who's taking in the context of your ask, you want them to ask questions, because then you leave feeling like they heard you, they totally got the depth of your ask, and you're going to get exactly what you need. So it was kind of exploring all these different semantics of where
the combination of, or trade-offs between, AI and human existed, and then how that manifested in the product. And then it also translated to the marketing. We've talked about some of these SEO pieces and content in Google, but how do we show up and tell our story for an audience that might not yet be ready to adopt AI in its full sense? Do they need to know, or do they care, to what extent we have AI in our solution? It varies. There's a whole segmentation here around classic adoption curves. Some people are total laggards; some people are,
Victoria (17:7.474)
like, you just mention AI in your outreach note to them and they want to hop on and just hear what you've got. You know, what are you doing? Is it interesting? If nothing else, to know of a cool new tool. So all of these different components of our go-to-market and our offer, we rethought, and there were no right answers. It was not a case where you can implement something and then the next day know that that was the right direction. And it was also not the case that you could just look at somebody else's playbook, because everybody is in the same boat right now. So it's been very interesting to very
much build the plane as we fly it, and also iterate on the experiments and the tests, and see where we could get data. We didn't really have much historical data, because we evolved, you know, so many iterations of our product and our solution on almost a quarterly basis. So you have, you know, a month or three months of data, which is fine and it's good, but it's not a decade of data where we can say, here's what's historically worked, or here's the volume of, you know, feedback that we're getting, or whatever it might be. So figuring out the signals and the directions, where do we get our cues from and what informs
decisions, is as much an element of the experiment as what the actual experiment was.
David J Bland (18:14.166)
Yeah, so it sounds like
you're helping people with their research on their early stage ideas, but at the same time, you're iterating through what you offer them as a solution and how you onboard them and how you charge them. So maybe explain a little bit. You know, we like thinking of desirable, viable, feasible risk; we talk about that a lot. Obviously, I didn't coin that, but I learned it in design school and have since run with that framing, because it seems to work with whatever company I'm working with. That kind of framing holds up.
Victoria (18:21.860)
Yeah.
David J Bland (18:45.524)
If you think about it, desirability is a lot about your value prop, customer jobs, pains and gains, unmet needs. Viability is more willingness to pay, but also, you know, what are the costs incurred to make this happen. And then feasibility is a lot about execution. Certainly GPT, and AI, and LLMs are throwing a wrench into that as they rapidly evolve, and that's part of feasibility and your ability to deliver and everything. So, thinking back over maybe the last year:
where do you see the risk, how is it evolving, and how are you, sort of from first principles, trying to test your way through some of this?
Victoria (19:21.904)
Sure.
Maybe to provide a foundation, I'll just give a quick blurb on the desirability, viability, and feasibility as we thought about it. Even as recently as January, we did this exercise, doing quite a bit of research on our audience as we defined it, a certain set of personas and job types. The desirability for us translated to: for people who are doing some form of, call it, strategy research, answering those questions like where to play, how to win, who are we serving,
where are they really struggling? The kind of intersection of questions that we sized and captured in our research was: what are you doing a lot of? What do you wish you could be doing more of? What do you think your company could be doing better? So that means you're either not doing it and should be, or you're doing it but it's meh. And what's really important to the decisions that you're making and the strategies that you're setting? And that intersection told us, here's basically the heat map of the desirable things that people really need help with, to your point around pain points.
The viability then became a question of, you know, can they pay for it? Will they pay for it? And this is actually kind of one of the meatier buckets for us, because usually the argument here is the ROI of doing something, or the cost, the opportunity cost, of not doing something. So this is where we can explain what we're solving and how we do it for people, anchoring on those jobs to be done, having the pricing be reasonable, et cetera, et cetera. But it's sort of, instead of hiring two more analysts,
you can basically get the equivalent of a whole stack of analysts, and they're savvy in all these AI tools and these proprietary databases, and we're on demand, blah, blah. There was a calculus, a narrative, we had to figure out in order to pull on the viability thread for our audience. And then the feasibility was: once we took that stack-ranked list of project types, or types of insights, that our audience might want, we had to say, what is desk research even good for?
Victoria (21:22.546)
Things like market sizing, maybe. But for growth forecasting or some of these white space analyses, there's a functional element of, here's the lay of the land and here's what the public data says, and there's a very strategic element where, unless we're in our audience's heads and we've downloaded from them, you know, what type of investments do you make, what kind of categories are you heading into, we can't take it all the way to the finish line. And so maybe that wasn't, from a feasibility perspective, the right thing for us to do. But then, to your point around risks, this is where the moat piece comes in, to say, what could somebody
else come in and solve? You know, there's a lot of brilliant minds, or people who will just carve out a weekend and crank out a tool. How do we defend against that, to be the go-to choice for our audience from a credibility, from a veracity, from a, you know, rigor perspective, from a subject matter expertise perspective? We get this type of research, and for those top ten jobs to be done, we've done the work, not only for a decade with humans, saying here's what good looks like for a
landscape or whatever, but we're also monitoring academia, we're also pulling in all these different data sources, and no other tool has the brain power or the historical knowledge or the budget to have the proprietary data, or just the horsepower and the custom agents that we've built, in order to execute that. But that whole risk piece is very top of mind, especially for me, when I think about how our audience isn't coming to us right now and saying, well, there's an AI tool for what you do. They're not there yet, but they could. And if they're going to
start getting more savvy, our argument is, you could have a very decentralized stack of 15 different AI tools, or one agent that does your competitive research and one agent that does your consumer research, blah, blah, blah, or just come to us. We've got all the agents, we've got all these tools and solutions, so don't bother, don't worry about it. But you have to think about where that competitive landscape is moving and where these different buyers are. Again, we know that in five, ten years, these millennials and Gen Z-ers who are now totally native in AI and everything,
when they're the ones with the budgets... Like, do we need to think five years out? Maybe not, but we do need to be aware of it.
David J Bland (23:29.612)
Yeah, I think with risk, the way we frame it is importance and evidence, right? And that's one of the reasons we do a lot of the assumptions mapping work, which is: what are the things that would have to be true that you don't have any evidence to support over, let's say, the next three to six months? I like that kind of timeframe. I don't like projecting too far out, because then you start projecting and
leapfrogging a lot of things, to the point where maybe you shouldn't have skipped some steps. So it sounds as if you were really dialed in on desirability, and it sounds as if that was sort of rooted in conversations with your customers. Is that correct?
Victoria (24:5.924)
That's exactly right, qualitative and quant survey.
David J Bland (24:9.299)
Okay. And so you have this kind of constant contact with your customers, and then you're trying to back into, how might we solve this, in a way where maybe they can't really tell us how to solve it, but we have to infer: this is what they're trying to do, and here's how we could solve it for them.
Victoria (24:25.243)
Alright.
David J Bland (24:26.730)
Okay.
Victoria (24:26.862)
Right. And then the exercise became, within each of those top solutions, once we had a short list based on those factors I mentioned: for each of those, what does world-class look like? And then we would have conversations, to your point just now, like: here's what we're thinking; you tell us what's missing. What would your company think about? How would you think about this differently, or what data sources, approach, or deliverable dynamics would you shift based on what we're thinking? And then you look at that in aggregate and see trends,
and where the sort of schisms are, the fault lines between different categories or job levels or whatever it is, or just to say, here's the 80% of things that everybody wants, and that's what we're going to focus on.
David J Bland (25:9.342)
Okay, so beyond speaking to them, how do you begin to test your way through those top things? Are you doing little internal tests of, when someone asks us this, we're going to try this call to action versus that one, or we're going to present the research this way versus that way? It sounds like you're rooted in working this way, so I feel like
you should have a lot of different options available to you to do some quick testing with your customers. Maybe you can explain, just at a high level, some of those and how you approached them.
Victoria (25:39.342)
Yeah. And I'll be very candid here that there were definitely lessons learned. We sort of were like, let's put a stake in the ground, see how it works, and we'll just rip it out if this isn't the way. The first of the two main things we did was the hypothesis that, you know, AI is the way: people don't need to interact with humans in order to submit their requests, and if the AI is good enough, they can get all their information to us and we can get their research done using humans on the back end, but they don't really have to talk to our customer success team.
On that one, we quickly learned there's so much nuance, and so much learning that we want to be doing, for some of these more sophisticated projects. A quick question, sure, it can go through the AI route. But how do we figure out the right intake method that gets us set up for success and has them, again, comfortable with whatever interaction they need to have, to say, I feel heard, I feel like my research is in good hands, and then I'm going to get back what I need? The second bucket was in terms of specific project types.
The route we originally took was, when a given client would come to us for, call it, sentiment analysis or a certain type of project, we literally built the solution around them. We said, here, tell us what you need. We're going to trust that even roughly 50% of the things that you need, we're going to build agents for; we're going to figure out the manual way first and then automate it, or systematize it, basically. And we're going to assume that that 50% will translate to the next ten people who want a sentiment analysis, and that the other,
whatever, 50% will be specific to you, but at least we know we're having, you know, a success rate and a happy customer coming out of this. I think we...
kind of realized that there's an element of inefficiency there. There's learning that comes from that, but there's an element of, we built this for one marketing team at a social media company, we've built this for a, you know, fintech's innovation team, and there are necessarily going to be differences in what they need. And so that marginal 50% of what's nuanced, it can make or break the failure and success of our business, let alone theirs, if we're saying, well, that's up to the, you know, situation of who walks through our door. So that's actually something we're working
Victoria (27:49.138)
through right now: what is the 80-20, where we say 80% of this is going to be consistent? Does that mean we narrow down our audience and who our customer base is, to say that we are confident that anytime someone comes to us with a sentiment analysis, the 80% that we're systematizing is going to nail it, and the 20% is where we can be differentiating our solution? But then you're balancing, literally, your runway, your bandwidth, your roadmap, and just what you can, from a SKU perspective,
offer, with retention and with innovation at the same time, and lots of balls in the air.
David J Bland (28:26.430)
Definitely. I think, personally, not in your field, but at startups that I joined early on, we were stuck in that same position, where we were trying to look for commonalities across our client base, because we knew we couldn't scale something where we're building custom one-off solutions for every single company. We were in financial services, so it was banks, and I knew there was no way we could scale if we were building a custom solution for every bank. And it was very, very painful, especially
when we had to move companies that were on the custom solutions to a more standard one. And I was often in their offices trying to describe to them how they were going to be okay, even though what we were offering was slightly different. So I lived that; I was at that startup for like eight years, so it was an adventure. But I can see where you're coming from on that, where you're trying to figure out what the commonalities are, and then, yeah, we can do some a la carte stuff around that, but we need to get our core offering dialed in,
how you get your packages put together and everything. That's really, really hard early on, because no one's going to tell you how to do that. You almost have to figure it out yourself, or you'll get very conflicting advice on how to do it. So it's a lot of trial and error and a lot of balls in the air, for sure. So with your risk, it sounds as if you have a really good pulse on your customers and you're dialing into solutions around them. What keeps you up at night when it comes to working this way? What kind of things are you seeing
when you're projecting forward? Where do you see the risk shifting to as AI is moving so quickly, and how does it impact your company?
Victoria (30:6.190)
It goes almost certainly back to what I mentioned before around others.
We are theoretically narrowing down our focus, but there is very high potential that there's some niche solution that narrows down into 1% of what we're narrowing down into and nails it, because that's their exclusive focus. And then we start to play this game of whack-a-mole, where we're like, oh, this new thing is out here, we have to make sure our, whatever, our competitive analysis is just as good as theirs. Oh, there's a new industry analysis; we have to make sure ours is as good as theirs. Oh, there's another 10-K tool; we need to go make sure. So
it comes back to what our whole portfolio and our moat is. It's also, in some ways, very meta and interesting, having come from consulting, where the classic line is that consultants will never go away, because people want someone to tell them what to do, or they need someone else's head to go on the chopping block. So there is this other risk, if you will,
sort of even beyond the three buckets of viability, defensibility, et cetera, that we talked about: people get all this information in their hands, and they have to be successful with it. They have to do something with it. And that's typically where implementation, from a consulting perspective, becomes really valuable. Here's what we saw, here's what we know about your business, I've been an expert in your category for ten years now, and here's what I think you should do; and also, we'll help you test the pricing packages and the yada, yada, yada. We are not in that business right now. We're very much in the "here's your research,
go ahead and do with it what you will" business. But even going back to the curiosity point: things like regular monitors of dynamics in your category that help you stay smart, everybody wants them. Everybody realizes they're way too reactive. Everybody wants to not be scanning 100 newsletters and trying to listen to 100 podcasts, but just to get smart quickly. But what happens when they get that in their inbox once a week? Then they have to act on it and figure out, what does this mean? Do I take this up to my CEO and recommend we change our Q4
Victoria (32:4.498)
strategy? Because that's a whole other can of worms in terms of their skill set as a practitioner. So it's sort of the human element of this: as humans are trying to figure out where they fit in this world, with all of this hard grunt work coming to them much more easily and almost in a commoditized fashion, their whole remit changes. And so does it become this consulting support that they get, or just them leveling up their skill set? That, to me, is the biggest question mark, because if people can't ingest, or maybe the better word is
digest, what we're giving to them, it doesn't matter what kind of harvest we've delivered.
David J Bland (32:40.329)
Yeah, with my co-author, Alex Osterwalder, we had a cohort of startups go through sort of the Testing Business Ideas kind of process together. And I vividly remember one that was on employee productivity. And they thought that if we give this report to senior leadership, they're going to know what to do with it. And they were shocked when
they had no idea what to do. They were like, this is great, but I don't know what this is. I have no idea what to do with this information you just gave me. And so I do think you're onto something with this sort of risk of: yeah, you can provide them research that's really dialed into what they asked for and very insightful, but if they don't put those insights into action in some way... I hate to say it, but in some way you still failed; in the overall value stream, you didn't have the outcome that you were hoping for.
Victoria (33:30.139)
Right.
Which is the altitude, or the horizon, we need to be thinking on. The job to be done technically is competitive intelligence, but it's actually: so that we know where to position ourselves relative to our competitors, and then set ourselves up in a position that is defensible against competitors. And that last bucket, that's not something where we can just hand over the plan and say, here you go; we can provide recommendations, maybe, eventually, and analysis, whatever. Because then there's also this whole other element of, there are so many sources and
forms of data and insight, and desk research is just one of them. That's sort of the known knowns: someone's asked some version of this question, and the answer exists somewhere out there, so we're going to deliver it to your desk in a nice little bundle. But then you also need to take into account your primary research and your internal data and blah, blah, blah. And that whole beast of navigating and stringing it together is also something companies are struggling with. So it's a messy space.
David J Bland (34:27.124)
Yeah, and I was thinking about your comment about sort of narrowing in on a niche, right? This niche-to-win strategy, where you do something really well for a very specific segment and get traction that way. When you were mentioning that bell curve of adoption, one of the things I picked up, I think it was from Justin Wilcox once, was to think about people having the problem.
Right? So there's observable evidence that they actually have the problem. And then, are they aware that they have the problem? Some people have the problem but aren't aware of it; that's kind of pulling from Steve Blank's work there. And then, are they actively seeking a solution to the problem that they're aware they have? What I've seen, and what Justin and I were talking about, is that in that bell curve, some of your early adopters, I don't want to say they're easy, but they have the problem, they're aware of it, and they're actively seeking a solution. If you put your thing in front of them, they get it. It's pretty
clear to them what value it can provide. And also, if it doesn't work completely 100% bug-free, they don't freak out. They're just like, yeah, it does mostly what I need it to do. When you go up that bell curve into, we're thinking early majority, late majority, sort of mass market in a way, you might have folks that are, you know, aware, but they're not seeking a solution. And you have to figure out, why aren't you seeking a solution? Is it because you tried and you gave up? Or is it because
you don't think it's a painful enough problem to go seek a solution to? And then you might have people that have the problem but aren't even aware they have the problem, and so all of your go-to-market strategy, or just intake, is more about awareness building, so that they know they have a problem. So I think navigating that... We always think about Crossing the Chasm and all that, and I get that that's been kind of a staple for such a long period of time in business schools and everything. But I think there's nuance here when you layer in: do people have the problem, are they aware, are they actually seeking the solution?
You have very different segments of folks there, where you just can't go at them the same way, because things that work with people who get it are not going to work with people who maybe don't believe or trust your system will work, or people who don't even know they have a problem. So I was just thinking through that framework as you were talking. I think being able to apply something like that, where you're already amazing at desk research, right, being able to segment that way... I'm sure if you looked across
David J Bland (36:42.751)
all the different kinds of customers you have, there are some that get it, and there are some where you have to do more education. And maybe that's what you were trying to convey with sort of the hand-holding, getting-on-a-call-with-them type experiences. But I'm curious if that framing helps narrow things in, or if you've tried something like that in the past.
Victoria (37:1.498)
That's exactly the framework, from a go-to-market and kind of awareness-building perspective, that we took early on, problem-aware being...
It's interesting, because we knew, and the data further proved out for us, that everybody in corporate America is on Google an absurd amount, right? We're all spending time answering questions, and now you can literally just type in your question, it doesn't have to be Boolean-search-optimized, and you're getting some form of an answer. And so the dichotomy for us, or I guess maybe the three different buckets: first, there's demand gen efforts, content that we're creating to put out there. The altitude we stayed focused on, kind of my recommendation,
was the problem space, because I saw that the company and the solution were going to be evolving, so to start being very specific around here's exactly what we do, when it could change in three months or two weeks... Let's stay focused on the problem space: you know, you are in corporate America, you didn't go to college to learn how to Google, and you didn't get this paycheck to go sit on Google for 50% of your day. And you also don't want to be working nights and weekends doing a ton of research that should be way quicker but is also really important to your job, which we all know it is. So we stayed at that altitude.
The second and third, I guess, are inbound and outbound.
It's been interesting to see, when we outbound and we call people cold, obviously there are people who aren't receptive to cold calls, which is a whole other can of worms, that how we're articulating our solution is as important as kind of philosophical alignment. Sometimes there are certain companies who are like, we don't do desk research, whatever, we only do primary. Okay, fine; that's a dead end. There is also the element of, as we tested our pitch,
Victoria (38:42.866)
do we need to lean into AI, or is that totally irrelevant? They're like, I'm not searching for an AI solution; I am hearing that I'm spending a lot of time on Google, and I'm nodding my head, and I'm interested if you have a better way for me to be spending my time. But when we get people on cold calls, there's a little bit more education, obviously explanation of who we are, but they generally are like, oh yeah, I see the value here. So that's at least been a good signal for us. And then on the inbound side, what's been interesting is most of our inbounds are actually the smaller-to-medium-size audience, where they're really
in the trenches. They're really feeling the pain, and these are not questions that they can leave unanswered, but it's just not time that they can afford to be spending, a day on Google, or a week, or whatever it is. And so we're watching the distinction between where people are in their various stages of adoption, but also what they're willing to settle for, which is probably the wrong phrasing, but, you know, they're like, this is a solution, it feels like it can do what I need, and I'm all for it. Whereas with these outbound calls and these Fortune 500s, of course
they understand it, but then they have to go through procurement and socialize it with a few teams, even though they get the business case. So there are definitely a few segments there.
David J Bland (39:52.405)
Yeah. If you think about the user, the customer, the person that controls the budget, and then the person who's the decision maker: in smaller, mid-sized companies, they might be combined, or maybe two people. But when you get into bigger companies, it's at least three, maybe more. And depending on your price point, you hit a certain magic number, that you're not aware of, that requires them to go off and get approvals for things. And that makes it more complicated too. So I was thinking through that process of,
what's your value prop to each of them? And I could see why, with the inbound that you're getting from medium-sized companies, they kind of get it, right? It's like, I can't spend all my day doing this.
They say, well, I just go to Google Trends. And I was like, well, you're going to get some information from Google Trends. And they're like, I can't do this all day. I can't spend all day just looking for terms that my customers would be searching for, and then finding a bunch of related terms and going down that rabbit hole. So I could see it becoming overwhelming really quickly. And I also see, in some of my bigger companies, with research, it's not like we have researchers across the entire company just hanging out waiting for you to ask them to research something. I mean, they are completely overwhelmed, and they're usually spread way too
thin, and the teams I'm working with want to go fast. And so we're always in this balancing act of, we want to include you and your expertise, because you're amazing at research, but we also have to make some sort of directional decision in a couple of weeks, and I can't even get on your calendar until a month from now. So I've been in that mode of really trying to help teams navigate that
and say, can we generate enough evidence to make a decision and also try to include people? But it becomes really hard to do so when it just seems so overwhelming. Honestly, I just feel bad for these teams that want to go fast and make good decisions, but feel like they're not supported to do so.
Victoria (42:1.604)
That's sort of where our bet is.
You know, we talked about niching down into hyper-specific solutions. So the continuum here is: you can need a big research project, but then have this dedicated team in your company that doesn't have time for your research project until next quarter. So then your next step is, okay, well, I'm going to go agile here. There's a million different ways I can get all these different insights. I break it down to the different types of insights I might want. From a keyword or search perspective, there's AnswerThePublic, Moz, SparkToro, you know, 17 other similar tools across the web that I can use. Then I think about my consumer insights; there's another 15 different tools I can use. I think about social media listening. And then I'm like, well, shit. I mean, forget about doing due diligence on which one of those tools in a given bucket I should spend my time on, and what's the cost, and what am I actually going to get, and what am I not going to get. Now I suddenly went from this big project, where I just had one blocker and a budget and a timeline consideration, to all these different tools. So our thesis is:
get rid of all these million, you know, tools. Not get rid of, but focus on your couple of favorites, and then also you can come to us. We can do the scraping, we can do the 10-Q and 10-K mining, we can do the social listening, whatever else. And there is a scenario... I just think that agile is definitely, to your point, the way it needs to go. There's so much information out there that we should be using, and we're frankly being ignorant if we're not, whether that's just newsletters and developments in our category or all the
1,500 different things that we can be looking at. But then we get to this problem where it's too much. It's like, how do I find the gold, the needle in the haystack, or whatever? And so the tools that simplify and aggregate and then make it easier, yes, through AI, sure, but AI is a means to an end; it's not just about a quick little easy tool to then compare 1,500 of them.
David J Bland (43:55.114)
Yeah, and the challenge with AI is you don't always get the same results back. So I'm always nervous about people just clicking regenerate until they get the answer they want, the one that confirms whatever they wanted to build. Going forward, it's a fun yet scary time to be in tech, I think. But yeah, I love the journey you mentioned, going everywhere from,
Victoria (44:4.964)
Yeah. Yeah.
David J Bland (44:20.393)
hey, this is how I work with big companies, to here's how I'm tailoring my approach to supporting smaller and medium-sized companies doing all their desk research, but also really living and breathing some of the principles of: look, we have risks, we need to talk about them openly, and we need to go test them. And even though we're providing desk research, we're not immune to this way of thinking; we have to actually do this ourselves as we get really fine-tuned into the solution
for the niche that we're focused on. I just really want to thank you so much for going through that journey and being very open and honest and explaining everything to us. If people would love to connect with you and learn more about what Wonder does or how you could help them, where would they go to contact you?
Victoria (45:5.680)
Yeah, they can pop on and find me on LinkedIn at any time. They can also visit our website, askwonder.com, and there you can either book time for a call or see some of our articles and frameworks and resources, to the point of the hundreds of AI desk research tools out there. If you don't want to use Wonder, we've got 199 other options you can sift through. But yeah, LinkedIn or otherwise, our website will be the best way.
David J Bland (45:30.613)
Thank you. We'll include all those links on our podcast detail page, so if you have any questions about desk research, you'll be able to reach out and contact Victoria and team. I want to thank you so much for sharing your story and hanging out with us today.
Victoria (45:43.044)
Yeah, it's been a great conversation. Thanks again for having me.