How I Tested That

In this conversation, Amer Abu Khajil shares his journey from civil engineering to the tech world, emphasizing the parallels between building physical structures and developing products. 

He discusses the importance of testing and validation, drawing lessons from his engineering background, and outlines the process he created at TTT Studios for product development. 

Amer highlights the distinction between problem and solution validation and offers techniques for conducting effective customer interviews, ultimately stressing the need for a structured approach to understanding customer needs and segmenting them effectively. 

We explore the journey of building Perceptional, a tool designed to enhance user research through AI. 

Amer and David discuss the importance of understanding customer segments, the process of validating ideas, and the balance between automation and human interaction in research. 

Amer shares insights on product development, testing methodologies, and the challenges of creating a sustainable business model in the startup landscape.

Is your innovation pipeline clogged?
  •  Uncover the risks, bottlenecks, and gaps holding your best ideas back.
  •  With the EMT Diagnostic, you'll get a clear, actionable plan to fix them.
👉 Book a free discovery call at https://www.precoil.com/innovation-diagnostic

What is How I Tested That?

Testing your ideas against reality can be challenging. Not everything will go as planned. It’s about keeping an open mind, having a clear hypothesis and running multiple tests to see if you have enough directional evidence to keep going.

This is the How I Tested That Podcast, where David J Bland connects with entrepreneurs and innovators who had the courage to test their ideas with real people, in the market, with sometimes surprising results.

Join us as we explore the ups and downs of experimentation… together.

David J Bland (0:1.099)

Welcome to the podcast, Amer.

Amer A. Khajil (0:3.308)

Awesome, thanks for having me David.

David J Bland (0:5.121)

I'm so excited to have you on, because I became aware of Perceptional several months ago and it really piqued my interest, because I'm really into discovery work and how to augment that using AI and everything. But I would love for you to share some stories with our listeners about what led up to that idea and some of your background.

Amer A. Khajil (0:27.396)

For sure, yeah, happy to share. Thanks for having me on. I've listened to a few episodes, really love this topic of testing things, and it's obviously related to some of the stuff I do myself. So, very glad to be on the podcast. So yeah, thanks for having me. Yeah, I can talk a little bit about myself and kind of my background. My background's actually in civil engineering, and then I pivoted into the technology and product world after doing an MBA. I still tell people, I guess, that I used to build buildings and now I build products and...

kind of making those sound like the same thing, and they actually are surprisingly very similar. And so yeah, that's kind of how my journey really started early on. But yeah, after going to business school, doing an MBA here at the University of British Columbia, I landed in the world of technology, let's call it broadly. I had some interesting roles. I've always been involved in entrepreneurship, intrapreneurship, and innovation in a few different ways.

One of the first things I did was working as a director of product and innovation at a local software development agency called TTT Studios. And there I was lucky enough to have a role where essentially I got to build startups within that company. I had the opportunity to launch two startups there, from idea, prototyping, validation, and launch through some early revenue and traction. And also got to...

create, and I think really the thing that got me started in this world and diving deeper into it, is actually creating a new product development process for that company. It really got me to think about, well, what is an idea? How do we validate an idea? How do we, not just myself, but potentially as a team or as a company, decide where do we spend our time and money and effort? And so that was a very formative work experience as part of my career journey.

I then spent some time at Amazon as a product manager in the global store division there. So again, got to learn a lot about how a big company works, how to really influence people, get your ideas through, even just prioritize them inside your team, but also within a bigger organization. And again, I got to work on both new and existing features during my time there. And then, after leaving Amazon, that led me to Perceptional.

Amer A. Khajil (2:52.301)

I could talk a bit more about the genesis of that and how it came about. It actually was not the first thing I was planning to work on when I left Amazon. And if anything, my time and experience in validating things is what ended up leading me to Perceptional.

David J Bland (3:5.349)

That's such an interesting journey. And I'm always curious when I talk to people who feel like they have these really big pivots in their career, but there's this underlying sort of foundation or commonality. You know, because I've met people that did urban planning and then they went into business consulting, but it was sort of like the flow was the commonality there. So maybe you could explain or dive into a bit

of that underlying connection, because you mentioned, well, when I pivoted, building products is still kind of similar to building buildings. So maybe you can elaborate a bit on that.

Amer A. Khajil (3:43.459)

Honestly, it is so similar that I feel like every civil engineer, I don't know if it's maybe engineering in general, needs to graduate with a PM certificate as well. Because ultimately, fundamentally, you're getting to an end point, right? In civil engineering, we're always working towards delivering our project, which is ultimately a physical building. But in order to do that, there's all these different stages where we're working with an architect to plan our initial designs, the structural engineering designs.

Then, once we have alignment with our client, going out and actually doing more robust designs. And then, once you have that, you're creating specs for the manufacturers, if they're creating a steel structure or a prefabricated concrete structure. And then you're reviewing those, and then you're on site, seeing how things get built and ultimately getting that final product, which is a building.

And I got to do some cool projects at the YVR airport here. To this day, when I travel, I see some of the work I did, so I'm very proud of that work. But yeah, it's so similar to product management and a startup. I'd say definitely closer to product management; there's definitely aspects of it like a startup. But it truly is like you're managing a project, you're managing stakeholders. On one side it was obviously the client and architect, and on this other side it is, again, often your customers,

your development team, your designers. Between designers and architects, I'd say there's a lot of parallels there. And you're going through these different stages, right? In civil engineering, you're reviewing specs and designs and things like that. You're doing the same thing as a product manager or a startup founder. And then ultimately, again, getting to that end point. Probably the biggest difference is that, especially within the startup world, you're there to, let's say, maintain things and obviously have them evolve over time. Whereas with buildings, it's a little bit more final. I guess you can go in every

decade or something to make some tweaks and updates to your building, but in general, once you deliver a building, it kind of is what it is. But you know, it's very similar: a lot of project management, a lot of stakeholder management, prioritization, and just technical design as well, right? Like a lot of that is spending time at your desk, planning what you're gonna build and what you're gonna design and how it's gonna work and look and feel.

David J Bland (5:56.351)

Yeah, I just flew out of that airport a couple of weeks ago. So now you have me interested. I'm like, oh, did I see any of your work there? So I'm thinking about, not to belabor the point and talk too much about the parallels here, but it's very interesting to me that you see that connection and the similarities. And I'm thinking, how do you test, you know, or as

Amer A. Khajil (5:59.480)

Nice, yeah, very cool.

David J Bland (6:22.548)

a civil engineer, how do you start testing without investing a lot there? And did any of that help inform your work in tech?

Amer A. Khajil (6:31.479)

100%. So I think the biggest thing I learned back then is that anything you do for your work, you're giving options, or optionality. That was one thing that my manager at the time drilled into me when we were doing the work in civil engineering, and that really translates very much to validation in a startup or product, where you don't just say,

okay, here's the problem we want to solve, here's the solution. You're like, oh, here's a couple of solutions, here's a couple of ways I could look into this, and then go and test those out. And so, some of the examples there from my civil engineering career: when we would work, let's say, with the airport, they would do a lot of early-stage studies, because they're not going to invest a lot of money into things before they know how much they cost. And so they start off by saying, okay, well, we want to build a parkade.

To even get a good estimate of what this is gonna cost, we need to see options on what kind of structure, what kind of material we're gonna build it with, what's the ideal dimension of every section of that parkade. And I still remember going and making five, six, even I think in some cases ten different potential alternatives, whether it's changing materials or changing the dimensions. And then

kind of bringing them back to my manager, figuring out how we can best summarize them, present them to our client, and then ultimately take that back to the architects and the client to see how do we move forward with this and get better estimates. And funny enough, we did work on the early stages of the parkade for YVR, and that was a project where our company ended up doing the study portion, so kind of the analysis, but then didn't end up getting the work for the final project. They did get some

other big work at the airport, but it wasn't that specific project. But yeah, it did involve a lot of that. I'd say the big thing there is really the options, really understanding that there's not always a definitive answer for anything. You're kind of trying to create and look at things from different angles to come up with different options. And I think for me, that's a big part of it: when I think about validating a solution, I'm just not coming at it with one set idea.

David J Bland (8:41.952)

Yeah, I like that. So you took what you learned as a civil engineer and used that in tech. And then you mentioned TTT and creating your own sort of process there. Can you elaborate a little bit on how you think through this process, or your experiences there, and how that maybe was informing where you've taken it now?

Amer A. Khajil (9:4.070)

For sure. Yeah. So my time with TTT was super interesting, because when I got in, the team already had an initial prototype for one of the ideas. And so really the first year I was there, we actually were only working on the one concept, which was a facial recognition check-in solution for events and conferences. So that one I got to see from early prototype through, essentially, validation, validating the use case, because there were a few different use cases we could launch with.

And so really the process came after that. In a sense, what I was able to do is almost dissect what worked in our launch for that concept, because that concept was successful, to really figure out: if we want to replicate that kind of success, what can we do? What can we do to make the process better, faster, and so on? And a big part of it obviously was spending time on the ideation stage and figuring out what we even wanted to look into.

But once we did, I guess the biggest thing there, and something I'll maybe talk about at a few different points in this podcast, was separating validation into problem validation and solution validation. Because often those get so mixed up. Less so, I'd say, in professional settings like a company, whether within a product management role or a UX researcher role, but startup founders tend to put validation in one general bucket.

And really, I think the process we created at TTT was saying: whatever the ideas are, even if we have solutions, let's distill them to their core problem, go out and talk to people just about that problem, and understand if that problem actually exists. And then once we feel confident that that's the case, we can go and, again, create some options for the solution and then go validate those solutions. And so that was really a key part of the process. And then obviously we would go into, okay, how do we...

Then maybe once we get there, do some more prototyping, validate that stage as well before going on to build. And then, yeah, a big part of the process for us, because it was this kind of innovation hub within a company, was thinking about how do we align as a team at every stage, right? So kind of bringing back the results of every stage to the leadership team, to be like, all right, here's where we're at.

Amer A. Khajil (11:26.835)

What are we thinking? So there's kind of these check-ins, and I think I called them gates. I still sometimes use that term. There's these multiple gates as you go through the process, where we're not gonna take anything for granted. We're gonna try to pause and question ourselves at those gates. And in that case, actually, one framework I used there, I'm not sure if I'm gonna get the name right, but I think it was literally just called the Venture Capital Assessment Framework. So a big part of it was how VCs assess ideas.

And I think that's a framework I had a spreadsheet of from business school. So again, we took that and adapted it to: how would you assess a startup idea based on that framework for that gate?

David J Bland (12:11.098)

Yeah, it's very similar to how I approach things as well. I think it's smart to really deeply understand the problem before you jump to the solution. Otherwise, you have a solution that's kind of seeking a problem, right? And you have a runway, and you either find it or not. Quite often when you think about those interviews, one of the misconceptions I find with my team is, surveys and interviews I think are a great place to start most of the time, but I think sometimes I hear this:

Amer A. Khajil (12:21.908)

For sure.

David J Bland (12:40.071)

When we say validation, I hear, well, we talked to a customer and we validated that. And I'm like, did you really validate? We have some directional evidence, and I think we might need to hover there. And so I do think speed is always a challenge, because you're trying to balance speed against the amount of confidence and evidence you have. But I think the way you framed it, this idea of...

Amer A. Khajil (12:44.349)

Mm-hmm.

Yeah. Yeah.

Amer A. Khajil (12:59.860)

Mm-hmm. Mm-hmm.

David J Bland (13:4.668)

Do we deeply understand the problem? Then let's move on to potential solutions, not a solution. And I think that's a very smart way to approach it. So what tips or techniques do you recommend for maybe not getting too excited and jumping ahead when you hear what you want to hear in just one interview? What helped you sort of slow down to speed up there?

Amer A. Khajil (13:25.203)

Yeah.

Amer A. Khajil (13:30.278)

I think a big thing is kind of planning out what counts as validated before you go into these things, right? So for us, again, having that process created, and then doing this stuff later on, really made me think about: before I go do interviews, what am I going to consider to be validation? Because interviews are naturally qualitative. And so it's very difficult to say, like, yeah, you talked to one person, you got excited, and that's it. And trust me, I've talked to people, and sometimes

the most excited person is the person that will ghost you later on when you launch your product. And so it means absolutely nothing that someone's excited about your idea. And so for me, a big part of it was, prior to going into, let's say, problem validation interviews, and obviously we're not going to get too much into the weeds, I have a hypothesis, and I create questions that will help me validate or invalidate each hypothesis. So every question in my interview or conversation ties back to a hypothesis that I have.

And then I'll try to create a feeler, some kind of metric, since again, a lot of this is qualitative. And I've done ones where someone who's the most on board would be a champion, and someone who is kind of in the middle or just below the middle is a skeptic. I can't remember the exact levels, but I created five, so it's almost a scale from one to five for each kind of response.

And having these words really triggered where, if someone I spoke with was on board, I'd be like, okay, this person is aligned, this person is a champion, this person is a skeptic. And then after each interview, I'm not only thinking about how I felt after the interview, I'm going through the transcript. Well, it's not really a transcript; I'm often just taking notes during these interviews. And I'm trying to rank per question, based on the response, or per answer, I guess:

where do they rank, from one to five, on that scale? And then I look at, okay, well, that question was tied to a hypothesis. So for that person, for hypothesis one, two, three, how do I feel? Did they validate it, invalidate it, or was it inconclusive? And that's for one interview, but then obviously, before I go into all this, I have a target, to say I want two-thirds of the interviews to validate my hypothesis, or something along those lines.

Amer A. Khajil (15:50.675)

And again, that level of validation, what is validation? The way I like to describe it is, it depends on you as a founder, or as a product manager, let's say, and your risk tolerance. So if you're very risk-taking, you could say, you know what, I'm happy with a 50-50 hit rate. If half the people I talked to are on board, that's great; if half are not, I'm okay with that. But if you're risk-averse, you might say, you know what, I really need eight or nine out of ten people to be on board to help me validate that hypothesis for me to move on.

And so that's really how I've created my own process. But it really is just being that granular with every step. And like I said, I think the biggest thing is planning, sorry, creating your scorecard, let's call it, before you start the whole process.
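
To make that concrete, here is a minimal sketch in Python of the kind of scorecard Amer describes: every interview question maps to a hypothesis, every answer gets a one-to-five rating from skeptic to champion, and the validation target is set before the interviews start. The question text, hypothesis labels, and cutoffs below are hypothetical placeholders, not Amer's actual spreadsheet.

from statistics import mean

# Hypothetical scorecard: every question maps to a hypothesis,
# and every answer is rated 1-5 (1 = skeptic ... 5 = champion).
QUESTION_TO_HYPOTHESIS = {
    "How do you collect qualitative feedback today?": "H1",
    "How much time do you spend scheduling interviews?": "H2",
}

LABELS = {1: "skeptic", 2: "doubtful", 3: "neutral", 4: "aligned", 5: "champion"}

def score_interview(ratings):
    """ratings: {question: 1-5 rating} for one interview.
    Returns the average rating per hypothesis."""
    by_hypothesis = {}
    for question, rating in ratings.items():
        by_hypothesis.setdefault(QUESTION_TO_HYPOTHESIS[question], []).append(rating)
    return {h: mean(r) for h, r in by_hypothesis.items()}

def validated(all_ratings, hypothesis, target=2/3, cutoff=4):
    """A hypothesis validates when at least `target` of the interviews
    average `cutoff` or higher for it; the target is chosen up front
    to match your risk tolerance (e.g. 0.5 vs. 0.8)."""
    scores = [s for r in all_ratings
              if (s := score_interview(r).get(hypothesis)) is not None]
    if not scores:
        return False
    return sum(s >= cutoff for s in scores) / len(scores) >= target

# Example: two interviews, checking hypothesis H1 against a two-thirds target.
interviews = [
    {"How do you collect qualitative feedback today?": 5},
    {"How do you collect qualitative feedback today?": 2},
]
print(validated(interviews, "H1"))  # False: only 1 of 2 interviews averaged 4+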

David J Bland (16:40.478)

Yeah, I think it's smart to tie it back to a hypothesis. I mean, that's what I recommend as well. We do something like assumptions mapping, and then we get the most important assumptions with the least amount of evidence. And those usually involve the customer early on. So getting those and then saying, okay, we're going to go off and test to see if we have evidence to support or refute that. And we're going to need a script. We don't have to read from it, but we should actually think through what we're trying to learn. It's not just hanging out with people and talking, which I think sometimes customer interviews end up

Amer A. Khajil (16:52.496)

Yeah, yeah.

David J Bland (17:10.381)

being, unfortunately. And that doesn't really help you come back and say, wait, what did we learn from that? Did that support or refute? And even if it's directional early on, you don't have to have 100% confidence, but you do need to tie it back. So I love that you're tying it back to a hypothesis.

Amer A. Khajil (17:26.246)

Just a quick comment on that too, especially with problem validation. Because with solution validation, you might be coming with Figma designs or a prototype or something to get feedback on. But I find with problem validation, it's often like you're asking them about what they do in their job, how much time they spend on certain tasks, and those conversations can sometimes go off the rails. And so really being clear on what questions you have,

maybe prioritizing your questions. Often, before I go into these calls, I'm thinking, okay, these are the must-ask questions and these are the nice-to-have follow-ups. That's really important when you're testing at that problem stage.

David J Bland (18:8.864)

agreed and then looking in, like what you said about having them talk about the past and not just future hypothetical situations. So how much they've spent trying to solve this problem or did they solve it or not or what other things did they hire to do this, you know, from a jobs to be done perspective. But I was thinking through.

Amer A. Khajil (18:14.417)

100%.

Amer A. Khajil (18:22.873)

Mm-hmm. Mm-hmm.

David J Bland (18:27.598)

this sort of, are these champions or not? And one sort of maybe anecdotal story I heard once, when we were working in a lab out of San Francisco, someone said: it's like Vaders, Wookiees, and C-3POs. The Vaders are not gonna be your customer; no amount of convincing is probably gonna tip them over. Whereas Wookiees are kind of in the middle, where they're kind of stubborn, and eventually they might come over, but you have to do some convincing and nurturing over time. And then the C-3POs are like,

Amer A. Khajil (18:41.466)

Hmm.

David J Bland (18:57.505)

they just convert super easily, and they get really excited. But then again, you have to nurture that, because later they might not show up at all. So I love that. That story always stuck in my head: okay, is it a Vader or a Wookiee or a C-3PO? But I like that, because you end up with so many different sub-segments, and anti-segments sometimes, that it's hard to put them all together into one,

Amer A. Khajil (19:4.858)

For sure.

Amer A. Khajil (19:12.458)

I like that.

David J Bland (19:22.681)

let's say, persona or Value Prop Canvas or Empathy Map, whatever tools you're using. So I love that you are very methodical about it and you're really trying to find out how to fine-tune

your solution or your value prop to the potential segments. And quite honestly, that happens a lot: we go off into these interviews, we think, oh, we have a target segment, we're going to interview like 15 or 20 people in that segment. And then you realize, oh, actually, half of those were sort of this other segment. And then you have to be careful, because you have to apply some visual management and some process to that, or it can get really confusing in your head sometimes.

Amer A. Khajil (19:54.512)

100%. Yeah, I think for me, spreadsheets. I document way too much, especially for a solo founder. I just need to write things down and have them organized in spreadsheets, to organize my thoughts. Otherwise it gets really messy. So yeah, I think that's my go-to thing, and it may be a bit of a superpower; I'm one of those people that loves documentation. But you know, something I will say there too: it's easy to just kind of get stuck in the process. And so definitely,

I think a key thing here is that each person is different. So what I like to encourage people to do is: have a system, and use the one that's going to make you take action, right? Because again, a lot of people will come talk to me, and certain things I say might feel pretty complicated to them, or just too tedious. And I'm like, what can you do? What can you do to feel more confident? Because ultimately, validating something doesn't mean you're going to be successful.

Again, the way I describe it, it's an act or exercise in de-risking moving forward. And so, okay, if the whole process and all that kind of stuff is a complicated thing for someone, not everyone's a UX expert or UX researcher, we say: okay, what do you think you can do and achieve and accomplish to feel more confident in moving to this next stage? To start, I think that's fine. You need to start somewhere.

And that's really what I've encouraged some of the founders I've worked with to do: really think about how you break it down into little pieces.

David J Bland (21:33.138)

I like it. I like breaking it down into smaller chunks so you can attack it. So it sounds like TTT was very influential in your thinking, or maybe you started to codify your process a bit after that. Let's fast-forward a little bit to Perceptional. So where did the idea of Perceptional come from? Was it some of your TTT experience? Help us understand the beginning of Perceptional, and maybe explain a bit what you're doing there as far as testing goes.

Amer A. Khajil (21:59.600)

For sure, yeah. So, Perceptional wasn't the first idea I had when I left Amazon. When I left Amazon, I kind of left being like, okay, you know what, I've done startups before, and big corporate might not be the best fit for me at that point in my career. So I was like, you know what, I want to go back into the startup world and this time around try to do something on my own. Because obviously with TTT, I did these startup ideas within a company. And I was like, maybe I want to give it a go on my own.

And so I left with a huge backlog of ideas that I had accumulated over the years, and took and adapted the process I created at TTT to start really thinking about, well, how do I go about validating ideas to meet my own threshold of acceptability, where I can feel confident moving forward with one of these ideas? And so, yeah, I really just started working through three or four

concepts across e-commerce, climate, and, I'm trying to think what was the other one, event technology. So it was stuff that I'd either been interested in, played with in the past in the product world, or learned about in my past work. I had a couple of ideas in those spaces, and again, went through what I described earlier, right? So I started with, okay, well, what's my hypothesis? I've maybe seen some industry reports that say this is a problem, let me go validate them. And

again, I went through that validation process I was describing earlier. But then I started realizing I'm using AI at these different points in the process. Because a lot of the OpenAI stuff didn't exist when I was at TTT at all, and really came up over the time I was at Amazon. Actually, at TTT, I do think GPT-3.5 was starting to creep up, and we were starting to see the earliest products that were using GPTs, but ChatGPT itself didn't exist.

And so anyway, as I was validating those different ideas, I was looking at, well, where am I using AI in my own workflow? And for me, it was, again, different places. It was a little bit for generating questions; it was a little bit for summarizing, synthesizing some of the feedback from the interviews that I was doing. And really, as I was doing some of that stuff, I was like, okay, well, where can I use it

Amer A. Khajil (24:15.628)

where it's not just as easy as copy-pasting into ChatGPT, and not just building another ChatGPT wrapper? Where we implement it somewhere that is a bit more bold, and maybe a bit more risky and not going to be palatable for everyone, but a place that would be an interesting challenge. Because again, I did want to take a risk with this next idea or concept. And that's really where the genesis of Perceptional was. I was like, well, why don't I try to use AI to automate the middle part, which is

the conducting of the interview. And the way I was thinking about it is, let's see how close to a human interviewer this could be, knowing it's not really gonna get there anytime soon. But the hypothesis, I guess, for me was: we can use some of the foundational technology to make something that's better than a survey. And that's really how it started. I looked at the work I did that summer,

reflected on how I've had that problem before, at TTT, even at Amazon, where we've talked to customers and things like that. And I thought, okay, well, why not try to take a chunk out of that middle part, knowing again, we're not gonna replace human interviews, that's not really the plan anytime soon. But can we create something that's better than a survey? And yeah, that's how Perceptional came to be.

David J Bland (25:30.512)

I love that story, born out of a need. I love how you applied your own process. I think sometimes we talk about processes, but we don't apply them to our own stuff. And that's very humbling. Most of the time, we're like, ooh, maybe that wasn't such a great idea after I tested it a little bit and tried to validate it. So you said something really interesting there with regards to

Amer A. Khajil (25:41.773)

Appreciate it.

David J Bland (25:51.966)

almost like augmenting or automating some of the middle parts. So maybe you can get into a little bit of detail about what that means and what that entails, and how something like Perceptional assists you, and how it's better than a survey.

Amer A. Khajil (26:5.523)

For sure, yeah. And remind me, we can also touch a bit more on how we tested Perceptional, because the story there is pretty funny as well. But yeah, the way it currently operates is: a product manager, startup founder, or market researcher, those are, I'd say, our three main customer segments that use Perceptional today, they come into our platform, and

they're able to create essentially a chatbot. And the way we describe it is, you're creating a chatbot that's good at asking questions, not just answering them. And you provide some context about your product or service. So that's just a description, the product name, the objective of that study, and the series of questions that you want to ask or that you want answered. So again, very similar to setting up a survey, but you're providing it with that extra context of

what's the overall objective, and a little bit more context about the product, so the chatbot that you launch has that context. And yeah, we essentially create a chatbot instance for you, and then you send that link to whoever you want to respond to your survey. And so the respondent gets this, and once they put in their email or name, depending on what demographic information you want to collect upfront, they just start having a conversation with the chatbot, and the chatbot asks questions.

The way I describe it is, if the respondent is doing what they need to do, it actually isn't different from a survey. Think about some surveys where you have respondents that are amazing: they fill out every text box, they're describing everything really clearly. We actually don't add any value there, I'll say that. But for the big percentage that we see in a lot of surveys, where people are just putting in one-word answers, that kind of stuff, that's really where we shine.

And so if anyone's being very vague, if anyone's providing very short responses, the chatbot knows to nudge them towards providing a more elaborate answer, a more extended answer. So it's just asking things like: can you elaborate more on this? Can you provide an example of this? So we're really trying to take characteristics that we often associate with a human interviewer and pull them into this chatbot experience.

Amer A. Khajil (28:23.116)

And so that's really that middle part that we focused on fine-tuning our AI model to do. Yeah, and a key piece there as well has been to make sure that, at the end of the day, we're also able to analyze all those transcripts. So once you have all those transcripts, you might have 10, 20, 30, 40 full transcripts. Now, how do you summarize them? How do you summarize each one? How do you summarize them all in aggregate as well?
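
As a rough illustration of that middle part, here is a minimal sketch of a probing interviewer loop: a study set up with product context, an objective, and questions, plus a single follow-up nudge whenever an answer looks terse. The config fields, the word-count heuristic, and the follow-up phrasing are assumptions for illustration, not Perceptional's actual implementation.

import random

STUDY = {
    "product": "Acme Notes",  # hypothetical product name and description
    "objective": "Understand how users capture meeting notes today",
    "questions": [
        "How do you take notes during meetings today?",
        "What is the most frustrating part of that process?",
    ],
}

FOLLOW_UPS = [
    "Can you elaborate a bit more on that?",
    "Could you give me a concrete example?",
]

def too_terse(answer: str) -> bool:
    # Crude stand-in for whatever vagueness signal a real model would use.
    return len(answer.split()) < 8

def run_interview(ask):
    """ask(prompt) -> respondent's answer. Returns the transcript."""
    transcript = []
    for question in STUDY["questions"]:
        answer = ask(question)
        transcript.append((question, answer))
        # Nudge once on short or vague answers, the way a human
        # interviewer would dig deeper.
        if too_terse(answer):
            follow_up = random.choice(FOLLOW_UPS)
            transcript.append((follow_up, ask(follow_up)))
    return transcript

# Try the flow in a terminal: answers under 8 words trigger a follow-up.
if __name__ == "__main__":
    print(run_interview(input))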

David J Bland (28:50.746)

Okay, so like any good interviewer, you would dive deeper, right? If you had a one-word response to one of your questions, it's like, oh, could you explain a little bit more what you mean by that? So it sounds like you're trying to take that and put it into this sort of more automated experience. I do like that. It helps me understand it a bit more. Go ahead.

Amer A. Khajil (29:9.840)

And one thing I'll touch on here as well, on the product itself: obviously, before we built it, that was the assumption, and we just did some quick testing of the chatbot experience. But we were able to A/B test after we launched, and we did some of our own studies where we gave the same questions in a survey format and a chatbot format,

and saw how much information came back. And so we've done these studies where we saw that with Perceptional, we got four to five times more qualitative response. So literally more text, whether more characters or more words, than the same questions being asked in a survey.
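
For a sense of how such an A/B comparison can be tallied, here is a small sketch that counts the words coming back from each format; the sample answers are made up for illustration, not Perceptional's study data.

def word_count(answers):
    """Total words across a list of free-text answers."""
    return sum(len(answer.split()) for answer in answers)

# Hypothetical responses to the same question in each format.
survey_answers = ["Too slow.", "Fine I guess."]
chatbot_answers = [
    "It's too slow when I export, especially for long documents.",
    "Honestly it's fine, but search could be better. For example...",
]

ratio = word_count(chatbot_answers) / word_count(survey_answers)
print(f"chatbot/survey word ratio: {ratio:.1f}x")  # 4.0x on this toy data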

David J Bland (29:52.837)

Okay, I like that. So coming back, maybe stepping back a bit to what you mentioned before about how you tested: can you share a little of how you tested the idea of Perceptional? What were some quick, scrappy tests where you were saying, what's our hypothesis, and let's go test that? Maybe shed some light on how that came to be.

Amer A. Khajil (30:12.373)

For sure. Yeah. And so it was funny to test, because I was creating a tool where I was trying to say: I want to create a tool that will automate the thing I'm doing right now, help me automate that process. So it was always a little bit meta to try to describe it to people. But yeah, I went really through the process I described earlier. I just started off with the problem. Having been a product manager, I reached out to fellow product managers that I knew and didn't know, interviewed them about their workflow, and got to talk to some UX folks as well, because they're obviously also involved in that process.

And yeah, it really went through: tell me about your process. How do you collect information today from your customers? If it's qualitative, how do you collect it? Is it surveys, interviews? And so I really tried to understand how they currently do things today, and then what their biggest pain points or time sinks were. And that involved things like scheduling meetings and rescheduling meetings; with a lot of user interviews

or customer interviews, you're not really the priority, so a lot of those tend to get rescheduled and shifted around. We heard the same problems with surveys that are very common, which is people maybe start and don't complete it, so we don't have the data, or people aren't really providing enough feedback or context. So that was something that we saw come up. And then after that, once I felt confident that this was a problem that I wanted to dive deeper into, I created a pretty lightweight

chatbot experience using the OpenAI playground. And I also had some initial Figma designs, and really, in those solution interviews, I was able to show people the Figma designs and that chatbot experience, and people even got to play around with it a little bit. And actually, that almost helped get us through some of our early product development, thinking about the edge cases: when people see a chatbot, what are they likely to type in, and how do they want to trick it? What's their instinct to trick it?

And that really became our kind of early validation as well.

David J Bland (32:9.951)

Did you notice any difference between, you said, Figma as sort of a prototype, and something more interactive? What kind of difference do you notice when you're testing with people and talking to people, between a passive experience where they're looking at something versus interacting with something? Maybe share some of your insights there.

Amer A. Khajil (32:34.087)

Yeah, I mean, I think obviously the more real the thing is, the better. That's really when you get the more realistic feedback, I find. Because a lot of the time, things are going to be pretty conceptual when you're validating a solution; it might be pretty, I won't say generic, when you're doing some of this validation. And so Figma designs are great. I mean, even in Figma, you can do prototypes where you still walk them through a whole process flow.

But anytime that you can come to a higher fidelity, it's really gonna help and add a bit more value, I think, and really make things sound a bit more realistic and be a bit more realistic. And so for us, it was that prototype. I could have even gone a step further, where I did an, I wouldn't even call it an MVP, an even more lightweight kind of MVP, and we go out and validate that as well. But

for me, it was really combining that Figma with a basic, very basic one-pager chatbot that was kind of hard-coded, in a sense. And that's really what we were validating and testing out in those interviews.

David J Bland (33:45.244)

Okay. And so you got more confidence that you were on the right track from how people responded to those, and then you moved on, closer to where you're at now with Perceptional. What are some of the big things you're trying to test now, if you can share them? What are some of the big hypotheses you have, fast-forwarding to today with Perceptional?

Amer A. Khajil (34:7.849)

For sure, yeah. So I think for us, for me, it's just been about how to make the product better, right? The product's out there, we've had customers. And I think there are so many things, actually. I mean, it's a ton. So everything from the business model, which I've been questioning, to how people sign up, to how we would make the product functionality better. I think for me, one big learning so far has been: I'm still doing a lot of handholding with our customers. So even though it's a SaaS product,

because I think it's in the user research or product research space, people still kind of want to have someone walk them through it: well, what are the best questions to ask? How should I be using this? What should this replace? Or to help them fit it into their workflow. And really, what that has looked like for a few companies is I'm helping them almost do their,

you know, working with them, with the founder, with the researcher, to create that first draft of their research plan, to be like, all right, well, this is how you put Perceptional in your research methodology. That's really how it's looked, essentially working on something like that. But really, now for me, it's been thinking about: how do I create things in the product that help you do that? And so, we haven't really implemented these, but that's included everything from helping you

generate your questions, for example, or a draft of your questions, based on your objective, or maybe based on uploading the work you've done so far, if you have a research plan, to help you understand what kind of questions you want to ask. So that's definitely one thing to think about; that's, I'd say, one area. The other area, where I'd comment on the business model, is that a lot of individuals, even I'd say small and medium companies, sometimes don't have ongoing research needs every single month.

And so because we're a monthly subscription-based solution, we're finding sometimes people will join for a month or two and then kind of churn. And sometimes they'll come back months later, but the business model currently isn't conducive to that long-term customer that you can easily nurture. So yeah, I'm playing around right now with potentially some type of usage-based or freemium usage-based model. And then,

Amer A. Khajil (36:26.120)

don't even get me started on the features, because yeah, there's still so much stuff to build. We're still a pretty new company. And so really looking at understanding how we would go about testing things like an audio mode, or, if you want to test TYs within our platform, how we would go about doing that. And all that would go through kind of a similar process. I'd say on the problem validation side, everything I just mentioned I feel confident is a problem, because people have told us in one way or another through the product feedback,

whether it's through emails or follow-ups that we've done with them, that those are things they'd like to see. And so I think the next step is validating, okay, what options, kind of going back to the beginning of our conversation: if we were to create a solution for this, what would that solution even look like? If people are aligned that it is the right way to do it, then we'd go ahead and build it that way. But yeah, I mean, I think with any startup founder, there is a plethora

of things out there to improve. And it's really about prioritizing them and understanding, where do they add value to the business, and where do they add value more just to the customer, and maybe that ultimately has some benefit to the business. And yeah, there's no shortage of experiments to run there.

David J Bland (37:42.246)

It's like startups don't really starve; it's almost like you drown, in a way. There's so many things to do and so many possibilities. I think one of the ways we frame it is desirable, viable, feasible, right? So a lot of desirability is value prop and customer. Viability is a lot about the cost and revenue. And feasibility is like your backstage, your execution, obviously your team and all that. So it feels as if you do have some validation on the desirability side, and viability you're kind of finding your way with. And of course, you could spend more time making it

look better and perform better, but is that the most important work to be done? That's always that trade-off as a founder that's really, really difficult to make. But it sounds like people really value your sort of human touch, or at least your deep domain expertise, in this process. And maybe there's some productization of your brain there that goes into the product. But I'm wondering if it's just that people feel more comfortable with

Amer A. Khajil (38:27.431)

Mm-hmm

Amer A. Khajil (38:35.386)

Yeah.

David J Bland (38:41.943)

a human being involved, regardless of whether they're automating things.

Amer A. Khajil (38:46.251)

100%. I think there's something there as well. I mean, it's interesting: one thing, again, where we could do some further testing is with the people who are actually completing the Perceptional interviews, what their experience is like. So maybe it is just about getting feedback from that cohort as well. But for our customers, they want to automate something, but they kind of want to make sure they do it the right way. That's my sense. And like I said at the beginning,

from day one, I knew that we were taking on one of the riskier parts of user research, right? This is a pretty sacred part. And that's again why I tell people: do not assume that this is going to replace interviews; you should still go out and do interviews. Really, this is right now somewhere in the middle. For most companies, it's probably going to be a closer fit to replacing your qualitative survey, to get richer feedback that way, and maybe to get you to do fewer one-on-one user interviews,

if you feel confident from the responses you're getting in Perceptional. And so yeah, I myself am even very wary of automation. As you can tell from some of the stuff I described, for some of it I'm still out there talking to people, even though I have a product that's supposed to do that, right? I still go out to talk to people, and I use our product for the more basic stuff: for a lot of feedback, customer feedback, feature ideas, that kind of stuff. But if I'm going to go validate solutions, I'm going to go out and talk to people. And so

it really is about understanding when to use these tools. I think that's critical. And I mean, this is not just relevant for us; it's really for almost any AI tool these days: you as a user have to understand the limitations. Because otherwise, yeah, can you use it? Yes. But should you? Probably not. Should you use it exclusively? Probably not. And so I think that's a key takeaway with automation in general.

David J Bland (40:33.418)

I like how you're approaching it, though. It's almost like an extra team member, or a way to automate some of the stuff that gets in the way of what you need to learn. But you're not necessarily advocating that it replace critical thinking, or that you automate everything to the extent where you're just making decisions based on what the AI spits out. I like how you're going after one of the harder parts of the process that causes churn or causes some headaches for people trying to do the discovery and validation work. So I think that's a very smart approach.

We covered so much in this interview. We started with civil engineering, which I didn't necessarily think we were going to dive in on, but I love the parallel between that and product management. We went through your time at TTT and at Amazon, and how some of that influenced your thinking to the point where you wanted to create your own thing with Perceptional, and how you tested Perceptional. Such an amazing journey. And I think there's so much more that you're going to test and learn even after this session goes live.

If people want to reach out to you, what's the best way for them to get in contact with you?

Amer A. Khajil (41:37.667)

Sure, yeah. The best way to find me is probably LinkedIn. If you search my name, I'm easy to find there. And then if you want to check out Perceptional, our website is perception.al, so the whole name is kind of in the URL there: perception.al. But yeah, if you want to reach out to me directly, LinkedIn is the easiest way. I have a pretty unique name, so I tend to be pretty easy to find.

David J Bland (41:57.427)

So if any listeners are struggling with surveys and getting one-word answers, or you want to start automating some of the more mundane parts of your discovery and validation, then definitely check out Perceptional. Amer, I just want to thank you so much for being transparent with us, sharing your story and how you're testing things. Thank you so much for joining.

Amer A. Khajil (42:17.721)

Thank you so much for having me, David. Have a great day.