Hello, and welcome to the Bottom Up Skills podcast. I'm Mike Parsons, I'm the CEO of QualityNet, and we're at the second-last installment of our design thinking series. And in this episode, we're going to talk about how to test your product with recruits. Now, this is an essential part of design thinking.
Everything you do should be tested, should be validated with users. And what we're going to talk about is how you get people together to test your product. I'm going to focus more on the latter stage, so I'm going to assume your product is starting to come to life, at least in wireframes, maybe in InVision or Adobe XD, and you really want to get some feedback now.

If you're listening [00:01:00] to this and you're thinking, oh, I need something a little bit earlier stage, I want you to go back a few episodes in the podcast, because we go quite deep into how to do quant and qual research at the very beginning of the journey of building a new product. And of course, if you'd like to know much, much more and go a lot deeper, you can get our free masterclasses at bottomup.io. There's a masterclass that goes extensively into how to use SurveyMonkey, we obviously have the design thinking masterclass, and we have a rapid prototyping masterclass. So there's plenty there, much, much more, all free.
Go check it out. All right, let's get back to user testing. The first thing I want to say is that you should be testing all the time. If you're using an agile team structure and doing sprints, then every one or two weeks the work should be tested before it goes to the client, before it goes to the stakeholders.

It's really, really important to always be checking in, because you don't want to build a house of cards. You don't want to [00:02:00] assume things work when they don't, and build sprint after sprint, only to realize at some point that you're wrong and have to pull back all that work. So it should be before launch, during launch, and after launch. In fact, in a testing paradigm, a user-centric paradigm, or even broader, a design thinking paradigm, you should be testing.
All the time, all the time. Okay. So we've got some good material to test, we've got some sort of clickable prototype, and we want to get some feedback. I think the most important starting point I can share with you is that you've got to go out and get testers who are in your target demographic, your target use case, the persona, the archetype, however you're defining your customer.

Please make sure that they meet that criteria, and make sure [00:03:00] that you can test them in a fair and legitimate way. So make sure there's plenty of time, explain that you're doing a test, and don't rush through it. Make sure you have time and space to test properly with the right people. Now, you might think to yourself, this sounds incredibly straightforward and sensible, and you're right.
But here's the thing: in the rush of trying to launch a product, build a company, build a product, we often skip over these things and don't get the basics right. And if you don't have the right person in the test, well, all the testing results are invalid. Or, if you have the right person but you haven't created enough time and haven't made sure everything is right, you've also wasted that time.

So let's get into it. Now you've got the right person; you know the person you're testing is right in the target zone for your new product. I want to take you through how you might run a test with them. Now, at the core of product testing, [00:04:00] my strong recommendation would be to do task-based testing, meaning you set the user a task and watch them as they attempt to complete it.
Please avoid at all costs falling into some sort of diagnostic chitchat; you should have done that ages ago. This is raw, black-and-white, binary: did they complete the task? Were they able to do it quickly and easily? And did they do it to their satisfaction? So, task-based. Make sure you say, "I want you to find a restaurant," or "I want you to book an appointment with a hairdresser or a barber." Give them a specific task like that, and then watch how they go about it. Now, after you give them any sort of task-based test, you can ask them all sorts of questions, and we'll talk about some of the best ways to do that. But first, I want to give you [00:05:00] three types of task-based testing that you could use.
The first one is usability testing. The inherent thing you want to do with usability testing is remove as much friction as possible from the journey, from the flow, from the experience. So that's usability testing. Now, you might have user groups with special needs; let's say they have some sort of visual impairment, or maybe they'll be using your product in a high-noise, high-distraction environment. There are all sorts of things that could affect how you qualify your usability testing, and you want to try to create conditions as close to real life as possible.

The other thing you can do is launch your whole product to a closed user group. Traditionally you might have called that a beta group, or beta testers. You might even build a whole MVP and give it to a small cohort of people. That's another way you can [00:06:00] look at task-based completion. Now, as you move into this broader type of testing, you might need to use tools such as Mixpanel or Google Analytics to look at it.
So I would always want you to have some usability testing in person, really to get that visceral sense of how people are completing tasks. Now, as we've been talking about, you might do a combo: in-person usability testing, then a beta version, and then, as the product goes live, you can continue to test.

And what you could do, using tools like Optimizely, is run different tests where you serve content or interactive elements in different orders, in different ways, for different types of users, based on IP address and location. You might do all sorts of different variations. I think what you want to be able to do is look at their capacity to [00:07:00] complete an event, to complete a task.
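Serving different variants to different users, as described above, comes down to stable bucketing: each user should see the same variant every time they return. Here's a minimal generic sketch of that idea in Python; this is an illustrative assumption about how such tools work internally, not Optimizely's actual API.

```python
import hashlib

def assign_variant(user_id, variants=("A", "B")):
    # Hash a stable identifier so the same user always lands
    # in the same bucket, without storing any state.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The assignment is deterministic across calls:
print(assign_variant("user-123"), assign_variant("user-123"))
```

Real experimentation platforms layer targeting rules (location, device, audience) on top of this kind of deterministic assignment.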
This has always been the gold standard for me when I'm working on a product: if it's useful at all, there's a task involved, and you're helping people get it done. Now, if you're doing this in person, you want to take transcripts and ask questions at the end of the experience. Record it with the voice memo app on your phone, transcribe it, and make notes, for sure. That's really, really important: after the experience, ask them a series of questions.

Now, the other thing you can do is run a Net Promoter Score survey. Very simple: how likely would you be to recommend this product to friends and family?
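That NPS question is scored with a standard formula: respondents answering 9 or 10 are promoters, 0 through 6 are detractors, and the score is the percentage of promoters minus the percentage of detractors. A small sketch with made-up response data:

```python
# NPS = %promoters (9-10) minus %detractors (0-6); passives (7-8) are ignored.
# The result ranges from -100 to +100.

def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical before/after responses around a test session:
before = [7, 8, 6, 9, 5, 7, 8, 6, 9, 7]
after = [9, 9, 8, 10, 7, 9, 9, 8, 10, 9]
print(nps(before), nps(after))  # → -10 70
```

Comparing the two numbers gives you the advocacy shift discussed next.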
You can do that after the test, or even earlier: after you explain the idea, you could ask the NPS question, [00:08:00] then let them do the testing. After the test, you actually redo the NPS and see if you can create a shift in advocacy, because the experience itself might increase advocacy, sharing, and word of mouth. So NPS is a very powerful thing to do afterwards.

Now, if you want to get into a really heavy quant, statistical approach, what we would do on any of the three tests I mentioned is look at task completion rates. At the latter stages of product design and development, we would want to see a very high percentage of completion on your tasks. So just to be clear: if the core task of your product can be completed by less than, let's say, half of [00:09:00] the users in testing, you've got a huge problem. You've really got a big problem. You want to be as close to a hundred percent as you can be; something like 40% would be an absolute shocker, because you're way off where you need to be.

Everything that you capture in your testing can go into a spreadsheet, or, if you really want to go full on, I would always get it all into Dovetail.
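The task completion rate is simple arithmetic over your test sessions. A sketch with hypothetical session data, flagging the sub-50% danger zone described above:

```python
# Each session records whether the participant finished the task
# (illustrative data; in practice this comes from your test notes or analytics).
sessions = [
    {"participant": "P1", "task": "book appointment", "completed": True},
    {"participant": "P2", "task": "book appointment", "completed": True},
    {"participant": "P3", "task": "book appointment", "completed": False},
    {"participant": "P4", "task": "book appointment", "completed": True},
    {"participant": "P5", "task": "book appointment", "completed": False},
]

completed = sum(1 for s in sessions if s["completed"])
rate = 100 * completed / len(sessions)
print(f"completion rate: {rate:.0f}%")  # → completion rate: 60%

# Late-stage products should be close to 100%; under 50% is a serious problem.
if rate < 50:
    print("major problem: the core task fails for most users")
```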
You will have heard us talk about Dovetail a lot, as the place to house all of your quant and qual research. And that's really important, because if you have one repository with the lifetime of your product creation, with all the data in there, you can know whether you've really got something: you've got all the data to back it up.

And this is really important, because if you feel like you've hit product-market fit, you want to be able to show the data, the quotes, the charts that all back that up. Now, the last thing I want to give you here, a little bit of an extra goodie, [00:10:00] is closely related to NPS. It's called SUS.
The System Usability Scale has been around for quite a while now, I think 35 or 40 years, and thousands and thousands of usability tests have been done using it. And what's great about this scale is that so many tests have been done. You get a mark out of a hundred for your product based on how the user answers the questions, and you can know whether the usability is acceptable, marginal, or unacceptable. It's very powerful, because it just gives you a number; it's highly tested; it scales really well; it's easy to manage. You can use it on one person or a hundred people. It's actually super handy like that, and it's a bit of an industry standard, to be quite honest.
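For reference, SUS scoring works like this: ten statements answered on a 1-to-5 scale, alternating positive and negative wording. Odd-numbered items contribute the answer minus one, even-numbered items contribute five minus the answer, and the sum is multiplied by 2.5 to give a mark out of 100. A sketch with one participant's made-up answers:

```python
# Standard SUS scoring: 10 Likert answers (1-5) -> a score out of 100.
def sus_score(answers):
    assert len(answers) == 10
    total = 0
    for i, a in enumerate(answers, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (a - 1) if i % 2 == 1 else (5 - a)
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # → 85.0
```

A commonly cited average across published studies is around 68, which is part of why the scale works as an industry benchmark.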
So if you really want to go all out, you could do some usability testing, [00:11:00] do a SUS score, do an NPS score, throw it all into Dovetail, and you would have an end-to-end picture of the testing of your product. And frankly, testing is so powerful because it helps you get signal through the noise.

There's so much information you process as a designer, a creative builder of products, an entrepreneur, or a leader in a large corporate organization. With these simple tools, these simple models and mindsets for testing with real people, you can go a long way. And this is the real truth: if you continue to test with your users, you don't go guessing; you actually know that they like it. Don't fall in love with PowerPoints; fall in love with user testing. If you do, it's very hard to get off track, at least not for long. Every time you test, you'll be brought crashing back to reality, and that's the power of [00:12:00] it.
So if you can bring your humble self to the party, you can just keep learning by listening to your users and build a truly great product. All right, I hope you've enjoyed our second-last episode of the design thinking masterclass. We've been chatting about how to test your product with recruits. And if any of this has fostered interest, jump over to bottomup.io, where you can get free masterclasses on design thinking and a whole lot more. All right, thanks for joining us here on the Bottom Up Skills podcast. That's a wrap.