A no-nonsense (well, some-nonsense) podcast about API design & development, new features in the world of HTTP, service-orientated architecture, microservices, and probably bikes.
Hello everybody.
Welcome to another episode of APIs
You Won't Hate, the podcast version.
Today I am joined by Tom Akehurst and
we are gonna talk all about mocking,
and not taking the piss outta people,
no, we're here to talk about API mocking, which
is a whole different kettle of fish.
So, Tom, hello.
How you doing?
Good, thanks.
Yeah.
I'm really pleased to, to finally,
finally manage to get this organized
and and get to speak to you on the show.
Straight in with the mocking.
Brilliant.
Yeah, we have, we have been trying to
record this podcast for about six months,
and I feel like you know more about
what's going on in my life than many
of my friends, just by the wide variety
of excuses I've used for why we've had
to cancel and reschedule this podcast.
Never a dull moment.
Yeah.
Yeah.
I hope, I hope your life's going smoothly.
What have you been up to lately?
I mean, running a startup is a bit of,
a bit of an all consuming endeavor.
So sadly, I've probably got, you know,
more to say about that than anything else.
There's a bit of background:
I'm the CTO and co-founder of WireMock
the company, which is based on
WireMock the open source project.
It's been around for about 14 years now.
And when I'm not dealing
with childcare issues.
That's kind of, you know, the
other thing that I'm doing.
Yeah, brilliant.
Running startups is a whole lot of fun.
Half the reason for this taking so
bloody long, and the fact that most of our
listeners might be incredibly surprised to
see this podcast popping up in their feed
'cause we've had quite a few
long gaps between episodes, is
just trying to run, trying to work on APIs You
Won't Hate, and Green Turtle, which is
like a software consultancy, and Protect
Earth, which is reforesting the UK.
It is a bit much, and all
three of them are going well.
It's nice, but oh my God, I can
not handle it. It's like running three
startups rather than one.
Basically, yeah.
And, and Protect Earth is nuts.
But we have just hired a new project
manager, which I'm really excited about.
'cause he basically is taking
over the, the day-to-day of
like finding, you know, finding.
Farmers to plant with and get in the
right trees in the right place and
taking over the day-to-day operations
of that more project based stuff.
And it leaves me to run around being
like, hey, we should buy that chunk of
land over there, and hey, you, come
partner with us, give us some money,
and all that sort of other important
business work, without the whole, we need
2000 of these trees over there by Tuesday.
So you can be the CEO delegating
all the grunt work to someone.
Yeah, exactly.
Yeah, although my brain is slightly
wrecked from Tuesday, two days ago,
when we were doing a first aid
course with a forestry module,
and it's just all of the creative ways
you can, like, maim and destroy yourself
using a wood chipper, or getting ticks
and Lyme disease, which I bloody hate.
That was the reason for the last delay.
I'm like, I'm sorry I can't do a podcast.
I've just found eight ticks
in me and I'm ill now.
Life is madness.
But talking about computers is fun.
I am going mad,
but I do love to hear what's
going on in the API community.
And so, so
glad to have you on
talking about WireMock.
WireMock is cool.
So in the world of mocking, mocking is,
is, is a really important topic, I think.
But I think it's one of those words that's
become so completely overloaded that no
one's ever talking about the same thing.
And so, just wondering if you can give
us a bit of an intro to, kind of,
yeah, where mocking fits in.
Like mocking appears in testing, but
it also appears in like API design.
And then some people talk about like
sandboxing and, and what's that,
or prototyping and what's that.
So that's quite a, a big topic to just
dive straight into, but could you give
us a bit of a lay of the land on that?
Yeah, sure.
The terminology is pretty tricky and
I think I've probably played my part
in in muddying the waters further
over the years, I'll have to admit.
So with apologies to kind of Martin
Fowler in the, the XP community
generally, who originally came up with
a very specific definition of mocking,
which was a, you know, essentially a
particular type of activity within
sort of the creation of
test doubles more broadly.
I guess test doubles were the
overall category, and then mocking
was a, a technique for doing, you
know, behavioral verification.
I think over time the word mocking
has sort of broadened and become
this sort of umbrella
term in itself to describe test
doubles, further muddying the waters.
Originally it was applying to kind
of mocking in code, so substituting
interfaces in code, whereas
in the context I'm talking about, and
in the context of WireMock, it's
about APIs over the wire, you know, so
network APIs rather than kind
of in code or in memory.
And I, I think there's a, I think
there's further, further sort of water
muddying aspects to this as well, and
that there's an older generation of tools
referred to as service virtualization.
Which is obviously incredibly confusing
because when, but when you say
virtualization, you tend to think of kind
of hypervisors and that kind of stuff.
so there's an older generation
of tools that do that.
And then, and that now also there's
this term that sort of entered the
mix of API simulation, which sort
of is used by, by newer tools.
And, you know, we use it to
describe our product as well.
I suppose
what I've kind of settled on as
the way I describe this is
kind of a continuum, where mocking
tends to be the closest to the
kind of Martin Fowler definition.
The stuff you do there, you're
probably driving it in code.
You're using mocks essentially
as sort of close test doubles in
unit or unit-like tests.
And, you know, it's like code driven
and bundled together. And also,
it's probably worth pointing out,
they're quite simple.
They tend to be kind of, you know,
canned responses, canned stubs, you know,
Hmm.
to sort of match requests.
Whereas at the sort of
other end of this continuum, you have
simulation, which is probably just a,
you know, warming over of service
virtualization as a concept, where the
thing you've built, the simulation, is
much more sophisticated and realistic.
So you have, you know, complex
kind of behavioral variations.
You maybe have the ability to generate
responses dynamically rather than
just sending static responses.
You have the ability to store and
retrieve state, pull things from
data sources, that kind of thing.
So I find that a
useful continuum to,
you know, think of these things along.
And where you sort
of choose to be along that continuum
kind of depends on, you
know, what you're trying to
do, what your use case is.
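For a concrete picture of the simple end of that continuum, a canned-response stub in WireMock's Java DSL looks roughly like this; the /bookings endpoint and payload are invented here purely for illustration.

```java
import com.github.tomakehurst.wiremock.WireMockServer;

import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;

public class CannedStubExample {
    public static void main(String[] args) {
        // Spin up a local mock API on a fixed port
        WireMockServer wm = new WireMockServer(options().port(8080));
        wm.start();

        // The simple end of the continuum: match a request, return a canned body
        wm.stubFor(get(urlPathEqualTo("/bookings/123"))
            .willReturn(okJson("{\"id\":\"123\",\"status\":\"CONFIRMED\"}")));

        // GET http://localhost:8080/bookings/123 now returns the stubbed JSON
    }
}
```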
Yeah, that makes sense.
I mean, yeah, the way
that I tried to talk about it
when I was working on Prism at Stoplight,
which is another mocking tool that's
no longer really being actively developed,
I don't think, the terminology
we used was like, yeah, mocking is
kind of simple, you make a request and
you're gonna get a response of something.
It may or may not be useful, and there
are ways to like increase the usefulness
and relevance of the generated thing.
I think we, we had like static and dynamic
responses, so static would just be:
this mock is based off the OpenAPI,
it said there's gonna be a name,
which is a string, and there's gonna
be an email, which is format email.
And looking at that, it
can generate a random name.
Oh no, the static would just be, like,
whatever the example was in the schema.
And the dynamic would be, alright, let's
try and generate a random name using faker
and generate a random email using faker.
So that was at least different,
not the same thing every time.
But there was no, like, logic there,
and no persistence there.
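As a rough sketch of that static-versus-dynamic split, but expressed in WireMock stub terms rather than Prism's OpenAPI-driven approach (the /users endpoints and fields here are made up), a static stub returns the same example every time, while a templated stub can generate a fresh value per request:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class StaticVsDynamicStubs {
    public static void stubs() {
        // Static: always the same example, like serving the schema's example verbatim
        stubFor(get(urlPathEqualTo("/users/static"))
            .willReturn(okJson("{\"name\":\"Ada Lovelace\",\"email\":\"ada@example.com\"}")));

        // Dynamic: response templating generates a new value on every request.
        // Templating is typically on by default in newer WireMock versions; naming
        // the "response-template" transformer keeps the stub working on older ones.
        stubFor(get(urlPathEqualTo("/users/dynamic"))
            .willReturn(aResponse()
                .withHeader("Content-Type", "application/json")
                .withBody("{\"id\":\"{{randomValue type='UUID'}}\",\"name\":\"Ada Lovelace\"}")
                .withTransformers("response-template")));
    }
}
```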
And so I think that for me, the
definition of mocking was kind of a
bit dim, without meaning to offend anyone.
That was the way I classed it.
And then sandboxing was like,
oh, and we can put a database,
spin that up in the background, so when
you save stuff, we'll store it and
we'll try and match up the fields if
possible and return that back to you.
But that was really blooming hard
when people would do things like, they
would, you know, send data as a raw
JSON object with name and email, and
then they'd return it wrapped in like
a data envelope, and then it wouldn't
know. Oh, you have to, like, do a lot
of weird stuff to try and figure out
matching requests to responses, even
with a database in the background.
So that was always a bit weird.
And then kind of prototyping was
another term, that was like, you
are just building an application
before you build your application.
And you might build a rubbishy
little application, just fired out
in Node.js or something, with the
grand scheme of it being done
in Java or something later on.
You might just work on something really,
really quickly and then make it later.
But that was always a big
faff, 'cause people would just
start writing enough code that they'd get
attached, and then they'd just wanna put that
into production, even though they skipped
everything, there's like massive gaping
holes everywhere and you can just break
the thing in the future, like, you know, just,
was it, mass assignment issues all over
the place and all that sort of rubbish.
So it's always been really interesting
trying to make any tools in this
space, because whatever you make,
somebody will want the other stuff.
And there are so many different
verticals. This mocking functionality could be
used in so many different parts of
the lifecycle. Like with WireMock,
where do you feel like your
tooling, where does your
tooling aim to be in terms of, like,
stage of the lifecycle, and what
sort of verticals are you going after?
So the genesis of the
tool is definitely in the,
you know, the mocking in
the developer's toolkit.
So the mocking to support
kind of test driven development,
you know, acceptance testing, test
automation, that kind of thing.
Hmm.
And, you know, so it fits in
that kind of developer-in-the-loop space.
I mean, the open
source tool is very much that sort
of developer in the loop, where you are
building functionality and
you are, you know, you are
supporting the construction of your
code with, you know, with mock APIs.
The, you know, our commercial
product is kind of deliberately
about broadening that base.
And so in one sense it's
broadening it in terms of, you
know, an individual organization's
software development lifecycle.
So where you want to start
deploying mocks into
these kind of mixed, integrated
environments, you know, where you want to
be able to isolate sets of related
services and test them, or where you
want to be able to kind of deploy, say,
a microservice into a real environment,
but you need to mock most or all
of its dependencies to, you know, to
be able to isolate it and test it.
Hmm.
When you, you know, particularly, you
know, I mean, microservice environments
are probably the best example of this.
You know, places where.
You have lots of different teams who are
all building APIs and they're all talking
to each other's APIs and everything's
changing you know, rapidly and somewhat
unpredictably and, and you know, we're
sort of about kind of trying to help
tame some of that, that kind of chaos.
But we are also sort of playing in,
I suppose at the top level,
I kind of think of there being,
you know, sort of two
kind of top level, I guess,
types of use case, with maybe
even different sort of buyers or
different sets of personas, different
user personas associated with them.
And I think one of them is
the one I just described.
So developers and, and test engineers.
And then the other one is, is kind
of I suppose API people probably
more, you know, the, the maybe
main listeners of this podcast.
So people who are building APIs
for consumption by the public, by
customers, you know, who are you know,
thinking about how to, to make APIs
you know, accessible, well designed.
You know, where their APIs are a kind of
competitive asset for their company.
Mm.
And I, and I think that, you
know, mocking gets used in
both of those, those contexts.
But what I've observed so far
actually is that the, the, the tools
get built for one or the other.
And I think,
Right.
Prism, mentioned
earlier, I think is an example of a
tool that was optimized for the sort
of API, you know, API building audience.
Yeah, for sure.
and I, I think Postman
probably falls into that.
Yeah, Postman's mocking functionality
falls into that category as well,
Yeah.
it's about supporting the design
process and supporting the supporting
sort of production of onboarding
materials and documentation for,
for end users and making it easier
for them to, to discover an API.
But like I say, then there's this
whole other sort of category of users
who are trying to do
serious kind of QA, you know, dev
and QA activities with mocks.
And, and I think, yeah, this, these
two sort of parallel kind of families
of products I think have, have grown
up around those, those two, you know,
quite different sets of use cases.
And I think we're, you know, we're
trying to, sort of, you
know, play in both of these.
You know, we're trying to ensure
that the product gives
you the affordances, yeah,
to be able to use mocking
to support API documentation, to
support the design process, but
also be then deployable into a
really demanding QA process.
That's, it's really hard to,
to walk that line, isn't it?
'Cause yeah, certainly when working on
Prism, I was around for the rewrite,
from it turning from a Go library
into kind of a TypeScript based
CLI, and it just, it did everything.
It tried to do every single
use case you could imagine.
And a lot of mock tools get into this.
They're like, because they
understand the schema of what's
allowable, and they're trying
to cover a few, like, instances,
and they also have, like,
traffic coming through it,
you get this, like, mocking? Brilliant.
Let's do that.
We've got the open API, let's
fire out some potential endpoints and
requests and responses. But then actual
traffic starts to come through, and you
can get into, like, validation of that.
Like, hey, are the requests coming
through to this matching the OpenAPI?
And so then you actually
get, like, validation responses.
But then you go, hey, what?
We can have a proxy where you
pass real traffic through,
and then we could start to say,
well, do those match the
requests and responses in the OpenAPI?
And then.
You can use that for QA, and you can
use that maybe in production,
if you don't mind having the, you
know, the actual critical path
go through some random library that may
or may not be particularly performant.
And then you start going into, like,
learning, of, well, actually there's not
an implementation here that
we're proxying to, but people are sending
lots of traffic through this, and so
let's make an API out of that.
Or maybe there is an implementation that
doesn't have any OpenAPI, so let's
make an OpenAPI out of the traffic.
And it starts to
become so confusing that whenever you
have a tool that can do anything, no
one knows what the hell to do with it.
Like, you've got that readme
and you look at it and go, blah,
what, what is all this then?
And so that was something I did with
Prism was just like, delete a lot of
that functionality from the roadmap with
the intention of bringing it back later.
Simpler.
But it just becomes really complicated
to try and deal with that many different
personas that many different use cases.
Well, and so, really open-ended
question here, but just, like,
how do you approach that?
'Cause you've got, you've got
OpenAPI, gRPC, all this stuff, and
you've got like testing and, and,
and using it in the design process.
Like, how the heck do you manage
any sort of roadmap and balance of
personas and functionality there?
Yeah, it's, it's difficult.
I mean, we, we experience
all of those problems.
I mean, I've been
working on WireMock open source
for 14 years now, more or less.
And, you know, so there's
a lot in there, and there's
a lot in there for lots
of, lots of different people.
And I think I probably only
recently have really seriously
started thinking about kind of
segmenting the audience up into, you
know, into different
user needs, and trying to
work out how to focus on those.
But it's, it's a tough balance.
I mean, when, you know, when you're
selling a product, in a way that's, that
can be a good thing, because it gives
you these signals, you know,
people will tell you what they want to
buy and what they don't want to buy.
So you can kind of use that
as a sort of easy filter.
Just turn that off.
No one's using that.
Don't worry about that.
Yeah, yeah, yeah.
Or likewise, you know, there'll
be one thing where you do some, you
know, almost, you chuck in some sort
of token wave towards it, and
then everyone goes, oh, that's great.
We want more of that.
And suddenly you find that you've pivoted
all your engineering effort to kind of
making this thing really, you know, the
kind of a central feature.
So yeah, having, having customers that
are willing to talk to you is kind of a,
is a really, really useful thing there.
The funny thing about doing open
source is that, although you
get, you know, if you have a healthy
project, you have contributors
coming along contributing things,
it's sort of very hard to, you
know, there's a lot of noise in the
signal in terms of what
people really need, you know, because
people will show up often with really
obscure things,
but push really hard
to get it into the code
Mm,
and you don't really know whether,
you know, is this a, is this a strong
signal that this is something people
really need or is this just kind
of one person kind of really making
a, making a big deal out of it?
I can't believe.
You get people kind of giving thumbs
up on GitHub and so on like that.
So I guess, you know,
there's some degree
of signal in those things.
But I certainly, I've
found running the commercial
product has been a lot more
informative in terms of
understanding what people
really need it for than
running the open source community
for years and years was, I guess.
Yeah, that's interesting.
Yeah.
'cause I mean, yeah, the number of
times I've heard like, oh, I can't
believe you haven't got random feature
you've never heard of in there.
And me and these other three thumbs
agree that it should all work.
But we all work at the same company
and we're the only people that care.
It can be quite hard
to dedicate your time.
It's a risk, isn't it?
They'll go and get all their
mates to go and stick a tick on it, and you
go, oh, this is really popular, isn't it?
And then.
Yeah, it's, it's it's hard to know really.
It's the, you know, the, the,
the perils of small numbers.
Yeah, that's funny.
Long pause, which I'll
hopefully remember to edit out.
'Cause I had a really good question
and a really great segue and then I
focused on doing a stupid bit instead.
What are we talking about here?
So, with OpenAPI, with OpenAPI,
it's like, that's the main
focus for me a lot of the time,
and so I'm super excited
about how you handle that.
'Cause I feel like,
talking about those different personas,
where you've got kind of the API
design part and the testing part, that
really affects the expectations of
what OpenAPI is and how it's used.
And so,
when you are trying to do basic mocking,
and we talked about the ability to
get kind of validation and other
behavior and stuff into there, I've
seen quite a lot of people requesting
the functionality, or suggesting the
functionality, or working on building
the functionality into their tools,
where the OpenAPI document
starts to get all of these
x- extensions, where they kind of litter the
OpenAPI, starting off with littering
it with, like, x-faker to let it know
that this should be a randomly generated
IP address. That seems pretty harmless.
But then it starts getting into, when
these three fields come in, they
should be multiplied together and
a tax rate should be put on top,
depending on the country, and, like,
all this really complicated behavior
that you can't really infer
just from looking at a standard OpenAPI.
It means someone's gotta start
building that somewhere.
And either you start wedging all of
that into the OpenAPI's extensions,
or you start littering the GitHub
repo with, like, random JavaScript
files that handle some of it,
and you kind of reference those somehow.
And whatever it is you end up
doing, there's the assumption
that you can, and are
interested in, changing the OpenAPI.
And obviously you support
gRPC and everything else;
my experience is more OpenAPI.
So I ask, like, how do you
handle mocking behavior in WireMock
without just littering an OpenAPI?
Or is that what you do?
So so yeah, so we do support a
little bit of the, the littering
through vendor extensions, but it's,
it's not something we've kind of
leaned into particularly heavily.
I mean, yeah, this is a, this
is genuinely a hard problem.
This is something I, I, I end
up talking about quite a lot.
The, you know, OpenAPI, you know,
in its unextended form
anyway, is describing, you know,
certain aspects of an API.
It's not a complete description of
an API's, you know, constraints or
business rules, or, you know what I mean?
There are all sorts of dimensions to the
way an API will operate that
you can't capture with OpenAPI
necessarily. When you take that approach,
the fidelity of the mock is gonna
be limited by, you know, what
can be expressed in OpenAPI
in the first place.
Hmm.
Actually, this kind of relates back
to this use case thing in the
first place, because I think there
are a few tools out there that
make it very easy to get from
OpenAPI to, you
know, a credible mock of some kind.
And like I say, if you're
doing this sort of bringing
documentation to life thing, then
that's probably all you need.
And that's great, you know?
You just want a
little something for that.
Bringing documentation to life.
I feel like the use case is just, we've
got documentation, which is a list of
endpoints with some sample requests,
some sample responses, and if you click
the try it button, then you can see the
sample response in a different format.
And you could even potentially copy
this URL or copy this curl command and
do it in your terminal so that instead
of looking at it through the docs,
you're looking at it in your favorite
editor, but that's
about as far as it goes.
Right?
That's just a like, poke, poke
this and get something back.
And that's, that's it.
Exactly.
Exactly.
And the something you get back
is very limited, you know?
So yeah, you know, there are all sorts of
fairly trivial things that
you can't really express very well.
It's like, yeah.
Lots of APIs will say, you know,
if you give one combination of
request parameters, then you'll get
a response of one shape back,
whereas if you give a different
combination, you'll get a
sort of slightly different response back,
and you can't describe
the relationship between that
pattern of parameters and
that type of response
in OpenAPI, for instance. You
know, you can't express, you
know, even trivial business rules.
I mean, the demo I give
a lot actually is the Train
Travel API, the OpenAPI that you
Yeah.
I think.
So you can
load that into WireMock
and it will generate a mock API.
And then we've got a little
demo app, that's like a, yeah,
simple train booking app that uses
it, and it kind of works,
but you can see very quickly that,
for instance, you can put in
a search date that's in the
past, and it will happily give
you back results, whereas
Right.
you couldn't do that in a real system.
So you want to be able to
kind of express this rule that this
date has got to be in the future,
Hmm.
but that's not something
you can say in open API.
So, the, you know, the approach that
we take, and it's an
imperfect one, but it's one that, you
know, works better than
just doing the sort of vanilla, you
know, one-way translation, is to kind
of say that the OpenAPI and the
mock are both partial descriptions
of the API, and they both describe
sort of slightly different aspects.
And so between them, you know,
you get more than either one of
them can provide on their own.
So we
maintain both side by side.
You can generate in both directions.
So, as I've already said,
you can upload an OpenAPI,
or you can, you know,
incrementally edit one and we'll
generate and amend your stubs for you.
You can also go the other way, so you can,
I mean, and this is probably
a whole other conversation, but you can
prototype by kind of using stubs as examples
and generating the OpenAPI from those.
Oh, cool.
Yeah.
But the, you know, the important thing,
and a common pattern, and this is
the one I demonstrate sometimes
with the train travel app, is to
think of the thing you've
built from an OpenAPI import
as a starting point.
And then you're kind of layering on,
you know, the sort of behavioral
richness, you know, to the
extent that you need for your use case.
And, you know, it might be that you
just want to, you know, just for your
API portal users, you want to be
able to give them several variations
of requests that they can make.
You know, you want the, you know, the
paginated, non-paginated versions,
you know, maybe you want, yeah,
to be able to retrieve an invoice
in several different states, you
know, with different
combinations of data, rather than
just the default one, you
know, all of these kind of things.
So you might wanna just do that and
layer on a little bit of richness, in a way
that's tricky to do purely from OpenAPI.
Or you might want to go quite a lot
further and model, you know, unhappy
cases, faults, stateful behavior, all this kind of
thing, in the way that you need if you're
doing sort of serious QA activities.
And particularly, like
I say, what I've
tended to find is that the sort of
broader the test that you're running,
so the more of the
system that it encompasses,
and thus, you know, the sort
of, the broader the scope of
an individual test case, the more
realistic the mocks need to be.
Yeah, I gotcha.
the further down the continuum you end up being.
Yeah.
If you've got, say,
you know, a collection of five microservices
that are collaborating, and you've kind
of built an outer shell of mocks around
that, so that you, you
know, don't have to spin up your other
500 microservices just to make it work.
Yeah.
Yeah.
Gotta wait for Kubernetes to load
just so I can test this one endpoint.
And, I mean, it's amazing
the extent to which you see
organizations trying to do that,
you know, coming up with
really complex kind of routing
systems that, you know, let you combine
local and remote services for testing.
And I'm sure some people make it work
very well, but it strikes me as being you
know, being a sort of maddening endeavor.
But anyway, the point I was
trying to make is that if you're
doing that, then
you might sort of be
testing a single operation from sort
of an end user's point of view, or
a caller of one API, that actually fans
Hmm.
out into, you know, this sort of
multi-step kind of stateful interaction.
And generally, if you want
to be able to do that kind of test, then,
you know, the mocks need to be, you
know, they often need to be stateful,
or at least they need to be sort of
pseudo-stateful, and model that
behavior a bit more realistically
than the sort
of stateless canned response
you can get away with if you're
doing, like, a narrow unit test.
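WireMock's scenarios feature is one way to get that pseudo-stateful behavior; a minimal sketch, using an invented payment resource, might look like this:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static com.github.tomakehurst.wiremock.stubbing.Scenario.STARTED;

public class StatefulMockSketch {
    public static void stubs() {
        // Initially the payment reads back as pending
        stubFor(get(urlPathEqualTo("/payments/pay_123"))
            .inScenario("payment lifecycle")
            .whenScenarioStateIs(STARTED)
            .willReturn(okJson("{\"id\":\"pay_123\",\"status\":\"pending\"}")));

        // Capturing it moves the scenario into a new state
        stubFor(post(urlPathEqualTo("/payments/pay_123/capture"))
            .inScenario("payment lifecycle")
            .whenScenarioStateIs(STARTED)
            .willSetStateTo("captured")
            .willReturn(okJson("{\"id\":\"pay_123\",\"status\":\"succeeded\"}")));

        // After capture, the same GET returns the new state
        stubFor(get(urlPathEqualTo("/payments/pay_123"))
            .inScenario("payment lifecycle")
            .whenScenarioStateIs("captured")
            .willReturn(okJson("{\"id\":\"pay_123\",\"status\":\"succeeded\"}")));
    }
}
```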
Talking about that kind of behavioral
functionality for the different
personas, I think, might kind
of break it down for folks that
haven't thought about this too much.
So if you are working on that
kind of API design or kind of
documentation, like DevEx,
side of things, then it might be really
helpful, yeah, it's helpful for you
to have some behavior in the mock, where,
basically, if you think of the, like, Stripe
example, where they used to say, if you
post a payment with a credit card of
like 4, 4, 4, 4, 4, then it will succeed,
and if it's like 4, 2, 4, 2, 4, 2,
then it'll fail, whatever it was.
You can get different
responses come back
from the request.
And that can be just a little bit
of looking at that and doing
a rule to give you a response.
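A sketch of that kind of rule in WireMock's Java DSL, using made-up card numbers and a made-up /payments endpoint (real Stripe test cards differ), might look like this:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class PaymentRuleStubs {
    public static void stubs() {
        // If the request body contains the "good" card number, return a success
        stubFor(post(urlPathEqualTo("/payments"))
            .withRequestBody(containing("4444444444444444"))
            .willReturn(okJson("{\"status\":\"succeeded\"}")));

        // If it contains the "bad" card number, return a declined payment
        stubFor(post(urlPathEqualTo("/payments"))
            .withRequestBody(containing("4242424242424242"))
            .willReturn(aResponse()
                .withStatus(402)
                .withHeader("Content-Type", "application/json")
                .withBody("{\"status\":\"failed\",\"reason\":\"card_declined\"}")));
    }
}
```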
Now, it doesn't necessarily have
to kind of persist that data to a
point where you are making some other
request to a list of previous payment
attempts and that one will show up.
That would also be useful,
but in the kind of API design
use case, that's just literally, we
are trying to build the API
client and the API itself in tandem.
And we would like the kind of
iOS developer, web developer to
verify that this will work at all.
They can make requests with the right
sort of data in the right place.
It might be the wrong person's bloody
name when they get the response,
but, like, fundamentally, does it
work at all, does kicking the tires help at all?
And once they say yes, you can then
quickly rush off and write some of that
code and then kind of build the
prototype, and they can get that
back and go, oh, that works lovely.
Thank you very much.
And you can go and tidy it up and
ship it, and use the OpenAPI as
the contract test to verify that
you haven't changed it since then.
So that's really helpful
for mocking in the kind of API
design world, but it's absolutely
useless for most sort of QA
functionality, where you're like,
you know, does a webhook
fire off if I send a failed payment?
You know, does the failed webhook
send off if I send this credit card, or
does something else go off if I do that
one, where you need it to be more realistic
and have more behavior in there, and
maybe talking about webhooks is a step
too far, but, like, you know what I mean?
Yeah.
No, we do webhooks as well.
They're a very popular feature.
Yeah, I think more people use them
than, than admit to it, it seems.
But could you mock an OpenAPI
and have it sending off, like,
mock webhooks in certain situations?
So actually, we don't, the converter
doesn't convert to our webhooks
automatically,
I'm ashamed to say, at the moment.
I mean, it is one of those things
that's been on the roadmap for a
while, but, you know, it's just
been less important than other things.
So we haven't done it.
There's no fundamental mismatch, though.
Mm-hmm.
I mean, the webhook definitions
you can define in WireMock are
not identical syntactically, but
they're capable of accommodating
what you can describe in OpenAPI.
So
That's cool.
No, whenever anyone mentions anything,
I'm like, oh, what about this?
Like you, I, I will always have an idea
on top of what it is you currently do.
I'll be like, but that's great.
But what about that?
Oh, sorry.
We haven't done that yet.
No worries.
I think,
just to pull on a thread
you started a moment ago,
the prototyping thing really is
really interesting, and actually
it's maybe a slightly sort of
controversial take, but I dunno, maybe
one you'd be sympathetic towards.
So we sort of promote the
idea of using mocks as prototypes.
Hmm.
I think, again, depending on which
of these two personas you're in, you
know, it's a kind of similar
activity, but sort of with
different terminology
and different emphasis.
So some of our first customers actually,
the reason they, they, they they bought
the product was because they had, you
know, backend teams building APIs and
then they had a bunch of different
front end teams like, you know,
mobile and web and so on like that.
And they were frustrated at
having to, you know, work in
this very serialized way because,
Hmm.
you know, they would
come up with, yeah, a new feature would
need a new bit of API, and they would
sit and whiteboard it and figure out
what they thought it should do.
Six months later, they
would go off and build something, and
Yeah.
put it in a test environment a few weeks later.
And only then could the teams
calling it really kind of get going,
building stuff, and then there'd be this
sort of slightly long, painful feedback
loop where, oh, actually, we forgot all
of this stuff in the design session,
and, you know, now the backend
team have to go and remediate it.
And, you know, using mocks
kind of solves two problems there.
You know, one, it means that
you can very quickly
have something that the front end
teams can start building against,
so they're not waiting for the
backend teams to ship something.
But also, by doing that, it gives
the opportunity for early feedback.
Yeah.
It means that, yeah, because this is the
thing, you don't really know,
until you've actually built something
against an API, whether,
I mean, I know there's lots of
other, you know, there's governance
tooling and there are lots of sort of
heuristics you can apply to APIs
to say whether they're good or not.
But it's like, you know, an ounce
of practice is worth a ton of
theory in this case.
If you try
and build something against an
API, then you'll really know if
it's fit for purpose, if
this design is any good.
this design is any good.
And so the earlier you can do that,
and also, and this is
the crucial point I was gonna make,
the more sort of
breadth of behavior of that API
you can cover in that exercise,
the quicker you're gonna
converge on an API design which
is really suitable.
Hmm.
You know, I think
one of the
downsides of using the tools that
sort of do, like, a one-shot OpenAPI
to mock generation is that you
lose the ability to go and exercise
all those kind of secondary parts.
So I'd say, particularly if you have a
very kind of state-heavy API, you know,
one where the response you get back,
you know, the structure of the
payload, for instance, varies quite a lot
depending on the state of the resource
that you've retrieved, then, yeah,
that's the sort of thing that, you
know, doesn't really,
it kind of gets lost in all of that.
You know, you get the one
representative response back, which
is probably just gonna be, like,
one state outta the set, you know?
And if you're using one of those
tools, versus if you actually, yeah,
if you have tooling that
lets you model that in a
slightly more sophisticated way,
so that you can get through all of that,
and you can, you know, raise the
level of realism of the experience that
the consumer gets, then you're
much more likely to produce
a better design more quickly, that, you
know, you can then ship something
with, rather than, yeah,
again, getting into that kind of, you
know, shipping stuff into a production-
like environment and testing it,
and only then realizing that
actually, you know, when we
get into the real stateful
version of this API, it's not what we needed.
Yeah, absolutely.
I feel like the key thing
is to make sure that it's not:
somebody asked for functionality, they get
told to go and look at their shoes for six
months until we've finished building it all,
we had to rush because people
were complaining at us,
so we built some rubbish and
we told ourselves it would be a
prototype, but actually we've ended
up just shipping that live, and it
doesn't really do what they said,
'cause of the whole six month
wait with no feedback.
And that's just a mess. Like, that,
that's basically, like, the default in API land.
That's what everyone does.
That's just what they do.
And it's why I've spent so much
time being like, API design first is
good, actually, you should try this.
'Cause it sounds like just being
the nerd, like, you had
the decade that everyone was running
around with, like, the religion of TDD or
the religion of anything else, right,
of just, hey, we've got this wacky
idea that doesn't really make a lot of
sense and it massively slows you down,
or, like, yeah, like, domain driven design
for everything all the time, and just,
it kind of sounds like the sort of
thing that those similar types
of people would say, but it genuinely
saves everyone a whole crap load of
time if you just actually sit down,
talk about what you think you might
want, and then make a mock for it.
And that's step one, which is good for everyone.
Whatever mocking tool you use, probably
you can do that step one, where you're
like, hey, here's, here's a thing.
The classic Prism example was using
the Petstore, 'cause it was pre Train
Travel API, and it's burned into my brain,
but the example of, you post a cat and
you get a dog as a response, right?
Technically fine, technically valid.
Like you, you send it a request
and you get something back
and you can do the most basic.
I have built an interface that when
I send this off, I will then get a
response and I can build something.
But if you want to get a
cat back instead of a dog,
you have to start adding in
these weird little switches.
And I think we did something like using
the Prefer header, where you kind of
add an HTTP header, and you start messing with what's
real and what's the, what's the thing?
That's always been one of my
kind of cardinal design principles,
actually: never do that.
You know, the
Yeah, yeah.
caller should never have to change
its behavior in order to interact
with a mock versus a real thing.
Yeah.
You know, you should always
try and find ways of using,
yeah, the signals that are sort of
intrinsic to the call that's being
made, to, you know, affect behavior,
even if that's kind of harder to do.
For sure.
Yeah.
'Cause I think all
of Prism's validation responses, if you
send a request and it's invalid, you
would get, like, RFC 7807 or whatever.
There's a new one now.
But you would get that kind
of standard error response.
And then under the hood, I think that
was, I did suggest that,
and I was thinking, well, of course
everyone's using
the standard for error responses.
And even if I didn't truly
believe that, I'm like, but they should.
So, you know, fuck 'em, it's their problem.
But we had to do something
of just, like, there has been
an issue, the
validation response wasn't defined,
let's take a guess at this.
And then they actually go ahead
and implement the error format,
and it's just, like,
a simple string, or text,
or HTML, or something.
And so again, you kind of
end up blurring the lines, and all
of the line blurring just always
felt weird, but it's hard to
get functionality out of a mock.
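For reference, a stub that takes a guess at an RFC 7807 style validation error, with an invented /bookings endpoint and invented problem details, might be sketched like this:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class ProblemDetailsStub {
    public static void stubs() {
        // A hand-written 422 in the application/problem+json format from RFC 7807
        stubFor(post(urlPathEqualTo("/bookings"))
            .willReturn(aResponse()
                .withStatus(422)
                .withHeader("Content-Type", "application/problem+json")
                .withBody("{"
                    + "\"type\":\"https://example.com/probs/validation\","
                    + "\"title\":\"Validation failed\","
                    + "\"status\":422,"
                    + "\"detail\":\"The request body did not match the schema.\""
                    + "}")));
    }
}
```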
And so I think we skipped from basic
dumb mocking, with that little bit of
faker to come up with some, well,
whatever's an email address, shut up.
We skipped straight from
mocking to the proxy.
We had functionality where
you could proxy to an actual
implementation, which, if it returned a 501 Not
Implemented, would return the mock,
whereas if it returned anything
else, it would return the real one.
And so that kind of blurred
the line between mock and
prototype and beta, really.
'Cause what we were basically saying was:
instead of launching into a prototype
where you're trying to get that fast
feedback, but actually it will still
take a while to make this prototype,
and actually you'll probably end up
getting attached to it and trying
to put it into production, instead
of that awkward, jack of all trades,
not very helpful,
middle of the road prototype-first
approach, we were saying, step one,
kick the tires with this kind of
dumb mock, and then you can use that
feedback to quickly go and implement
an endpoint at a time over here.
And it can be a slightly better kind of
build, which you can start to put
live a piece at a time, and so
instead of the entire API as waterfall,
bits of it were slightly more agile.
And they could use
the mock for something else.
It's a slightly different approach
to it, but, you know, with WireMock you can
set up a proxy stub that says,
when these criteria are matched, you
go and proxy off to this other address
rather
Nice.
than serving a stub.
So typically what you do is you
have one of those that's very loose
matching, at a low priority, and
then you stack a load of specific stubs
on top of that for the things that you want to override.
And essentially there's two use
cases: either, like, a load of
things you haven't built yet,
so they only exist in stub form,
or, because the other
thing you can do with
WireMock is really low level faults,
you know, you can, like, send reset
packets and drop the connection
and all that kind of thing.
Oh, cool.
And, you know, based
on matching criteria.
So if you want to use it as a
kind of fault injection engine, or to
inject unusual responses that you can't
make the real test environment produce, then
you can just kind of layer those on top.
So what WireMock will do is, you know,
a request comes in, it'll run down the list
and say, does this request match anything?
If yes, oh, okay, this is the go-
and-send-a-reset-packet response,
so I'll do that.
If not, it runs all the way down
the list, gets to the proxy and
says, okay, I'll go to the test
environment and serve a real response.
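A rough sketch of that stacking in WireMock's Java DSL, with an invented test environment URL and endpoint, could look like this: a specific high-priority fault stub layered over a low-priority catch-all proxy.

```java
import static com.github.tomakehurst.wiremock.client.WireMock.*;

import com.github.tomakehurst.wiremock.http.Fault;

public class ProxyWithFaultOverlay {
    public static void stubs() {
        // Low priority catch-all: anything not matched above is proxied to the real test environment
        stubFor(any(anyUrl())
            .atPriority(10)
            .willReturn(aResponse().proxiedFrom("https://test-env.example.com")));

        // Higher priority override: this one endpoint gets a low-level connection fault
        stubFor(get(urlPathEqualTo("/payments/health"))
            .atPriority(1)
            .willReturn(aResponse().withFault(Fault.CONNECTION_RESET_BY_PEER)));
    }
}
```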
That's good.
So you can kind of skip needing to
set up a separate proxy tool or similar,
and just say,
like, if this request comes from
Dave, give him a really bad day, and
it'll be a really slow response.
Or, like, for anyone, or you could literally
say, like, if this is coming
from a certain country, pretend
it's getting a very slow response.
Does it work
realistically well with the client if
the responses are that slow, or do we need
to go deploy another server over there?
That's really helpful.
Yeah.
No, it's a very
powerful way of using it.
And as I say, another kind of
sub use case of the QA thing, you know,
is the kind of non-functional, you know,
the possibility for non-functional
testing that mocking opens up.
You know, you can go and you can run
load
tests and soak tests, and, you know, things
that are about performance
and stability, but you can create a
set of environmental conditions that
are really hard to kind of
make real systems create on demand.
Yeah.
Yeah.
Quite a few years ago I was working at a
big retailer in the UK, and they
had the sort of, you know, the same
kind of Black Friday problems that
all retailers did, you know, where
all of their customers would show up on
one day and they would, you know, hammer
the hell out of their site, and all the
payment gateways would be, you
know, taking four times as long to respond,
Mm,
because they're all getting hammered as well.
And
Immensely stressful for everyone
Yeah.
involved if you're a techie.
Yeah, but the, the problem
you have when you are
testing in the run-up to Black Friday
is, it's not currently Black Friday, you
Yeah.
know, your systems are
not under load in the same way.
And, you know, the third
party systems you depend on
aren't under load, and they're not behaving
Mm.
weirdly and all that kind of thing.
You can very easily deceive yourself.
And I, I mean, I've lost count of the
number of times I've seen this happen,
you know, where organizations have
done what they think is really
extensive performance testing in
ideal environmental conditions.
And then the big day comes,
when everything's going to hell, and
something that they didn't test for
or predict happens, because, you know,
some crucial aspect of the environment behaves differently.
Yeah.
The great thing about all
this, about using mocking tools that
have these kind of non-functional
features, is that you can deliberately
degrade aspects of the environment
in a very controlled way, and find
Mm.
out ahead of time what's gonna happen.
That's brilliant.
Yeah, I mean, that was my
two years at WeWork, just
dealing with the most insane
problems of systems architecture.
My favorite example
of all is the one where there was
a third party API that everyone
forgot about, that handled, like,
adding new key cards for new
members of the coworking space.
And that was handled by some random
company, who had an SLA promising it
would always be less than a second,
never more, to get a response.
And we had, I think there was,
like, west coast, east coast, and
other, which was fine when other
was like Australia and London.
And then eventually it was like China
and Russia and Brazil and every single
other major country you can think of,
all on this one teeny little EC2
instance floating about somewhere.
And yeah, every time Australia had a
busy day on the first of the month when
all their customers turned up to, to
check in for their new contract, no one
in Los Angeles could go to the bathroom
because the key cards wouldn't work.
And things like that of just like,
well, we've tested all of our APIs.
It's like, right, but you did forget to
test that other dependency that you had.
And so yeah, you could just like save
a couple of requests and responses.
You don't actually care what
the actual thing coming back is.
Exactly.
You just want it to, you know, when
it does come back, be a valid response.
That's fine.
But you want to test what happens if
that takes two minutes to respond.
Like, it was, like, two minutes?
That's mad.
That's so slow that we've actually
kind of rejected it from our logging
system, because they said
it would be less than a second,
so two minutes must mean the
logging system's broken.
So you need to be able to,
you need to be able to test things like that.
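That kind of pathological latency is easy to dial in on a mock; a sketch with an invented key card endpoint and a two minute delay might look like this:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class SlowDependencyStub {
    public static void stubs() {
        // A structurally valid response that takes two minutes to arrive,
        // so timeouts, retries, and queues can be tested under realistic pain
        stubFor(post(urlPathEqualTo("/keycards"))
            .willReturn(okJson("{\"keycardId\":\"kc_123\",\"status\":\"active\"}")
                .withFixedDelay(120_000)));
    }
}
```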
I've spent a lot of time doing
this kind of performance and
resilience engineering, and those runtime
characteristics of the systems you're
building, in particular with
respect to APIs, are really important.
You know, the difference between
something taking, you know, half a
second to respond versus two seconds
in the context of a busy system,
Yeah,
is absolutely enormous.
And, you know, it can mean
the difference between stability
and instability, fundamentally.
for sure.
There are techniques, like sort of
chaos engineering, that you can
bring to bear in order to
try and make sense of these things.
But I think, I suppose with that in
particular, no criticism of it at all,
I think, you know,
if you're trying to build really high
scale, highly available systems, then
it's an essential technique. But
the problem is, there's
quite a big barrier to even, you know,
starting doing that kind of thing.
So if you can deploy these little tactics,
where, I mean, I often wonder how many
organizations have run chaos
experiments,
Yeah.
you know, these big expensive things where
you have whole teams of people involved,
and you have to have done all of this kind
of DevOps maturity work to even get to
the point where it's safe to do it,
and then all you find is, like, someone's
set a timeout too high on some API,
Yeah.
And you kind of think, you know, fine,
it's good that you found it, but maybe you
could have found that much earlier
in the process, and much more cheaply,
and, you know, with much less kind
of labor involved than this way.
And,
Yeah.
So I kind of like to think
about how, you know, how you can,
how you can kind of surface those
things as early as possible.
You know, it's, it's
like with the prototyping
thing, really. I think
a lot of this is about, sort
of, I mean, you know, I
admit, I was a sort of fairly
fanatical XP adopter back in the day.
I do still have this fairly
strong belief and bias towards
the idea that if you can make
feedback easy and early, you'll
end up with a better quality
product being produced more quickly.
Both of these examples, the prototyping
example and the fault injection:
if you make something that's sort
of trivially easy for someone to
discover the sort of correctness and
properties of, and so on like that, then
they'll do it, and, you know,
it'll get built into people's routines.
Whereas if you make it happen late,
and it's expensive and requires
lots of investment and upskilling,
then it probably won't happen.
There's an example I heard
from a friend about this recently,
where, you know, prototyping,
API prototyping, wasn't being used.
So he works in one of the large banks, you
know, and they have, obviously, you know,
huge sprawling kind of API-integrated
systems of various generations.
And there was a
team he was involved with that
were working on an API that sort of
fronted, you know, this sort of layer
upon layer of legacy technology,
you know, all the way back to kind
of bits of mainframe, I think.
And they, you know, they'd done,
yeah,
what I described earlier,
this kind of whiteboard exercise
of sketching out what this API
needed to do in order to build this
new product they wanted to launch.
And then, you know, they did all of
that, months and months of work happened,
you know, a first version of this API
was shipped into a test environment,
and then someone immediately went, we're
missing that ID field that we need
in order to be able to do any of this.
Yeah.
Yeah.
And this resulted
in a three month project,
What.
involving five teams or something
like that, in order to,
you know, be able to go and wire
everything up to surface this
one additional ID field.
Oof.
And, you know, people
doing, like, greenfield microservices
kind of go, oh, it's easy to
change things, it's no problem.
But,
Yeah.
yeah, I suppose the other thing I
always like to point out is that a
lot of us are not in that situation.
A lot of us build APIs as
modernizing facades over very
hard to change technology,
and so the cost of
getting it wrong is enormous.
Yeah, that's rough.
Good job pulling us back from the
brink, from me about to launch into
a tirade about how, at this point in
late stage capitalism, with the finance
bros taking over the world of tech,
we are currently actively firing
every single engineer that can't
be replaced with Copilot, instead of
investing in things like a group of
10 people whose whole job it is
to sit around and fuck things up
on the off chance there's
an issue they can discover.
Well, that's probably another
topic for another day.
But on the topic of AI, the last thing,
the only thing I really
wanna mention about that, is that
I've seen a few companies starting to
do, like, AI mocking, and I feel like,
how do you feel about that?
Is that something you
have on your roadmap?
I know you have MCP, that's a topic
for another podcast and it makes
sense for you to support all the
protocols that you possibly can.
But for AI responses we've talked
about, like static and dynamic,
we've talked about faker, we've
talked about kind of persisting
data and trying to wire it back up.
We've talked about trying to glue together
kind of behavioral stuff in a, in a
different mock, in a different format.
How do you feel like AI can help that, or
do you feel like it's a silly distraction,
hype, bandwagon, nonsense thing?
So I think it
can actually be helpful.
I agree that with AI, the
hype often runs, you know, a
bit ahead of, you know, the reality
of it.
But nonetheless, actually, I
think this is a domain where AI can,
you know, be genuinely helpful.
So one of the things we see a
lot, you know, in organizations that
use mocks kind of really extensively,
and, you know, traditionally, you know,
obviously the last three years we've been trying to build
a product which makes
lots of aspects of this easier, but sort
of prior to that, people were
kind of cobbling together mocking
solutions out of open source tools,
is that there's just a lot of toil involved.
You know, if you're in a microservices
environment and you've got 500 or a
thousand services in your organization,
and they're constantly changing,
and everybody who calls
them has built their own set of
mocks for everyone else's service,
you know, you've got a
Hmm.
sort of combinatorial explosion of mocks
being created and maintained, and
it's just a lot of toil, you know, to
maintain them, to make sure that you
are testing against mocks that
are true to the real thing
that you're gonna call.
People, quite rightly,
don't really wanna do it.
They wanna focus on their actual job,
and they want the amount
of effort they're devoting to that
to shrink, once they've got over the initial
sort of novelty of, like, hey, this
is this fun new thing to play with.
Hmm.
And it just becomes, you
know, another maintenance headache.
So I think it can really genuinely
reduce, you know, reduce that toil.
So I'll give you a few specific
examples of things that we're
working on and that we're
experimenting with at the moment.
So one of them is, like I
say, this big problem
of drift, you know, where real APIs
change and mocks don't get updated.
Tests are being run against,
you know, out of date mocks, and
they become untrustworthy.
You get false positives and then
failures happening further down
the line, which people don't like.
So ideally you want to be able to
automate, you know, the sort of
detection and remediation of those
mocks, so that, you know,
everything stays aligned and, you
know, your level of
trust is maintained in your tests.
So, you know, one thing
AI is good at is
kind of, you know, poking around
and experimenting with things.
And I find it particularly
good when you pair it with sort of
non-AI techniques for, you know,
validating things, for instance.
Okay.
AI plus OpenAPI actually is a
really, a really powerful combination.
So we also have a
validator built into the product.
So what you can do is, via
our MCP tool, you can say to an AI,
go and pull the latest OpenAPI for this,
you know, this API that we're mocking,
Hmm.
and now I want you to, you know,
import it into WireMock, so that
we've got the latest
OpenAPI. Now fire off a bunch of test
requests to everything in there,
and if there are any validation errors,
go and update the stubs in a way that
corrects for them, and then give
me a report on what you've changed.
Okay.
Yeah, so kinda using it for fuzz
testing on somebody else's API, based
on their OpenAPI definition.
You fuzz test the heck out of that
and then see what's broke.
Yeah.
Well, it's kind of, it's
kind of this closed feedback loop.
I mean, obviously, you know, AIs
hallucinate a lot, and if you just
say to them, here's a new
OpenAPI, go and fix my stuff,
sometimes it'll get it right and
sometimes it'll get it massively wrong.
So
Yeah.
you end up losing a lot of the
productivity benefit.
You are asking it to hallucinate, though,
in that instance. You're saying, like,
hey, can you go and hallucinate a
bunch of requests that may or may not
match the API, fire those all off.
That's literally what QA is.
That's, like, creatively coming up with
sometimes right, sometimes wrong,
sometimes absolute bullshit requests.
And if you're getting the AI to
be creatively stupid, that is a
perfect use case for it, in my opinion.
Yeah.
Well, this, I mean, the
other thing I'm playing around with
is this idea of an API crawler.
I mean, yeah, it's pretty much exactly
what you described. At the moment,
I mean, maybe it's QA people
or devs or anybody who
needs to interact with an API,
there's this process they often
end up following, which is sort of:
they'll get a little bit
of information about it.
You know, it might be, you know, just
some kind of folk knowledge from
somebody else in the organization,
it might be, you know, a really
sketchy, out of date Swagger document,
it might be, you know, sort of a
half-baked Postman collection or a
Mm.
doc or whatever, you know,
do you know what I mean?
You've got some,
Yeah.
So, something to start with, but it
doesn't really tell you the full story.
So you fire up an HTTP client, and you
get yourself an API key, and you
go in there and you start kind of
poking around and making requests, and,
Nice.
you know, via trial and error, you
build up a much fuller picture of
how that API works, until you can do
whatever it is you need to do with it,
you know, whether it's testing it or
whether it's integrating something
with it as a developer. And yeah,
that seems to me to be a very,
very automatable process as well.
You
Hmm.
And this is what I'm
playing around with: saying, hey,
here's a doc with,
you know, a bunch of, like,
URLs in it that you can go and call,
and I'll give you an API key.
Try and work your way through this
API, follow any links you find, you know,
try posting payloads that sort of look
similar to the ones you've read back.
Yeah.
Yeah.
and build up a picture of, you know,
what operations are available,
how you post to them, and so on.
And we've sort of rigged
that up, you know, it essentially
uses our prototyping mechanism.
So, as it's doing it, it will
feed things into a recording, that then
becomes a mock, you know,
that's generated from that exploratory
session that you've asked the AI to do.
And then that in turn
produces an OpenAPI, so
I like that.
You know, do you know what I mean?
It's like, again,
this kind of grunt work of
just going around and poking everything,
trying to make sense of it and turn it into
Yeah, for sure.
a useful, live, relevant piece
of documentation and knowledge
about this API, you know, you're
kind of asking the AI to do
that. And, like I say, because it's
ultimately making requests,
they will either succeed or fail.
You know, it's not like
the AI hallucinating is going to
degrade the quality of the output in
that instance, because, you know,
you're talking to a real API that
will, you know, either accept your
request and return you something real,
or it will just say, no, that's wrong,
I'm not accepting that.
Yeah.
Okay.
You know, both
of those are sort of, you know,
like, validating
information that
sort of avoids
things being hallucinated.
So I think that
That's really smart.
sort of thing's really useful.
It's a lot more useful than,
like, having a shared mock that
you give to all of your users,
and then, like, the main user profile
that comes back is, like, Deez Nuts or, like,
Hitler, none of which is very helpful.
Anyway, we have gone long and that
is my fault for bringing up the topic
of AI at minute 47 of the recording.
But I would just like to thank you
for your time and for having a really
interesting chat with me about mocking.
Before you go, how can people find you and
your awesome stuff that you're working on?
So, I'm on LinkedIn, Tom Akehurst.
The company is WireMock, and there's also
wiremock.org, which is the open source website,
and there's, like, links to the Slack
community and everything else on there.
Lovely.
We'll sprinkle all those
into the footnotes.
Once again, thank you very much and
cheers for listening in everybody.