What are the big 3 types of tests we use on our projects? How do we decide which to use? Listen in as we discuss this and more.
Two seasoned salty programming veterans talk best practices based on years of working with Laravel SaaS teams.
Joel Clermont (00:00):
Welcome to No Compromises, a peek into the mind of two old web devs who have seen some things. This is Joel.
Aaron Saray (00:08):
And this is Aaron.
Joel Clermont (00:16):
Aaron, one of the things you and I agree quite strongly on is the value of testing in our software. So I thought today we could take kind of a high-level approach to the different types of testing that are available in Laravel and how we choose what sort of test to write in a given situation, kind of just overall strategies.
Aaron Saray (00:39):
Yeah, there are a lot of different types of tests.
Joel Clermont (00:42):
I thought to kind of narrow it down for our conversation, we could identify basically three types of tests. At one end of the spectrum, we have what I'll call unit tests. Just to be clear on the definition of that in our case, we agree that a unit test is something that does not have any other dependencies in the system. Things like a database or things that will make an HTTP call and so on. It's literally running a function and getting a result. The second type might be called different things: integration test, feature test. But it's a step up from there. You're still within the confines of your framework, but now you're sort of executing more of the system. You're hitting the database, you're making a request that's coming into the framework and generating a response.
You might even be interacting with things like caches or other things like that. So that's sort of the middle ground. The really full-featured test, I guess if you want to characterize it that way, would be something like Dusk. An end-to-end test that is literally driving a browser just like a user would drive a browser, interacting externally from your framework code and testing it that way. Let me just ask you first, do you agree with my definitions? Are we on the same page there? Any clarifications you want to throw out there?
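[Editor's note: a rough sketch of the three levels Joel describes, as they might look in a Laravel project. The class names, routes, and assertions are made up for illustration, not taken from the episode.]

```php
// Unit test: no database, no HTTP, just a function and a result.
public function test_discount_is_applied(): void
{
    $calc = new PriceCalculator(); // hypothetical class
    $this->assertSame(90.0, $calc->withDiscount(100.0, 0.10));
}

// Feature/integration test: exercises the framework, hitting a route
// and the database, and inspecting the generated response.
public function test_orders_page_loads_for_a_user(): void
{
    $user = User::factory()->create();

    $this->actingAs($user)
         ->get('/orders')
         ->assertStatus(200);
}

// End-to-end test with Laravel Dusk: drives a real browser like a user would.
public function test_user_can_log_in(): void
{
    $this->browse(function (Browser $browser) {
        $browser->visit('/login')
                ->type('email', 'user@example.com')
                ->type('password', 'secret')
                ->press('Log in')
                ->assertPathIs('/home');
    });
}
```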
Aaron Saray (02:06):
Yeah. I mean, sometimes I tend to make a little bit of a distinction between an integration and a feature test, but those are still lumped together. I like to think of integration as something that integrates with one to many things, and maybe a feature is one result of user input and then what they'd expect for output. I think that is slightly different than those end-to-end tests, because you're taking away one variable, like a browser. Whereas with an end-to-end test with something like Dusk, you're actually interacting with how the browser might send data as well. But they're all just different levels, and I agree in general with most of what you're saying.
Joel Clermont (02:46):
Yeah. Some of these definitions can be a little squishy depending on how your team does it, or even if you're coming from a different background, different language, different test suite setup. That terminology can vary. Let's get to the heart of the matter then. We have all these different available types of tests we can write. What would lead you toward writing one type of test versus another type of test? Do you have any internal rules that guide that decision?
Aaron Saray (03:16):
Joel Clermont (03:40):
Aaron Saray (03:40):
I like to look for those patterns or understand a deeper reason why someone's saying something. I think what it started to come from is, I saw that in general, and this is a generalization, that some of those feelings and opinions were based off of how they were programming or what they were programming in or on or around. That is to say a lot of the backend developers were working with data and business decisions, and data structures and processes. So an idea like a unit test where we took the smallest amount of a decision and tested that decision through all of its permutations was particularly useful and exciting. The frontend developers at the time were working a lot with the browser and they didn't really care what was happening in the workflow, they cared what the user would experience.
So they would want to use an end-to-end type of test, or at least a frontend test, sort of exercising the browser and then receiving responses from maybe mocked endpoints or something. Because they were programming in something where that whole area was the most important part. After listening to that, I realized that they're both right and neither is right.
Joel Clermont (04:57):
Aaron Saray (04:57):
Just like most things in programming, right?
Joel Clermont (05:00):
Aaron Saray (05:01):
I tend to now take that mentality and apply that to what particular part of the process I'm working on. I can give a couple of solid examples. If I'm working on something which is maybe data-driven, or decision- or workflow-driven, I'm probably going to reach more for unit tests and execute those workflow [inaudible 00:05:27]. Or, if I'm writing reports for example, I'm going to spend a lot more time with the data, down to the smallest area I can sort of unitize, to get the different permutations of what my report will be. Because when you're working on a report there's no difference in how the user is sending it in; they're going to just send in some criteria. There's nothing that exciting about that, nothing unique about that. They're going to fill in some forms and expect some magic to happen in a data realm, and then they'll get some responses. We want to focus on the most important part there, which is that area.
On the flip side, if I'm going to work with something more complicated, like maybe merging OAuth profiles, or setting up a user experience where you sign up, verify your email, and make sure that you're logged in and maybe you can do a couple other things at the same time, I'm going to look for more of an end-to-end test, because that user experience is the most important part. How does someone flow through this? The things they're doing, although it sounds a little complicated like merging profiles, are not really that hard. You either do it or you don't. But the different ways that they can find themselves in that area, that's where I want to test, so I might focus more on end-to-end. To answer your question with a simple response: it depends.
Joel Clermont (06:55):
Sure. So the scenarios you shared were kind of more focused on the nature of the thing being tested, right?
Aaron Saray (07:05):
Joel Clermont (07:05):
Like, does it lend itself more to a unit test or does it lend itself more to an end-to-end test?
Aaron Saray (07:11):
Joel Clermont (07:11):
I was also thinking too as you were talking, some of it is dependent on the project and the code base, right? If we come into a project and there are zero tests and it's kind of thorny code, it may not even be practical to start with unit tests. Certainly to build up confidence, those end-to-end tests can get you there quicker than trying to tease apart a 4,000-line controller function to try to unit test some of that. Would you agree with that? I mean, depending on the maturity of the project?
Aaron Saray (07:47):
Yeah, I think that's a good point. A lot of times when we take on legacy projects people ask, "How do you write unit tests for something that was clearly not architected to be tested?" The answer is you write an end-to-end test around it, and then you know how this functionality will work. As you start to replace those pieces of just spaghetti monstrosity code with more services and small units, models, controllers, all that kind of stuff, the user experience and all the things that they do should always be the same. Those tests are basically your stopgap, and they say, "I've made a bunch of changes and nothing has changed." Which of course as programmers we love to try to sell, right?
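[Editor's note: the "write a test around it first" approach Aaron describes is often called a characterization test. A hedged sketch as a Laravel feature test; the route, payload, and response shape are invented for illustration.]

```php
// Characterization test: pin down what the legacy endpoint does today,
// so refactoring underneath can't silently change the user-facing behavior.
public function test_legacy_report_endpoint_keeps_its_current_output(): void
{
    $response = $this->post('/reports/monthly', [
        'month' => '2021-06', // hypothetical input
    ]);

    $response->assertStatus(200);
    // Assert whatever the spaghetti code currently returns, even if it's odd.
    $response->assertJsonStructure(['rows', 'totals']);
}
```

The point is not that the output is "right," only that it stays the same while the internals get rewritten into services and smaller units.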
Joel Clermont (08:31):
Aaron Saray (08:31):
I worked for five weeks and as you can see, nothing has changed, so I was successful. But that's a main reason why you might use that end-to-end testing as well. Then as you rewrite certain little bits, you kind of start getting in to write tests around those. One of the things that I also try to weigh, because we want to give good advice but we also want to talk about real-world stuff, is the length of time a particular test might take in comparison to its setup. It's great to set up a complex scenario through the browser and fill in a bunch of form fields, or target them with Dusk and say, "This 25-field form is filled in." Then we submit it and we go to a second and third page, and those all get gathered together for an end result where it sends off an AJAX request.
But when you want to test that through multiple different permutations, it might be good enough to test a few permutations on the frontend and then move to more of an integration or unit test level and just say, "Well, I know that the frontend works, because a couple of my tests passing in data show that the data comes in predictably. Now I'm going to run something in a much faster way, without the browser, to run through all the rest of the permutations." Obviously a silly example, but if I was looking at the state dropdown I could choose Alaska and Alabama, and make sure those two submit fine. But I'm not going to go through all 50 states in the browser; I might go through the 48 remaining with a unit test or something like that. Which, of course, is a contrived example.
Joel Clermont (10:14):
Well, and you mentioned speed. It's not just the speed of writing the test but also the speed of running it. The end-to-end tests typically are slower to run because they are driving a browser, and nobody likes it if your test suite starts taking 10, 15, 20 minutes. It just gets a little ugly. So that's part of the calculus as well.
Aaron Saray (10:33):
I think the most important part, though, is that you actually decide to write some tests.
Joel Clermont (10:41):
Aaron Saray (10:43):
Because these are ways to kind of validate the work that you're doing, to make sure it's still doing everything you would expect it to, and it allows you to change things faster in those programs where we want to go faster. I will acknowledge that it does take a little bit of time to write some tests, especially when you're learning about testing. First of all, what tests do I write, and how do I know it's a useful test? Then how many is the right amount of permutations? And all these different things. But just like every other skill, it's great once you know how to do it, and it's important that you do this too.
Joel Clermont (11:27):
If you were to talk to my family or close friends you would learn that I'm no stranger to offering up a pun. I like, I guess, what you could call maybe dad humor, dad jokes. And I really enjoy sharing a pun that gets a groan out of somebody. I don't know why, maybe I'm a bad person, but I find it enjoyable. However, I noticed one place where I really don't like puns at all, and that's the nightly news. For some reason, there's always one story where the very professional newscaster makes a totally unnecessary pun, and I'm just like, "Come on, why are you doing that?" The other night they had a story on the nightly national news about a shortage of chicken wings and the poultry selection that was going to be available to restaurants. And it just made me mad, and then I thought, "But I like puns, so why does this make me mad?" Aaron, I need your advice. Why is it that that bothers me so much? And do you relate to that at all?
Aaron Saray (12:37):
Well first of all, what is the nightly news?
Joel Clermont (12:41):
The nightly... Well, you don't like the 5:00 or 6:00 news?
Aaron Saray (12:45):
I'm just kidding. I don't run into many people who watch the news. Did you sit on your davenport and open up your icebox?
Joel Clermont (12:55):
Aaron Saray (12:55):
I don't know, I'm trying to think of old things. I guess the puns bother you because they're really not funny though.
Joel Clermont (13:08):
Oh, they're not.
Aaron Saray (13:09):
Because a good pun is something that came to you, whereas there were some people writing this. First of all, there was a guy who wrote this or a woman who wrote this, right?
Joel Clermont (13:21):
Aaron Saray (13:21):
Second of all, the news anchor then saw it and might have even practiced it, and then they used their voice to say it. "And that is why he turned on the light," you know? Like, ah.
Joel Clermont (13:36):
That's true, I don't like news person voice.
Aaron Saray (13:39):
And that joke was approved and reviewed, and HR looked at it and it was like, "Ah." The amount of money you spent on this pun that wasn't even that funny, just tell me the news.
Joel Clermont (13:53):
Yeah, I think you hit on some things there that hadn't occurred to me before. Because that kind of artificial cadence that a newscaster has, mixed with trying to be funny with a pun, I think that combination rubs me the wrong way. Yeah, the forced nature of it, all the planning that went into it. The other thing I thought of is that generally the stuff right before and after that lighthearted news story is pretty depressing. It just always seems a little inappropriate to be trying to be funny. But, yeah, that was my observation. It just makes me so mad and I don't know why.
Aaron Saray (14:29):
Need a little help getting your test suite in order and efficient?
Joel Clermont (14:34):
Then head over to our website at nocompromises.io and request a free consultation.