No Compromises

Stepping into an unfamiliar codebase with a long history can be a challenge. Aaron and Joel share some tips on how to get started and build confidence that you're making things better.
  • 00:00 What is a "legacy" code base?
  • 01:00 Getting those first tests in a legacy code base
  • 03:05 Starting with unit tests can be hard
  • 04:20 Be extra careful with external APIs
  • 07:15 Onboarding a new project
  • 08:35 Getting more specific on the first few tests
  • 10:38 Silly bit
Sign up for our newsletter of Laravel tips.

Creators & Guests

Host
Aaron Saray
Host
Joel Clermont

What is No Compromises?

Two seasoned salty programming veterans talk best practices based on years of working with Laravel SaaS teams.

Joel Clermont (00:00):
Welcome to No Compromises, a peek into the mind of two old web devs who have seen some things. This is Joel.

Aaron Saray (00:08):
And this is Aaron.

Joel Clermont (00:16):
In our years, Aaron, we've inherited, I guess, what you could call a legacy code base.

Aaron Saray (00:21):
That's a nice word for it, legacy.

Joel Clermont (00:23):
Yeah, and that means a lot of different things. I think one common trait I've seen in a lot of legacy code bases is that there are little to no tests as part of the code. All right. You're not only coming into something that's undocumented and old, but if you make changes to it, you're always afraid you broke it. I know we've sort of started there in the past: when we're modernizing a legacy code base, one of the things we start with is getting some semblance of test coverage. Just want to throw that out to you: what's a good strategy you find for diving into a code base like that and getting those first few tests written?

Aaron Saray (01:07):
Sure. I think one thing to keep in mind is what's important about this code base. This code base probably supports a business, and therefore it supports business processes. Those are the things that we want to make sure do not break if we're going to make some sort of change. Our job is to come in and make the code better, but really the business cares about whether the business is still working the same way, or whether it's faster, that kind of stuff. But it can't not work. You can't just say, "Well, I was upgrading PHP and now you can't sell your product, but it's fine."

Joel Clermont (01:42):
You can't checkout but it looks better.

Aaron Saray (01:44):
Yeah. But it's got the newest Tailwind, so it's fine. I think that the first important thing is to make sure that you are considering adding tests that test business processes. That usually means some sort of end-to-end or feature test, or something like that. You can accomplish that in a number of different ways. You can use a tool like Selenium or Cypress, and kind of approach it directly through the browser. With Selenium, you could do some sort of programming, even in PHPUnit, and run those tests. Or you can use their plugin to record them.
With Cypress, you can maybe hand it off to another team member who knows JavaScript, if you don't know it that well. You can write those JavaScript-based tests with Cypress and see it happen in multiple different browsers. You can even use Laravel Dusk or something like that, again, to kind of put it alongside the project and see how that would test the end-to-end processes. I think that would probably be a little bit later on down the line, once you actually have maybe some Laravel framework scaffolding.
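The business-process test Aaron describes could be sketched in plain PHP, framework-agnostically. This is just a minimal sketch of the idea, not a real Dusk, Cypress, or Selenium test: the routes and the injected `$http` callable are hypothetical stand-ins for a real browser driver.

```php
<?php
// A minimal, framework-agnostic sketch of a business-process test.
// Assumptions: the routes ('/', '/cart/add', '/checkout') and the
// injected $http callable are hypothetical; in a real project this
// logic would live in a Dusk, Cypress, or Selenium test that
// drives an actual browser.

// Runs a simplified "checkout" flow through an injected HTTP
// client and returns true only if every step responds with a
// 2xx status code.
function checkoutFlowPasses(callable $http): bool
{
    $steps = [
        ['GET',  '/'],         // homepage loads
        ['POST', '/cart/add'], // item added to cart
        ['POST', '/checkout'], // order placed
    ];

    foreach ($steps as [$method, $path]) {
        $status = $http($method, $path);
        if ($status < 200 || $status >= 300) {
            return false; // the business process is broken
        }
    }
    return true;
}

// Usage with a fake client standing in for the real HTTP layer:
$fakeHttp = fn(string $method, string $path): int => 200;
var_dump(checkoutFlowPasses($fakeHttp)); // bool(true)
```

Injecting the client keeps the flow definition testable without a network, which matters later in the episode when the hosts discuss not hitting production services.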

Joel Clermont (02:54):
Right, yeah.

Aaron Saray (02:54):
But it's still possible to use it alongside of your project and test it that way.

Joel Clermont (02:59):
Sure. Yeah, that's very practical. I agree that's a good way to do it. The other thing I've found in a lot of these legacy projects is that the methods you're trying to test are pretty interconnected and big. Even the idea of starting with unit tests is just sort of hard to contemplate, because, "Well, where do I even start with this?" It just becomes this whole exercise in roadblocks. So with the outside-in or integration testing, where you're driving a browser just like a user would, there's a lot of return on that.

Aaron Saray (03:34):
Well, I think it's hard to even determine the first time through a code base what a unit of work would even be.

Joel Clermont (03:40):
Right, yes.

Aaron Saray (03:41):
When you click through the interface and it did a thing, you did a checkout for example, you can be reasonably certain it did enough of the thing to be successful, and that's what you can write a test around. Because also, when we're making our changes, I think we've all done it too, where you make a bunch of changes. You test it by hand and you make that one last quick change, but you're like, "Obviously, it didn't affect anything, so I'll just commit this and it's good to go." And lo and behold, it broke everything. So even having these little tests that you can run in an automated fashion over this legacy code base is good.
Now, there are some things I think that we have to worry about, and I'll pose some of these questions to you, Joel, because I know you're very familiar with this. How would I go about making sure that I'm not executing this code in such a way that it's hitting other production APIs? Because when I download it locally, how do I know it can run locally?

Joel Clermont (04:40):
That's certainly a place to start: you have to be able to run the code locally. But to your point, maybe this legacy code has things like sending an email, or pinging some order update service, or whatever, and those processes are hard-coded. There's no built-in mechanism to say, "Ooh, we're in a test environment now, so we'll send this to Mailtrap," versus the live mail server. So, yeah, that's really important to figure out. I don't know that I have a perfect formula for doing that, but a lot of times... In fact, I was just looking at a codebase with a checkout-type process that did some of these external third-party interactions. I just looked at the code.
Like, when I click this button, what is it going to do? I kind of drilled into it and it's like, "Oh, that's going to send an email to the user. It's going to send an email to customer service. It's going to queue up a notification in Amazon and that's going to do a bunch of things." I was able to figure out a bunch of places to wrap those things, extract them into environment variables, and then I can more safely test and know that I'm not messing with production data. But, yeah, especially when you're first stepping into the codebase, you have to be meticulous to make sure that isn't happening. That's not a surprise you want.
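The "wrap it and extract it into environment variables" step Joel describes might look something like this. A hedged sketch only: the `MAIL_HOST` variable name and both hostnames are hypothetical placeholders, not anything from the codebase being discussed.

```php
<?php
// Sketch of extracting a hard-coded mail host into configuration.
// Assumptions: the MAIL_HOST env var name and both hostnames below
// are hypothetical placeholders.

// Before: the host was hard-coded deep in the checkout code.
// After: one function resolves it from the environment, with a
// safe non-production default so a local run never touches the
// live mail server by accident.
function mailHost(array $env): string
{
    // If the app explicitly configures a host, honor it.
    if (!empty($env['MAIL_HOST'])) {
        return $env['MAIL_HOST'];
    }
    // Otherwise, only use the live server in production; every
    // other environment falls back to a Mailtrap-style sandbox.
    return ($env['APP_ENV'] ?? 'local') === 'production'
        ? 'smtp.example-live.com'     // placeholder production host
        : 'sandbox.smtp.mailtrap.io'; // test inbox, safe default
}
```

Passing the environment in as an array (rather than reading a global) is what makes this safe to unit test before the rest of the legacy code has any test coverage.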

Aaron Saray (06:02):
I think there have been a couple of examples of really difficult pieces of code where I haven't necessarily been able to follow through everything. In cases like those, if you can get them running in something like a Vagrant-based virtual machine or Docker, there are ways to turn off network connections and stuff like that too.

Joel Clermont (06:21):
Oh, okay.

Aaron Saray (06:21):
Turn off outgoing connections. It's obviously harder to explain on this quick podcast, but you could google how to disable outgoing network connections: set up a firewall that stops all outgoing connections, and therefore when you're doing your work, you'll know that it was never actually hitting anything. I mean, it's not great, especially if we have a Mailtrap account we want to go and connect to, because it'll stop that too. But at least you know it won't ping the production Stripe account with some big charge or something.
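An application-level cousin of the firewall trick Aaron mentions is an allowlist check before any outbound call. This is only a sketch, with illustrative hostnames; a real OS-level firewall (iptables, pf, or Docker network settings) enforces this far more reliably than in-process code can.

```php
<?php
// Application-level version of "block all outgoing connections":
// before any outbound call, check the destination host against an
// allowlist. Hostnames here are illustrative; an OS firewall is
// the more reliable enforcement mechanism.

function isOutboundAllowed(string $host, array $allowlist): bool
{
    // Case-insensitive exact match against the allowlist.
    return in_array(strtolower($host), $allowlist, true);
}

// Only the test mail trap is reachable; everything else is denied,
// so a stray call to a payment API fails fast instead of charging
// a real account.
$allowlist = ['sandbox.smtp.mailtrap.io'];
var_dump(isOutboundAllowed('api.stripe.com', $allowlist));           // bool(false)
var_dump(isOutboundAllowed('sandbox.smtp.mailtrap.io', $allowlist)); // bool(true)
```

Wiring a check like this into the one place HTTP calls go out (once you've found and wrapped them, as Joel described) gives you a log of every external service the legacy code tries to reach.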

Joel Clermont (06:52):
Yeah. Well, depending on how thorny the codebase is, that might be a good way of just kind of doing a first pass to be like, "Oh, what are all the things I'm going to bump into?" Extract those into variables that are environment specific, and then after you think you have it all, do it one more time with that and make sure it didn't hit anything you weren't expecting. I like that approach. I expect a full report on my desk on how to set that up later today.

Aaron Saray (07:16):
In the past, someone has shown me code when I first joined a team. We went through a whole process, and maybe I didn't have a chance to record it and follow along, but I wrote down some notes. One of the things I like to do is take that time and invest it in writing one of those end-to-end tests, duplicating what they just did when setting all that stuff up. I gain two things from that. One, I've now written a test, which is good. And two, I understand what they did. While it's fresh in my mind, I can ask some questions when I'm setting up my scenario. Like, "Oh, I tried to do this. Maybe I missed something or whatever." Versus just hoping I caught everything when they showed me real quick.

Joel Clermont (07:59):
I like that idea of reinvesting the time that you spend to learn something into something that's automated. I've even done it when I'm not quite as diligent as you in that scenario. At least add it to the readme, or build a test plan document, where it's like, "Okay, I'm going to come back and write more tests for this. But for right now, here's the smoke-testing happy path just to make sure it works." And capture that, so the next person, or me next week, knows how to write those automated tests.

Aaron Saray (08:30):
Right. Definitely me next week. After a long weekend you're like, "Wait, what did I know about this project?"

Joel Clermont (08:35):
I've been there. We've shared some good advice, I appreciate that. But let's get more specific. We've talked in general about the style of test we'd do. But literally, what might be the first test you write, or the first thing you do, when you're introducing testing to a legacy codebase?

Aaron Saray (08:55):
That's a really good question. What I think I do is start with the very simplest thing, to make sure I can just do that. The very simplest thing is, I load up the homepage and then I click the about or contact page, and I validate that that worked. And maybe if it's a contact page, I submit the contact form. Now, maybe this is a whole site for checking out, doing e-commerce or whatever, and the contact form is one small little part. But now I've developed one test that's reasonably simple, and I can start to build off of that. The second area I'll focus on is either creating an account or logging in. Supporting that whole authentication piece is pretty important too. I would say end users aren't happy with bugs, but they're really unhappy if they can't log in or they can't create an account.
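Aaron's "simplest first test" could be sketched as a tiny smoke check over a handful of top-level pages. The paths below are hypothetical, and the injected `$fetch` callable stands in for a real HTTP client or browser driver, so the logic itself stays testable offline.

```php
<?php
// Sketch of a first smoke test for a legacy site: hit a few
// top-level pages and report which ones fail. Assumptions: the
// paths are hypothetical, and $fetch is a stand-in for a real
// HTTP client or browser driver returning a status code.

function smokeCheck(callable $fetch, array $paths): array
{
    $failures = [];
    foreach ($paths as $path) {
        if ($fetch($path) !== 200) { // anything but 200 OK fails
            $failures[] = $path;
        }
    }
    return $failures; // an empty array means the smoke test passed
}

// Start with the easy wins Aaron describes (navigation, then
// login), and grow the list toward the real business processes.
$paths = ['/', '/about', '/contact', '/login'];
```

Returning the list of failing paths, rather than a bare pass/fail, makes the first broken page obvious when you run this against a codebase you don't know yet.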

Joel Clermont (09:52):
Right.

Aaron Saray (09:54):
So those are the two. It's just very easy top-level navigation and then maybe login and stuff like that. Then after that, once you've got all that going, that's when you have to engage with the business stakeholders again and say, "What is the most important thing that happens on this website?"

Joel Clermont (10:14):
I like...

Aaron Saray (10:14):
So, if it's buying this thing, then you go test that checkout.

Joel Clermont (10:16):
You're kind of building momentum. You're starting with a relatively easy test, which you're going to have some hurdles just to get it running in a test environment. But you're not biting off the largest piece of the app, maybe the most important piece of the app, to start. But starting simple and kind of letting that momentum build and add more tests over time. Nice.
I've been spending a little more time than in previous years doing schoolwork with my kids, one of whom is in first grade. One of the things that's really hard to explain to a first grader is why words have letters in them that you don't say, all right? Recently the word was should. Do you hear an L in the word should? I don't hear an L in the word should.

Aaron Saray (11:13):
Should, should.

Joel Clermont (11:16):
No, -uld.

Aaron Saray (11:17):
Yeah.

Joel Clermont (11:18):
No, so you see the dilemma? But then it gets even weirder, because that same day we were doing something related to the human body and it pointed out shoulder. So you have the word should, which is spelled the same way as the first part of shoulder. Now try explaining that to a kid: why they both have Ls in them, but in only one of them you pronounce it. I guess the question for you, and I've been thinking about this probably more than is healthy, but I only know English. I don't know if other languages are as messed up as English, but have you bumped into any weird English-related things that have really either annoyed or confused you?

Aaron Saray (12:03):
Well, I think I can say that I don't understand how to say and spell soldier. It sounds like there's some Js and stuff in there.

Joel Clermont (12:16):
Yeah, okay.

Aaron Saray (12:17):
Or judgment. It sounds okay, but why is there an E in there? Certain words we have Es in. But, well, I guess the one that has always been on my mind, because so many people use it incorrectly and it just drives me absolutely insane, is to, too, and two. Like, you know-

Joel Clermont (12:40):
Like the '90s R&B band, Tony! Toni! Toné!?

Aaron Saray (12:46):
Who are you? No, T-O, T-O-O and T-W-O.

Joel Clermont (12:51):
Yeah.

Aaron Saray (12:52):
We couldn't think of different words?

Joel Clermont (12:55):
The thing with silent letters, I was puzzling on it and I actually got into this rabbit hole on Wikipedia, of all places. And I learned an interesting fact: in English, the only letter that is never silent is, drum roll, the letter V. Every other letter, they had examples where it was silent. So don't try to think of one, because I did and I never-

Aaron Saray (13:25):
I know. I really wanted to think of one. No, I'd say that does sound like a challenge, explaining it to a first grader, or to many of the adults that listen to our podcast as well. Especially if English isn't your first language, I feel bad there too. I guess I don't really like silent letters, and that's why you'll find me ending most of my conversations like this: bye.
I know sometimes you wish that you could hear from us weekly. If that's the case, I've got something for you.

Joel Clermont (14:01):
Head over to our website, nocompromises.io/tips to sign up for a free weekly newsletter.