Joel Clermont (00:01):
Welcome to No Compromises, a peek into the mind of two old web devs who have seen some things. This is Joel.

Aaron Saray (00:08):
And this is Aaron.

Joel Clermont (00:16):
It's another special episode of our series we've been doing on how we work on projects. Today we're going to take maybe a slight detour and talk about a special kind of project which is the legacy project. What I mean by this is a project we are not starting fresh. It is coming to us with history, maybe you could say with baggage and maybe with or without any sort of documentation. I know you and I have a way that we kind of onboard a project like that and approach it. And I thought it might be useful to kind of talk that through today.

Aaron Saray (00:55):
Sure. Yeah, I think it starts out a little boring, you know, the whole process, but then it really starts to kick off and then you get comfortable with it. I think the first thing we always talk about is planning, right? As programmers we're like, "I just want to get in there, look at their code," or whatever. That's great and that's going to be part of it, but the very first thing we do is develop a plan of what we even need to do. We're going to talk about those steps. For this particular project, what are the steps that we need to take? Then we try to understand how the project even works. That can be a mix of talking to the client and asking them to do a demo for us, or writing our own documentation as we go through, or a mix of that. I guess the first thing is a demo. Sometimes we're not the best at this, but we need to get better, and this is what I want to do: make sure that we can see a whole demo of all the features and ask questions along the way. In reality, what a lot of times happens is we just get a demo, or a recording of a demo, of a couple of the happy paths, but that's better than nothing, right?

Joel Clermont (02:08):
Yeah.

Aaron Saray (02:08):
So the first thing is to kind of understand how the project works.

Joel Clermont (02:13):
Yeah. And just related to that, one reason that can be beneficial, of course, is it gives you context and you kind of understand what the more important parts of the application are; those tend to come up in the demo. But it can also help when they're clicking through and there's four menus: "Oh yeah, we don't use those, we don't even know what those are for." If you didn't do that demo and get that context, you might struggle. Like, "Why doesn't this entire menu work over here?" "Oh yeah, we haven't used that in five years," so nobody ever took it out and it's broken. We could have wasted a lot of time trying to figure that out without getting that context.

Aaron Saray (02:49):
Yeah. I guess I probably said it in passing, but I'll say it again because it's really important. The biggest change or the biggest difference that I've made is to record these demos when they're happening, because then later on I can go watch them again, at least for me. I was going to say you're going to forget, but at least I know I'll forget everything I just saw almost immediately after I've seen it.

Joel Clermont (03:10):
Sure. No, I agree with that.

Aaron Saray (03:12):
So I keep a collection of the videos of some demos and you can watch them again. Or, in some of the larger ones, I'll admit to this, I've even kept a note with it that says at what minute mark a certain feature was covered, so the next time I watch it I can jump back and forth. I'll tell you why that really matters. On a project I just did recently, their test setup was very poor, so it was a combination of there being many, many steps to set up a test account, and sometimes that didn't work in their test environment. And I was rebuilding some of these interfaces, so there wasn't even a good chance I could go in there and look at the interface again. Luckily, I had this video, and a good portion of the stuff I did was based on going frame by frame through some of the videos, seeing what's in the dropdown boxes for options and stuff like that. It wasn't perfect, I later compared it to some of the code too, but it saved me a lot of time having those videos to go through.

Joel Clermont (04:21):
Yeah, absolutely. Once you get that demo, I tend to also look at docs. Maybe we've just had a really unfortunate series of projects, but in my experience it's pretty rare to get a project handed off that has robust documentation. There might be a README file and it might be-

Aaron Saray (04:42):
Or docs, yeah.

Joel Clermont (04:42):
Or docs.

Aaron Saray (04:42):
I was going to say or any docs, yeah.

Joel Clermont (04:46):
The README is like the default Laravel installer README. But generally speaking it's pretty rare to get docs that tell you everything you'd want to know. I'll even admit that for our projects it's really hard to capture every single piece of knowledge in documentation. But that'd be the next place we go.

Aaron Saray (05:06):
I think it makes sense that you mentioned the README. The way I look at documentation for projects, and I'm not going to go too far into this, is that the README with the code should be the technical steps to get that code running and/or deployed. Then a wiki in GitHub or Confluence, or something like that, would be the area where you put your business stuff.

Joel Clermont (05:27):
Sure.

Aaron Saray (05:27):
So your domain requirements, understanding how the business works and stuff. If you put that in a code base... programmers, you have to discipline yourself to write that, so it needs to be a separate process. Also, if you're putting too much technical stuff inside of the Confluence or wiki, it tells me that you don't have stuff automated properly.

Joel Clermont (05:52):
Yeah. You mentioned how to get the code running, and that's where my mind goes next: actually getting my local dev environment operational. Can I run this code locally? Because to me that's critical, you can't do development if you can't run it locally. Some people we've worked with maybe had other methodologies, like a shared online environment or something like that. But for us that's sort of a deal breaker, we have to run it locally. And while we try to get it running, going back to the docs, that's where we put those steps. Because then the next person should be able to check out the repo, look at the README and get it running. I'd like to say in less than an hour, if not less than 30 minutes, right? That's the goal, I think.
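As a sketch, a README's setup section for a typical Laravel project might look something like this. The exact steps depend entirely on the project; these are common Laravel defaults, not steps from any app discussed in the episode:

```markdown
## Local setup

1. Clone the repo and install dependencies:
   `composer install && npm install`
2. Copy the example environment file and generate an app key:
   `cp .env.example .env && php artisan key:generate`
3. Create the database configured in `.env`, then migrate and seed:
   `php artisan migrate --seed`
4. Start the app:
   `php artisan serve`
```

The point is less about the specific commands and more that the list is complete enough for the next person to get running in under 30 minutes without asking anyone.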

Aaron Saray (06:38):
Yeah. And 95% of the time it'll work.

Joel Clermont (06:42):
Yeah, sure.

Aaron Saray (06:43):
Versus zero percent, or whatever. You can't promise it's going to be perfect, but then it's down to, in half an hour you've got it running and there's maybe zero problems or one problem. Not like, "How do I get this to work again?"

Joel Clermont (06:56):
Right, exactly.

Aaron Saray (06:58):
Then the next challenge we have is unit tests, integration tests, feature tests, all those different things. Depending on the project and how it's written, maybe there is a test suite available. A lot of the projects that I've taken over, and we've taken over, that were very legacy didn't have anything whatsoever. Then we go and use specific tools to write more end-to-end tests. You have it running locally, and maybe you don't necessarily do anything but reseed the database to a specific state when the tests start. Because ideally unit tests should always be predictable and always have fixtures or whatnot.
But when you get a legacy project, a lot of times what we'll do is have some of these end-to-end tests on the outside of the application, whatever language it happens to be in, and exercise those in the browser on our local version to make sure we understand how things go. The reason that's important is two things. One, it again continues to help us understand how the application works; you have to understand how the application works in order to write the tests. I mean, you have to know where to click and what you should expect, if you want to write tests that click and expect things, right?
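A minimal version of that outside-the-app idea can be sketched with nothing but the Python standard library: hit a few key pages on the locally running app and check they respond. The base URL and the paths here are hypothetical examples, not from the episode; a real browser-driving tool would go further by clicking and asserting:

```python
# Sketch of an external smoke test for a locally running legacy app.
# BASE_URL and SMOKE_PATHS are assumptions for illustration.
from urllib.request import urlopen
from urllib.error import URLError

BASE_URL = "http://localhost:8000"

# Start with the pages that matter most: login and the revenue generators.
SMOKE_PATHS = ["/login", "/", "/orders"]

def smoke_urls(base=BASE_URL, paths=SMOKE_PATHS):
    """Build the full URLs to check, so the list is easy to review."""
    return [base.rstrip("/") + p for p in paths]

def check(url, timeout=2):
    """Return True if the page responds with HTTP 200, False otherwise."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, ValueError):
        return False

if __name__ == "__main__":
    for url in smoke_urls():
        print(url, "OK" if check(url) else "FAILED")
```

Even a check this crude catches the "entire app is broken" case after a refactor, which is the regression protection Aaron describes; browser-level tests that click and expect things build on the same idea.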

Joel Clermont (08:13):
Right.

Aaron Saray (08:15):
Then the second is, it's one of those protection sort of things, to make sure we don't introduce new regressions as we're refactoring some of the code.

Joel Clermont (08:22):
Yeah, that's a good point, because we're obviously not going to inherit a project and say step one is to do absolutely nothing until we've established 100% test coverage. That's just not practical. But, yeah, some basic kind of smoke tests: can you log in, can you do these important things? And then when we're getting to an area where we are going to make some modifications, writing some tests around that as you go.

Aaron Saray (08:47):
Yeah. I mean, a kind of rule I like is, like you said, log in, and then whatever the main features or the revenue generators are for that company, for that application, we want to test those. Then there are things we maybe don't necessarily need to write tests for right away. Things like reports, right?

Joel Clermont (09:05):
Yeah.

Aaron Saray (09:05):
They're great. I mean, they're necessary but write them when we get to them or whatever.

Joel Clermont (09:10):
Just the roller coaster of emotions I feel when you first check out that repo and you're like, "Okay, there's a test folder. Ooh, there's a feature and a unit folder." Then you expand it and there's one file in each called ExampleTest.

Aaron Saray (09:23):
Example, oh.

Joel Clermont (09:24):
You're like, "No we were so close. I thought we had something."

Aaron Saray (09:28):
Don't get me started on a rant about when you see a super popular package too. That's the first thing I do, I go look at the test folder, and there's no tests, and I'm like, "How are a million people installing this without worrying?"

Joel Clermont (09:41):
I've had that same thought. All right, so maybe just one last topic to touch on. I'll kind of put a hybrid topic here, which is version control. I'm assuming, and really we've had pretty good success with this, that the projects we inherit at least have version control. So that's kind of like-

Aaron Saray (10:00):
Most of them.

Joel Clermont (10:00):
Most of them, yeah. I mean, if not, obviously that's step one. It's like, we have to have that. But a CI pipeline, or some sort of automation of a build or running tests or something, we like to have that and we would certainly set it up in a new project. But I thought maybe we could talk a little bit about how we approach that with a legacy project as well.

Aaron Saray (10:22):
Yeah. So when you say like a CI pipeline, what are you all including in there and why does that matter?

Joel Clermont (10:28):
Okay. So for me, the most important thing is some sort of reproducible, automatic thing that verifies the project is not completely broken. So the easy thing you put in there is a linter, that's trivial to set up. Then the next thing would be running tests. Obviously that has less value if there are no tests, but we're going to write some of those tests we talked about. But just getting that in place so that as we push code, as we open a pull request, things like that, at the very minimum it's running those sorts of basic things automatically.
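As one possible sketch, a minimal GitHub Actions workflow for a Laravel project covering exactly those two things, a linter and the test suite, might look like this. The file name, PHP version, and tool choices (Pint for linting, the standard `php artisan test` runner) are assumptions about a typical setup, not details from the episode:

```yaml
# .github/workflows/ci.yml -- minimal lint + test pipeline (sketch)
name: CI
on: [push, pull_request]

jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: shivammathur/setup-php@v2
        with:
          php-version: '8.3'
      - run: composer install --no-interaction --prefer-dist
      # Linter first: trivial to set up and fails fast on style issues
      - run: ./vendor/bin/pint --test
      # Then the test suite (even a handful of smoke tests adds value)
      - run: php artisan test
```

Starting this small means the pipeline exists from day one, and every test added later is automatically run on each push and pull request.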

Aaron Saray (11:08):
Right. Yeah, I would agree with that. To me, the way I've always struggled, and I think we've had conversations about this too, is the CI/CD, so the deployment part of that.

Joel Clermont (11:20):
Yes.

Aaron Saray (11:20):
Is it important to have the deployment automated at the same time? I think that there are reasons why and reasons why not. I would say the biggest reason is not so much automated on a deploy, but reproducible. So I want my deployment reproducible and hopefully automated.

Joel Clermont (11:39):
Well, that is usually one of the biggest question marks when you inherit a project: "Well, how do I get new code up there?" Because people have all sorts of bizarre ways of doing this. I'm thinking of one in particular where there was a Git repo on the production environment, but there were all sorts of hidden little things you had to do after you pulled in the latest code: reset permissions, or run this other file, or restart this service. For that one in particular, we haven't yet automated the deploy, but it's in the README, like the exact steps that I run every single time. Now that you have that and you get some comfort, you've been doing it for a few weeks or even a couple of months, then we can revisit, "Okay, is now the time to automate that?" Because we're really confident we've figured out all the weird edge cases about deploying.
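Documented manual deploy steps like that might look something like this in a README. This is purely illustrative; the commands, user names, and services are assumptions about a generic Laravel setup, not the actual project's steps:

```markdown
## Deploying (manual, for now)

Run these on the production server, in order:

1. `git pull origin main`
2. `composer install --no-dev`
3. `php artisan migrate --force`
4. `php artisan config:cache && php artisan route:cache`
5. Reset storage permissions: `chown -R www-data:www-data storage`
6. Restart the queue worker: `php artisan queue:restart`
```

Writing the hidden little steps down first makes the deploy reproducible; automating it later is then mostly a matter of wrapping this exact list in a script.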

Aaron Saray (12:31):
Yeah, so I think that makes sense. When we get these new projects, we want to make sure we understand how the project works, generate some documentation, or honestly make sure the documentation is up to date. Sometimes there's documentation and it's like, "Well, that was how it functioned two years ago," but that's not helpful either. Then move through understanding how it works, running it locally, running unit tests, and then some sort of automated CI tool and possibly automated deploy or whatever. That's kind of how you take a project over, at least how we take a project over. It's really exciting to get a new project and just want to run in and do some new changes or... Also, I guess, having that conversation with the client, because they might think that when you take it over, you should just be able to do the exact same thing their last team was doing.

Joel Clermont (13:24):
Yeah. Day one taking over the project, we're not going to fix that bug that's been in your backlog for three weeks, that's for sure. But just to your point, having that conversation... in fact sometimes we've established an initial kickoff phase of the project that's really just dedicated exclusively to onboarding the project. And we'll kind of timebox it, we'll set a scope, we'll set a flat fee. But that even further communicates to the client like, "All right. We're just getting our feet wet here. When this phase is done, then we're going to get into development." It doesn't mean we won't continue to onboard and improve the project, but at least that phase is over and now the active development phase has a discrete start to it.

Aaron Saray (14:13):
Are you more of a tea person or a coffee person, Joel?

Joel Clermont (14:18):
Aaron, I'm drinking coffee right now. I like tea but I would definitely... If I had to pick a camp, coffee person.

Aaron Saray (14:23):
You said it like I should just know that you're drinking coffee. Yeah, you have a mug but I don't know what's in there.

Joel Clermont (14:29):
I was going to say we're on a video call.

Aaron Saray (14:33):
No, I want to be a tea person and I keep collecting tea devices, like different ways to make tea. But maybe someone can let us know if there's an easier way to clean out tea devices too. Because I have this little bowl that you put tea leaves in-

Joel Clermont (14:51):
Yep, loose tea.

Aaron Saray (14:52):
... then when you make your... Yeah, you make your tea and then you have this compacted little ball of gunk, and to me it's already half ruined my experience. I'm like, "Oh, let's just have tea." Then I'm like, "Ah, I got to clean out this garbage puck."
Do people even use bookmarks anymore, or do they just type masteringlaravel.io right into the browser?

Joel Clermont (15:12):
I know the site is important to you, Aaron, because it's something we've been working on. But for others, please visit masteringlaravel.io to see what we've been building.

No Compromises, LLC