The benefits of testing, even under a deadline
Joel Clermont (00:00):
Welcome to No Compromises, a peek into the mind of two old web devs who have seen some things. This is Joel.
Aaron Saray (00:08):
And this is Aaron.
Joel Clermont (00:15):
When you're facing crunch time on a project, its deadline looming, a lot of times you start thinking about how can I make this go faster? I wouldn't be the only dev to think dropping some tests would be a way to go faster, but I know, Aaron, you're in full agreement with me. I know that. But actually I'm just saying that to set it up, because the thing I wanted to talk about is having recently been in that experience and still finding value in doing testing. It's not an absolute. If you really are super, super crunched, you might have to punt on some of the testing. But in my case, especially with some of the things I was working on, I think it actually made things go faster, at least to the extent that I felt more confident it was going to work close to the first time and not have a bunch of weird bugs once I started manually testing it. Let me pause and get your insightful reaction to that lead-in.
Aaron Saray (01:17):
Well, first of all, I'm not sure if we mentioned this in an earlier episode, but when we say skip testing, that could be a couple of different things. The only way that I really skip testing is I still write my test methods, and then I have a protected method I call todo(), which basically just marks the test as incomplete. So if we get to crunch time and you have to skip some tests, like in your example, you might still have the test written, but there's just a bunch of I's in your output. And the reason that's done is because I believe you understand how your code works best while you're working in it. So if you come back for testing later, maybe you're looking at it but you've forgotten the intricacies of it, so you don't write good tests. Or it's really easy to forget to come back to tests when there isn't something nagging at you, which is all those I's in your output.
Joel Clermont (02:14):
Yeah. We're not complete animals, you're right. We do that, I'm glad you clarified that. I'll even throw some comments in there too. Like, if it's the success path, I might be like, "Well, make sure you try it with this and with that." I'll even throw some comments in addition to the to-do.
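Here's a minimal sketch of the convention they're describing, assuming PHPUnit under Laravel. The todo() helper is Aaron's own naming; the base-class setup, the sample test, and the reminder comments are illustrative, not from a real project:

```php
<?php

// In a shared base TestCase (a sketch, not a complete Laravel TestCase setup):
abstract class TestCase extends \Illuminate\Foundation\Testing\TestCase
{
    protected function todo(): void
    {
        // PHPUnit reports incomplete tests with an "I" in its output,
        // so every stubbed-out test keeps nagging you on each run.
        $this->markTestIncomplete('TODO: write this test');
    }
}

// A test stubbed out during crunch time:
class RegistrationTest extends TestCase
{
    public function test_registration_rejects_duplicate_emails(): void
    {
        // Notes for your future self, written while the code is fresh:
        // try it with mixed-case emails, and with a soft-deleted user.
        $this->todo();
    }
}
```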
Aaron Saray (02:29):
Yeah. Then I'll get back to your main topic, but one of the things we'll do on future projects or future tasks is do the task, write the test for that task, and grab one or two incomplete tests from that stack of incomplete ones and do those too. As Joel and I work together, we kind of have this rule that if you're in a project that has incomplete tests, you should probably at least try to write one other test while you're there to knock those out. But to answer your question, or to focus more on what we're talking about again, I think the way you do testing matters for how all this stuff fits together. Whether you do pure test-driven development, where you write all your tests first and write code to satisfy them; whether you write all your code first, test it in your command line or your browser as a human, and then write tests after that; or whether you do some sort of mix of that.
Joel Clermont (03:25):
Yeah. I mean, I can share my approach, and I think it's colored a little bit by what it is you're actually writing. A recent project was really focused on building an API, so pointing and clicking around to test things isn't as convenient an option in that case. I found myself leaning much more heavily on a test-driven flow, but I would omit the word pure because it certainly wasn't that. There were certain things that get repetitive, where it's like, "Well, on this route you have to be authenticated, you have to have an activated user, you have to be doing stuff within the scope of your user account." There's all this boilerplate where I wouldn't write that test, watch it fail, and then go add the middleware. There's just a bunch of stuff I would do upfront, because I knew it was going to be the same on most of these routes.
But then, and this is the way my brain works, when I got past the boilerplate stuff I would work through all the failure cases, like validation, and there I really would write the test first. So I would be like, "Okay, submit an empty payload. These eight fields should come back with required." I'd watch the test fail, then I would go add those fields and add the word required. And it felt a little stupid for some of the very basic things like that. Like string, required, email, things where there's not a lot of nuance to them. But especially when it gets into required_if or regex patterns, or things like that, I really, really found it valuable to know that it was working.
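As a rough illustration of that flow in a Laravel feature test — the route name, model, and field list here are made up, not from Joel's project:

```php
<?php

namespace Tests\Feature;

use App\Models\User;
use Tests\TestCase;

class StoreContactValidationTest extends TestCase
{
    public function test_an_empty_payload_reports_every_required_field(): void
    {
        // The boilerplate Joel mentions: an authenticated user.
        $user = User::factory()->create();

        // Submit nothing, watch it fail, then add rules until it passes.
        $this->actingAs($user)
            ->postJson(route('contacts.store'), [])
            ->assertJsonValidationErrors([
                'first_name', 'last_name', 'email', 'phone',
            ]);
    }
}
```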
Aaron Saray (05:06):
So what you're basically saying is you'll set up some sort of core functionality, then you'll write your boilerplate, like authorization tests, then you'll write your failure tests, and then you code for the failure tests, putting in the proper... and then maybe you swap back and forth? So you kind of go back and forth between how you code and your tests.
Joel Clermont (05:26):
Yeah. So once I get all of the failure cases done, maybe handling exceptions, validation errors, things like that, then I'll do the happy path. And sometimes there's even more than one happy path, but that's just the way my brain works. That way when I get to the happy path, I know everything else is working the way I want, and I can move the request validation and the authorization logic to the back of my head and just focus on, "Okay, now what is this controller action supposed to do?"
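Continuing the hypothetical test class above, the happy path then only has to assert what the controller action itself does:

```php
public function test_a_valid_payload_creates_a_contact(): void
{
    $user = \App\Models\User::factory()->create();

    $this->actingAs($user)
        ->postJson(route('contacts.store'), [
            'first_name' => 'Ada',
            'last_name'  => 'Lovelace',
            'email'      => 'ada@example.com',
            'phone'      => '555-0100',
        ])
        ->assertCreated();

    // Validation and authorization are already covered by the earlier
    // tests, so this one can focus on the action's actual job.
    $this->assertDatabaseHas('contacts', ['email' => 'ada@example.com']);
}
```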
Aaron Saray (05:55):
Yeah. I like your approach, I wish I did it exactly like that. I tend to write more of the logic first, and then I'll do the authorization tests. So I'll write what I think the logic is, I'll write the authorization tests, I'll write the happy paths, and then I'll move on to the validation stuff after that. And that gives me some flexibility, because with a lot of the validation stuff we're reasonably certain it's going to work.
Joel Clermont (06:22):
Right.
Aaron Saray (06:22):
I mean, it's just a text rule. So that gives me something I can mark as a to-do if I get crunched or something. I want to make sure it at least successfully works. And one other thing is, I'll try not to go on too much of a rant, but my brain works by thinking through all the different processes kind of at once. Not a huge big picture, but I kind of think in a big block of what I'm solving, and I think through all those different things and keep them in my mind. Not everyone does that. And I'll tell you what, a lot fewer people do that than think they can. It's the same thing as people who think they can multitask. Of everyone who says they can multitask, less than 3% actually can. I would like to say I can; I can't. I just know that. I've learned that, and part of it is just admitting it to yourself. It's the same thing with programming: most programmers can't keep all this stuff in their head.
So the way that you're describing it makes perfect sense, that you would take little chunks and write tests and alternate back and forth. For me, the one thing I have as a skill, I guess, is that I can keep something in my brain and not lose it, even when talking to someone, because I work with patterns. But the point of all this is that it also depends on your programming workflow. And you really should architect your testing approach around your weakest spots. So if you can't keep something in your mind for a long time, if you've ever said, "Oh, I have attention deficit," whether you're diagnosed with it or not, if you've even joked about that, then you should be writing your tests as you go, or even before. If you can keep things in your mind, and you look at everyone else in the world like, "Why is everyone so distracted by a bird?" Well, maybe you can do it in bigger blocks or something.
Joel Clermont (08:07):
Yeah, that's a good point. I like that idea of structuring your approach based on your strengths and weaknesses. Maybe just to come back, because we set this up talking about the time impact and the productivity impact. Where I really found the payoff is, again, this project where you're building an API and some other team is building the frontend client for it, right? Well, often you give them the API and they're like, "Well, what about this?" And you realize, "Oh, yeah, actually you're right. This needs to change." And so when you get into those more-frequent-than-I'd-like changes, having those tests there meant I could move so much faster, because now I had all that regression coverage in place. Like, maybe we were tweaking how the validation worked; I could just go focus on that and then run all the tests for that route. And, oh, we're changing that validation and now this other thing threw an exception that I wasn't expecting. That's really where I saw it start to pay back that initial investment pretty quickly.
Aaron Saray (09:07):
I think you're right with that. And also, the more complicated a task gets from start to finish, the easier it actually is to set it up with a test than to execute it manually. So when you're talking about testing these endpoints, right away I'm like, "Well, why don't you just open up Paw or some-
Joel Clermont (09:23):
There was some of that.
Aaron Saray (09:23):
... tool like that?" Right. But you know, if you have to set up, "I need three users, two to do this, two to do that," the amount of work you'd have to do to set that up... you can just do that with factories and relationships and stuff inside your test.
Joel Clermont (09:34):
Oh, yes.
Aaron Saray (09:35):
So you talk about it saving you from yourself with the coding bugs and stuff. I look at the tests as also saving me the time of setting up all these scenarios as I need.
Joel Clermont (09:44):
Yeah, that's another great reason.
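For example, the kind of multi-user setup Aaron describes is only a few lines with factories; the Team and User models, the relationships, and the is_admin flag here are hypothetical:

```php
<?php

use App\Models\Team;
use App\Models\User;

// Instead of hand-creating accounts in Paw or a browser, let factories
// build the whole scenario inside the test.
$team = Team::factory()
    ->has(User::factory()->count(2), 'members')
    ->create();

// A third user with different permissions for the other side of the test.
$admin = User::factory()
    ->for($team)
    ->create(['is_admin' => true]);
```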
Aaron Saray (09:52):
So I'm a bit of a musical person. I learned to play guitar when I was a teenager and I was always kind of writing music. My brother is 11 years younger than me, and my sisters are even younger than that. I think I was about 16, and so I'm playing guitar, I'm writing heavy music. I'm into things like Korn and stuff like that. Just playing, you know. So I decided that I wanted to record some music, record a song, but at that point I can't sing. So I do the only thing that you possibly can, which is you bring your little five-year-old brother down to your bedroom and say, "We're going to record some music." Because of course he's your little brother and he loves you and he looks up to you. He's like, "Yeah, I want to record music." So we're playing this song, and I found it the other day and played it for his fiancée at the time. And she was like, "Oh my goodness." I'm like, "I should have kept it." I should have played it at their wedding instead.
Joel Clermont (10:56):
Oh, that'd be perfect.
Aaron Saray (10:57):
But so we're playing. It's like, "Hi, my name is Jordan. I'm five years old. Every time they try to hold me back, I break the mold. Yeah."
Joel Clermont (11:08):
Wow, that's pretty good for a five-year-old.
Aaron Saray (11:11):
Yeah. I mean, I fed him the words. But you've got that five-year-old, high-pitched little boy voice. Like, "Hi, I'm five years old."
Joel Clermont (11:21):
That could be a new genre of music.
Aaron Saray (11:25):
Oh, yeah, absolutely. Babymetal.
Joel Clermont (11:31):
That's right. Wait, there's something under (inaudible 00:11:32).
Aaron Saray (11:33):
Yeah.
Joel Clermont (11:37):
Aaron, you should go to masteringlaravel.io and learn some things.
Aaron Saray (11:42):
Don't test me.