Changing our mind about queues in testing

Joel Clermont (00:00):
Welcome to No Compromises, a peek into the mind of two old web devs who have seen some things. This is Joel.

Aaron Saray (00:08):
And this is Aaron.

Joel Clermont (00:15):
Over time, we've picked up some things that we find work better than other things and it changes. And one of those areas that we were talking about recently had to do with a particular environment setup for testing. You know, we used to... I'll get specific here and we can get into the conversation-

Aaron Saray (00:35):
Actually, I think I can interrupt for just a second. You said something that I found really interesting.

Joel Clermont (00:39):
Okay.

Aaron Saray (00:39):
It's that, well, we do the best we can, basically, but that changes. I think before we go on we should admit that, at this current time and at the time of all the podcasts that we record, whatever we say is our best interpretation, our best guess of how to move forward.

Joel Clermont (00:59):
Is that like a disclaimer for all of the-

Aaron Saray (01:02):
No. Well, no, because what I was thinking about is, now that we've been doing this for a while, what if there are some things that we have changed our minds on?

Joel Clermont (01:11):
Oh.

Aaron Saray (01:11):
Do we have to be quiet about that? Or can we record it on another podcast and let people know, "Hey, whoops, we learned something new"? So that's the point of this. It's saying, "At this time, this is our best understanding of stuff." And that's actually healthy, and that's what's normal for all programmers: we've got to make sure that we're always constantly learning and stuff like that. And I think it's very dangerous to fall into this sort of thinking that whatever you know now is the best way of doing it. It's the best way with all the information available and with the way that you've synthesized it at this time. Some people just get locked into stuff. I know I have.

Joel Clermont (01:49):
Oh, yeah.

Aaron Saray (01:50):
You get locked into a certain idea and then you just don't let go.

Joel Clermont (01:54):
Yeah, and I think sometimes we even preface it too, because some of these decisions are contextual to your project and your team, and different things too. But that's a good point, and I am personally always happy to share something new I learned because it's a constant thing. So I don't feel shy about that at all. All right, good aside. Getting back to the particular topic, though: testing. When we run our tests, you have a different environment available to you, and you can set it up different ways. We use phpunit.xml to set up environment variables and other settings that we want to change. One of those is the queue driver. So going back a few years, we would typically set that to sync in the testing environment, because we don't want things to get queued up into Redis or something while the tests are running. That's a little weird, so sync felt like a good alternative. And recently we changed our mind. We said, "You know what? I think null is actually a better driver." So I wanted to kind of talk through that decision, how we reached it, and what benefits we found on today's episode.
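As a rough sketch of the change being described here (the exact variable name depends on your Laravel version; modern versions read QUEUE_CONNECTION, older ones read QUEUE_DRIVER), the phpunit.xml override might look like:

```xml
<!-- phpunit.xml (excerpt) -->
<php>
    <env name="APP_ENV" value="testing"/>
    <!-- Previously "sync": queued jobs ran inline during the test. -->
    <!-- Now "null": queued jobs are silently discarded unless faked and asserted. -->
    <env name="QUEUE_CONNECTION" value="null"/>
</php>
```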

Aaron Saray (03:07):
Sure. The first thing I started thinking about when this idea came to me was, "What are listeners and jobs, and how do those compare to controllers and console commands?" And I started thinking about it and I really thought, "You know, a listener and a job, without getting too far into the details, are almost at the same level and the same sort of experience as a controller." It's something that I need to be able to test on its own. I can test my controller, I can test my job. It's like with models. For example, you might test a model method and how it's used in one of your controllers, but in order to do something faster, we've talked about integration tests, and we'll just run through all the rest of the stuff inside the integration test version of it. It's the same sort of thing for jobs: yeah, they may be part of one process, and so we want to test them as part of that process to make sure they're kicked off. But we don't necessarily need to know that they were successful, because they're a whole other command or a whole other set of logic that we're going to test on our own.

Joel Clermont (04:21):
That's good and I think that's a good way to set up how we reached the decision but I'm just going to repeat it back to you. Because I think I heard what you said but maybe just to clarify a little bit. It's not that we are not testing the logic inside the job. Let's just say a job can do one of three things or there's like three possible paths through it. It's just that where do you test that? Like, if a controller emits that job and let's say the controller itself has three different happy paths through it. Now you start multiplying those things so it feels a little much to test all the permutations of the job in the context of the controller. So you're saying, we would have a separate integration test for the job. Then maybe to take it to the next step, then what do you actually test in the controller related to the event/job?

Aaron Saray (05:16):
Well, since we have it in our controller, no matter if the queue is sync or null, we have to fake it out and then test that it was emitted. I guess the difference is white-box versus black-box testing, in a way. This is getting kind of deep, but the point is you might write tests differently based on whether you know how the code works and can trust what you can read. Now that's a really slippery slope, because then, well, if you can read it, why do you need to test it? And all that kind of stuff, right?

Joel Clermont (05:48):
Yeah.

Aaron Saray (05:48):
But in this case there's actually a little bit of a difference. When we do our feature-based testing, we're supposed to look at what the incoming data is and what the results are. You're not necessarily supposed to understand the interior of it; you're testing the start and the end. And then you're reasonably certain that if your code is correct, your start and end will always be the same thing. So that's the point of the test. Really, when you're running jobs and queue listeners in sync, you don't necessarily even test that they exist. I mean, you might fake them out or whatever, but really what you're going to test is that the result of what they did is done. So you have a benefit, like you don't necessarily need to know that there's a job in there, or if it's a black-box test, you wouldn't know that someone issued a job. You just know that after this is done, at some point all these things happen. Whereas if you know a little bit more about the code, you can say, "Well, I know that a job is kicked off. I don't necessarily know what happens in there, but that's fine because I don't need to know. All I need to know is that this job gets kicked off with the proper parameters."

Joel Clermont (06:57):
Ah.

Aaron Saray (06:57):
So then you can sync that. I'm sorry, you can put that on the null queue, and you can then assert that it was kicked off, that sort of thing. But you don't have to go further down that whole chain of understanding what the job's end result is.
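A hedged sketch of the pattern Aaron describes: fake the bus, then assert the job was dispatched with the proper parameters without ever running it. The job, route, and property names here (SendInvoice, /orders, productId) are hypothetical, not from the episode.

```php
<?php

use App\Jobs\SendInvoice;
use Illuminate\Support\Facades\Bus;
use Tests\TestCase;

class OrderControllerTest extends TestCase
{
    public function test_storing_an_order_dispatches_the_invoice_job(): void
    {
        // Intercept dispatched jobs so nothing actually runs or is queued.
        Bus::fake();

        $this->post('/orders', ['product_id' => 1])
            ->assertRedirect();

        // Assert only the dispatch and its parameters; the job's own
        // behavior is covered by the job's own integration test.
        Bus::assertDispatched(SendInvoice::class, function (SendInvoice $job) {
            return $job->productId === 1;
        });
    }
}
```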

Joel Clermont (07:15):
Right. I like to visualize things in my head, especially as it pertains to how Laravel works internally, so I'll share my mental model of how this works. When you call Event::fake(), all that's doing is intercepting any time an event is dispatched and tracking it internally for later test assertions. So if you Event::fake(), regardless of what the driver is, null, sync, Redis, whatever, it's never going to be put into that configured destination; the fake is going to prevent it. And the destination, what I'm calling the destination, queue=null, queue=sync, is, if you don't fake it, where the framework would deliver that event to be processed by some listener or some other thing happening.

Aaron Saray (08:09):
Yeah.

Joel Clermont (08:09):
So they're kind of two different things. And I think if you don't have that mental model, it might be a little confusing. Like, "Well, if I fake it, like what does it matter what the driver is?" Well, that's the point we're making. Is the driver is important for those scenarios where you're not going to fake it, but you don't want it to run because it's not the thing you're testing right now.
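The two scenarios being distinguished here can be sketched side by side. Everything named below (UserRegistered, /register, /dashboard) is hypothetical; the point is only the contrast between faking and relying on the driver.

```php
<?php

use App\Events\UserRegistered;
use Illuminate\Support\Facades\Event;
use Tests\TestCase;

class RegistrationTest extends TestCase
{
    // Scenario 1: the event IS the thing under test, so fake and assert.
    // The fake intercepts the dispatch no matter which driver is configured.
    public function test_registration_fires_the_event(): void
    {
        Event::fake();

        $this->post('/register', ['email' => 'a@example.com']);

        Event::assertDispatched(UserRegistered::class);
    }

    // Scenario 2: a different happy path where the event is irrelevant.
    // No fake needed; the null queue driver discards queued listeners,
    // so the test stays focused on what it's actually checking.
    public function test_registration_redirects_to_dashboard(): void
    {
        $this->post('/register', ['email' => 'b@example.com'])
            ->assertRedirect('/dashboard');
    }
}
```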

Aaron Saray (08:28):
Yeah. You know, the examples that we use in a lot of our projects only have maybe one event and listener wired up. Not one total in the project, but events to listeners are one-to-one, or maybe there's just a single job or something like that. This becomes more important when an event maybe has four or five listeners on it.

Joel Clermont (08:48):
Sure.

Aaron Saray (08:48):
And maybe you're going through a loop of things and you're issuing 27 different jobs on something, then this becomes much more important.

Joel Clermont (08:55):
Yeah, that's a good distinction. I've heard some developers just don't like events in general because it is sort of this... it gets complicated fast. We try to strike that middle ground, not to get off topic here, but to not just fire events for the sake of decoupling logic. Like, that's not the mechanism, the events are used for a very specific thing. Either we're tapping into events Laravel itself is already emitting, so we can customize some behavior or extend some functionality. Or, it's just something when a user is hitting a controller, we wouldn't want them to have to wait for the results of that job to run. Like, sending an email is maybe one example. One other thing, and I know this wasn't the driving force. But looking back at our discussion on it, it also can affect performance of your tests, right?

Aaron Saray (09:46):
Yeah.

Joel Clermont (09:46):
In the scenario of sync, maybe nothing bad would happen if those jobs run. Like, it's not going to affect the reliability of our test; it's not going to do anything bad like that. But even if it takes, you know, half a second or something for that job to run, why add that to your test? I don't think anyone has ever said, "Oh, my tests run too fast." It's another reason why, when we look back on this decision, we think, "You know what? That made a lot of sense. Set it to null." It doesn't prevent us from faking and asserting, but it does free us up, when we're exploring multiple happy paths through a controller, to not always have to worry about the events if they're not relevant for every happy path.

Aaron Saray (10:25):
And that speed definitely is affected... Let's just say there's an observer on a model, and you're creating a model, and the observer is set up in your application so that it creates a hundred child records. And that's what you want to have happen. And normally, maybe in your application it happens on a queue, so the user never sees that it does a hundred more queries. But it does. So now let's just say you're running through all the different ways that this can be created. Validating stuff, you know, all these different things. In the background, it's creating these hundred models each time, which you don't really need, because you tested it once and you know what's going to happen whenever it's created.
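A hypothetical illustration of this observer example; none of these class names come from a real project. Creating one model queues a job that fans out into many child records, and the null queue driver is what keeps that fan-out from running on every validation-path test.

```php
<?php

use App\Jobs\CreateChildRecords;
use App\Models\Order;

class OrderObserver
{
    public function created(Order $order): void
    {
        // In production this runs on the queue, so the user never waits
        // for the hundred extra inserts. In tests, with the null queue
        // driver, this dispatch is simply discarded, so every test that
        // creates an Order skips those hundred queries.
        CreateChildRecords::dispatch($order);
    }
}
```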

Joel Clermont (11:08):
Seems like some of our stories tie back to the grocery store and I had a recent experience I want to share specifically about the deli counter. You know, there's a whole etiquette that goes with... You know, there's multiple people waiting, what order do they get served in, things like that. But this isn't about that.

Aaron Saray (11:26):
Well, normally they have a number, right? You just take a number.

Joel Clermont (11:30):
The one I go to doesn't. But yes, I've seen that. It's more of like an old timey thing, isn't it?

Aaron Saray (11:35):
It's more when there's a bunch of people.

Joel Clermont (11:37):
Oh, yeah. Well, this is like a deli inside of a grocery store and they never have numbers and they always have multiple people waiting.

Aaron Saray (11:43):
Okay.

Joel Clermont (11:43):
Something for the suggestion box next time I'm there. I like it. But I was behind this older couple and they were ordering. And I just have to share with you what their order was and I want you to tell me what you think it means. The gentleman said, "I would like one slice of ham enough for two people." So while you're collecting yourself, I'll tell you what happened to me. I'm standing there with my wife and we both looked at each other, like, "Did we hear that correctly?" And he in fact repeated it because the person behind the counter had a little look of puzzlement on their face as well. So what do you think he wanted, Aaron?

Aaron Saray (12:31):
I'm going to say, I think he wants a really thick piece that they can cut in half and put half on each sandwich.

Joel Clermont (12:40):
Yeah, I don't know if this was destined for a sandwich or what was happening here. But, yeah, he ended up I think with maybe like a quarter inch thick slice of ham. But you know, just like in the context of everybody ordering their food and especially you are ordering deli meats, it's generally sliced or shaved and you say, "Look, I'll have a pound," or, "half a pound." But the specificity of what he said, "One slice of ham," right away, like? You just want a taste of it. You know, some people do that. Like, "Oh, can I try a small slice of the roast beef?" But just the way he phrased it and the certainty with which he expressed it, I just like... But that's what he got. He in fact got one slice of ham in a plastic bag and there were two of them so I assume they split it down the middle.

Aaron Saray (13:29):
Wow.
During our podcast, we got a notification that we sold a book. Would you also like to give us money?

Joel Clermont (13:38):
We love getting emails like that. Head over to masteringlaravel.io and you'll see our paid and free books. Take a look.

No Compromises, LLC