Episode 147
· 10:27
Welcome to No Compromises. A peek into the mind of two old web devs who have seen some things. This is Joel.
And this is Aaron. So, it's finally that time, Joel. It's time to do the AI episode. Are you ready?
You say it like there's only ever going to be one, because that seems unlikely.
Well, we haven't covered it yet. So, I think the best thing to do is to kind of just jump in right away and say, "Let's analyze different discrepancies between the CLAUDE.md file and the AGENTS.md spec," right?
What?
Or, you don't think I should get that detailed about it?
No, I don't.
No. That makes sense, because I wanted us to kind of bring up the point of why we haven't talked about it so far.
I think it's time. I think it is.
The way my brain works, and it's not how everyone's brain works, is that I want to learn something a little bit more in depth. I want to know details, I want to know how I'd use it, the whys, what the risks are, what's involved with it, as I kind of go and implement things.
And you can see that based off of our previous podcast topics where I talk about, "Hey, are there tests in your package?" And, "Do we even need a package?" And so, whenever I reach for a technology, whether it's code or overall technology, I want to know more about it first before I choose to pick it and implement it as part of my stack.
And even more so, before you talk about it as an authority or like you're sharing an opinion. I appreciate what you're saying, and that is a pattern I've observed in you, and I appreciate it.
Well, and that's the thing. There's people at every different level of the cycle, but one of the things that we do is we kind of bring forward this authority, and in order to do that, you have to spend some time with it. So up until now, I would say I had a lot of background as a PHP and Laravel programmer.
I didn't have a lot of background as an AI implementer in my tool set, whether that's using AI as a service, where I'm building an application that interacts with AI, or using AI as a tool chain to do my development.
So, because I haven't had as much experience with that, and it's been moving pretty fast, that's kind of why I haven't really dug in and shared as much detail as you might expect.
Yeah, fair enough. Not all of us have nine years of ChatGPT experience, even though it's only been out for like three.
Right.
And I think probably another thing playing into this, too, Aaron, is just how fast it's moving. Like, what was the old thing? JavaScript frameworks. Like, there's a new one every week.
Right.
I think AI is actually developing even faster than that. Where you feel like you figure something out and there's a new model, or there's a new technique. Or it's like, "You're using the wrong tool. That's the old thing. You should be using this now."
Well, right. You're right. You're exactly hooking back into that first point. There's so much changing, and it's going so fast, so how do we build that deep knowledge?
And so, when I look at some of these things, again, it's like, how do I make my window of knowledge even larger so that I can distill this back into something that's useful and authoritative? So, I want to challenge you with something different.
Okay.
Because we're looking at AI and we're looking at all these different changes. And we say, "Well, there's a lot of changes, but I have a pretty reasonable idea of how this is working," you know? But I'll challenge you with one thing. When was Gandhi born?
The Indian statesman or?
Yeah.
Okay.
When was he born?
Like, the late 19th century? I don't know.
Which is around what time period?
Okay, I'm going to guess a year. 1884.
Pretty close.
Okay.
If you ask other people, because it seems like someone we should all know, they might say he was born in the 1920s or 1930s. There are other people that might think it's even more recent because there were movies about him, right?
Okay.
There's other people that might think India is a large, rich continent with a lot of history that we don't know about. Maybe he was born even further back, you know?
Okay.
Who knows how this kind of cycles into what you know about history? But the point is, when you ask someone the specific detail, "When was he born?" it's really easy to get it wrong.
Sure.
But if I back up and say like, "What's the time period?" You're actually really close. And so, that's the same sort of thing that I take a point of when we're looking at AI. Is like, yeah, there's a bunch of ways that I could answer your question right now.
How do I use x, y, z tool to do a b, c thing? But I want to back up a little bit and say, like, what are you actually asking though? Are you asking how do I just use an AI tool to do a task? Are you asking what is the task I even want to do?
And so, I think so many people are running towards, like, how do I make AI just do this quick thing for me? It's like, you have to understand what you're asking it to do first to even know if it was successful. And that requires backing up a little bit and taking a little bit larger picture.
Yeah. I mean, with any task you have to know what the success criteria are, or else how do you know if you're doing it right? So, this one is especially important, I think, to approach that way. Because there's so much room for, I don't even want to say error, but room for different ways of approaching it.
You know, I've kind of changed my opinion, too. I'm not anti-AI. You know, if you would've met me a couple years ago, you might've even had questions. "Are you anti-Livewire? Are you anti-Laravel?" Like, all these different things.
You can just say, "Am I an old man?" Like, yelling at the cloud.
Right. Oh, I'm anti-hype.
Yeah.
Am I anti-hype? And so, I'm kind of going to wrap this up here a little bit with a thing that happened earlier last week or the week before. Someone in our community said, "Yeah, but you have a lot of skills and a lot of experience, you should be publishing your AI guidance."
Oh, yeah.
"You should be telling people that don't have as much experience what to do." I thought that was an interesting question or an interesting suggestion, but then I also realized that it's just not right for me yet.
It's getting closer, it's not right for me yet. That doesn't mean it's wrong for someone else, and that's where I have to apologize to Joel.
Okay.
Apologize publicly to Joel on our podcast.
All right, let's make sure this is recording. Okay, we got it. Let's go.
Okay. It's because as we're learning and as we're teaching and as we're evolving, we need people at every different stage of the learning path. I mean, we need those first movers who are taking something and saying, "I get it, but here's another way to look at it. And here's another way." Who are just throwing out all the information.
We need those middle movers who are saying, "I know something, and I picked one of the pieces of information and I've customized it so that it's really efficient for me, and this is how you can do that." And then I think we need those final, larger slow movers who say, "I'm looking at all these different ways that you customize it, and here's some derivatives. And here's some ways that we take all of that and reformulate it maybe as the next version."
That's how we can teach the next iteration so that it isn't so chaotic anymore, and then we can kind of reformulate that. So, I really actually appreciated that challenge.
And I want to apologize to you, Joel, that maybe I have one way of formulating information, and you have a different way. Maybe you're that middle ground guy, and maybe I'm the slower mover, so I shouldn't have been holding you back from publishing some of this stuff. So, I'm sorry.
Okay. I didn't feel held back, but I appreciate it all the same.
Right.
And, I think the takeaway of what I'm hearing you say, Aaron, is this is certainly a topic that doesn't feel like it's going to go away. And I think there really is some significant benefit to paying attention to it.
So, wherever you are in that hype curve or interest curve, it's worth paying attention to. And I think for us, through this podcast, and other forums, we will start sharing more what we've learned and what we found to be beneficial as we go.
Went over to my friend's house the other day, and she made chicken soup, and it was great. It was homemade, I loved it. I was there for the whole entire process, moral support, you know.
Okay.
Or, bothering her in the kitchen while she was cooking and I was throwing stuff at... whatever. Who knows what I was doing? Doesn't matter. So, we had this chicken, and there was like a small piece of bone, or not even a piece, I don't even know what it was.
It was like a hard piece of something, but it was like flat. It was like a cross between plastic and bone, and it was charred. So, it was just like a weird... and it just got stuck in the back of my tooth, and I pulled it out, and I was like, "Eh."
I didn't really care, and it didn't really look like any... whatever. But I set it down on my plate and just finished the food and had seconds, as you should do. Everything was fine.
And then of course, because I've already told you how I act with my friend, I grabbed that little piece off my plate, took my stuff to the sink, and she was standing there, and I just put it in her hand and said, "Here is a gift. Surprise."
Here you go.
And she's like, "Oh, what is that?" I said, "Well, that was in the soup."
Mm-hmm.
Turns out, I don't know if I made a mistake, but I think I might've. I need your help here.
I think so, maybe.
Because for the rest of the night, all she could do was worry about what else might have been in the soup. I was like, "It's not a big deal. I'm just messing with you. I don't care, it was a great soup. Thank you for the food."
And it came up about six or seven times. So my question is, when you find something in the soup, what do you do? Was it the way I presented it, or was it that I presented it at all? Because, mind you, there was nothing else I found in the soup.
You know what they say, like when you want to give some constructive feedback, you have to like sandwich it with good things first. So, maybe you should have been like, "Man, this soup was awesome. Here you go. I really enjoyed that soup."
It was kind of nice to have that little challenge inside of our community. And there's a lot of people also sharing about AI there. That actually brings up a really good point, there's something else we need to share.
Aaron, yes, I think it's time I will start sharing. By the time this comes out, there will be at least one AI tip in the Mastering Laravel newsletter. So, if you would like to see that, why don't you head over to masteringlaravel.io and make sure you're on the mailing list.
Listen to No Compromises using one of many popular podcasting apps or directories.