Episode Transcript
[00:00:00] Speaker A: Welcome to Within WordPress, the podcast that features all those people inside the WordPress community. Some are seen, some are not. Well, less seen. With us today is Christos, and possibly a little less seen, but I most certainly have seen you quite a few times. Welcome, please introduce yourself.
[00:00:27] Speaker B: Hi, Remkus, and hi everyone else. My name is Christos. I work at Presidium as a performance engineer and I've been in the WordPress scene seriously, I guess, for just about three years. Aside from, you know, small projects that I've had throughout my 12-year career in IT.
[00:00:43] Speaker A: What is, what does serious mean? Because that means there's an unserious phase before that.
[00:00:49] Speaker B: Yeah. So WordPress was always my go to thing for when I wanted to make a site for a friend or maybe, you know, those kind of jobs you pick up when you're still trying to find where you fit in in the IT realm and you're like, yeah, I can do that. And I can do that too. You become a yes man of sorts.
[00:01:04] Speaker A: Yeah.
[00:01:05] Speaker B: You don't end up doing a project very well at the end of the day, you just kind of do it for someone. WordPress was my go to because it was kind of a default thing from school.
[00:01:17] Speaker A: That's an interesting approach. But yeah, from school, that I don't hear that much.
[00:01:22] Speaker B: Yeah, I am. I was born and raised in the states. Burbank, Illinois, outside of Chicago.
[00:01:27] Speaker A: Yep.
[00:01:28] Speaker B: So we had like electronics one, electronics two, and then a computer oriented class and then a network oriented class. They were elective classes in high school. And the only thing they taught us was WordPress. I can't recall the version, but it's not that we really learned WordPress, not like the things that the community or the group.
Yeah. So I just knew what the word WordPress was in my head and I would say, okay, cool, let's make a site. What do I use? WordPress.
[00:01:59] Speaker A: That's pretty much how I think everybody that I know started to learn WordPress at one point or another. Yeah, I mean, pretty similar for me. I needed a website, had one, created it on Joomla. And then figured, yeah, this is not my platform, how do I migrate it away to something that's a little less convoluted? And yeah, just played around with it.
[00:02:20] Speaker B: That's pretty common. I hear a lot about like people going from Joomla to WordPress.
[00:02:25] Speaker A: Oh, that was, I would say in the early 2000s, the thing to do, because you couldn't explain it to clients. They'd be like, yeah, I just heard your explanation, but I still don't get it. Where do I do this? Where do I add that? It just became too complex, I guess.
But how did you make the switch from sort of playing with it to becoming a little bit more serious about it?
[00:02:54] Speaker B: No, it has like 100% to do with starting at Presidium, which does managed WordPress hosting. So I ended up going full WordPress. If I had gotten a different job, I probably wouldn't have been so engaged with WordPress, you know, as I have been for the past three years.
[00:03:14] Speaker A: So explain a little bit more the kind of stuff that you do because I know you have a high focus on performance and I think we are very much similar in that. But explain a bit more. What is the thing you do?
[00:03:31] Speaker B: Well, initially we started with load testing specifically for WordPress websites. That would eventually turn into capacity planning, which is kind of one and the same; we're using load testing to do capacity planning. So saying, I have an eshop and I'm expecting X amount of concurrent users on my site, using technologies like JMeter in the beginning. I'm transitioning, or at least trying to transition, to K6, because JMeter is old. It's good, it's there, it's steady, but it kind of holds you back in what you could do.
[00:04:07] Speaker A: I know what it is, but most people probably don't know what K6 is. Can you explain a little bit what that is?
[00:04:14] Speaker B: Yeah, it's programming oriented. You write your routine. So test routines are just scripts that try to simulate real users actually visiting the website, you know, like moving around, clicking things, pushing buttons. This is all protocol-based testing, so it doesn't really have to do with the user's actual experience of a page loading. Core Web Vitals covers things like what it looks like while it's loading. Performance testing is a bit different, and I'm not going to say skewed, but more on the back end, or protocol oriented.
[00:04:47] Speaker A: Stuff. I'll say it's skewed, because there's no RUM (real user monitoring) in there.
[00:04:52] Speaker B: Okay, so this is controversial already; we just started. I will agree that the metrics for the front-end stuff are based off theories that people thought have to do with how performance is observed by an end user.
But the protocol based stuff is definitely technical. Like it's just requests and timings. Requests and timings.
[00:05:17] Speaker A: I think also Kevin Ohashi uses it for Review Signal, like the benchmark testing for all these different types of WordPress hosting that he does yearly. I think he also uses K6, I.
[00:05:29] Speaker B: Don't remember. Probably K6 Cloud. So K6 Cloud is a different solution. It's all hosted for you, you don't have to sit and write the scripts; there's a UI and everything.
It is paid, whereas K6 itself is open source. K6 Cloud is the paid version.
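For anyone curious what such a protocol-level test routine looks like, here is a minimal sketch of a K6 script with a stepped ramp-up. The URL, stage durations, and check below are placeholders, not anything from the episode, and the script runs under the k6 CLI (`k6 run script.js`), not under Node.

```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';

// Stepped ramp-up: add virtual users in stages rather than all at once.
export const options = {
  stages: [
    { duration: '2m', target: 5 },   // ramp to 5 virtual users
    { duration: '2m', target: 10 },  // step up and hold
    { duration: '1m', target: 0 },   // ramp back down
  ],
};

export default function () {
  // example.com/pricing is a placeholder; point this at a page real users hit
  const res = http.get('https://example.com/pricing');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1); // think time between iterations
}
```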
[00:05:48] Speaker A: I don't know which version he uses, but yeah, K6 is a good tool. Don't get me wrong, I wasn't trying to be argumentative there. But I think the great thing about testing is understanding that what you're testing is usually a lab test and not a real user experience test, which is very difficult to pinpoint. Where does one thing start and where does the next one begin?
[00:06:18] Speaker B: You have to look into everything, the distribution of the metrics, so you can kind of figure out what you're doing. But on the scope of capacity planning, that's easy. It's not easy, it's simple. There's a misconception about the words easy and simple: simple things aren't necessarily easy. Right? Like writing assembly: super simple, not easy.
With capacity planning you just have to... you literally make the requests. So anything protocol based is good. Using something like JMeter or K6 will suffice, maybe to identify a bottleneck, or to test your hosting provider to see if they can handle the traffic, or your caching solution, or whatever you have going.
[00:07:00] Speaker A: Yeah. Can you explain a bit more about the protocols? Like, what would be typical protocols you would employ in your testing? Protocols like HTTP/2? I mean, I'm sure there are more.
[00:07:16] Speaker B: Cool. This is the fun part. It all starts off with communication; I mean, it should be based around communication. So it's a communication with a client to see what's going on on their website. Like you're telling me you have an eshop and you expect a huge flood of users on Black Friday, which was just a couple of weeks ago.
What do your users do, would be my first question. Could we kind of map that out? Do you have some heat maps? Do you have something ready? Do they even go to your homepage? Because most people optimize homepages, but most visitors don't get directed to a homepage. There's usually a pricing page, or they go to a login page.
I want to get that information from the client. If we can have good communication, if I can really gauge what your users, real users, do on the website, then the test routine, that's the kind of script that is played over and over by the testing engine, let's say JMeter.
The results will be more conclusive. They're never going to be 100%. But the first step is definitely identifying what your real users do on your website.
Yeah, if we can get that, go ahead.
[00:08:28] Speaker A: And I was gonna say, I think most people just highly underestimate this particular point.
The, what are users actually doing? I love that you highlight that the vast majority of, certainly, campaigns are not going to hit your front page. They're gonna hit your product page, they're gonna hit your pricing page, hell, even the login page and all that sort of stuff.
And all of those are, well, if they're coming through a tracking URL, by pure definition it's already uncached. So you're testing bare metal.
But for those that just manage to solve that part: as soon as you then see an article or a product that you like and you add it to cart, the rest of your session is uncached. How do you solve that? Because that's immediately a performance hit. Those types of things, and I love that you've mentioned them already, those types of things are generally the, where do I start fixing performance, on the caching layer specifically. But there's so much that just comes from that principle of understanding how traffic actually moves on your site, in what capacity, in what frequency, all that sort of stuff. Love it.
[00:09:50] Speaker B: Well, okay, I could see where you could overlook it. But to fix something you have to know what's wrong, right?
So the first part is definitely exploratory.
What would the next step be? Then it gets easy, once you get through the whole human communication part, because that's the most difficult part in IT, actually, more so than talking to computers. That's why ChatGPT and all these models are going to have a hard time: understanding humans. Once that's out of the way, it's actually generating a script, putting some pseudo-random behavior in the script, trying to ramp up concurrent users, and then getting back all the results, like the response times. They're all response times at the end of the day: it could be network latency, it could be the whole request, it could be time to first byte, which is the very first response you get back after making a request, clicking on something or typing something in and pushing enter.
That part's the easy part. So let's say we figured out what's going on, we made the test routine, and then we run it. But it's not that simple during the actual run, because lots of things happen in between. I've never run a test plan just once.
It's not a thing.
The test plans I generally work with are around maybe 10, 20, 30 minutes long. Anything longer would be an endurance kind of test.
So in the realm of testing there are lots of words that people throw around, like endurance testing, spike testing, stress testing. It's all kind of the same; the only thing that really changes is how you apply the simulated users. Do they come in a burst? Do they come in multiple bursts? Is it just a ramp-up? And that has to do with what you want to figure out. For capacity planning I prefer doing soak testing, concurrency soak testing, which means we just ramp up at a steady rate, maybe with steps of, let's throw five users at it, or maybe with a direct line going up. I do prefer the steps, though, because then you can see how the systems respond. Say you add five users and then you wait two minutes. During those two minutes, in the first couple of seconds after adding maybe five or maybe ten users, you'll see a spike in the response time. Then it's up to your, I don't know, hosting solution, hosting provider, caching engine, all that back-end-heavy stuff to do something with that traffic. The initial spike is fine, but if it stays up there, if it keeps going up after adding some users, then you've got a problem. If you have an initial spike and then it balances out, you're all good.
So we do that over and over and over by adding users and gauging the results. At some point you're going to see an ever-rising response time.
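The "spike, then settle or keep climbing" heuristic described above can be sketched in a few lines. This is an illustrative toy, not a real analysis tool: the 20% tolerance and the three-sample tail window are arbitrary assumptions.

```python
def settles_after_step(samples, tolerance=1.2):
    """Given response times (ms) sampled during one soak-test step,
    return True if the initial spike settles back down, False if
    latency at the end of the step stays elevated.

    Assumes at least four samples: a pre-spike baseline first,
    then readings taken after the new users were added.
    """
    baseline = samples[0]
    tail = sum(samples[-3:]) / 3  # average of the last few samples
    return tail <= baseline * tolerance

# A healthy step: spike right after users are added, then it balances out.
print(settles_after_step([100, 240, 180, 120, 105, 102]))  # True

# An unhealthy step: latency keeps climbing; you've found your max.
print(settles_after_step([100, 240, 300, 380, 450, 520]))  # False
```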
[00:12:50] Speaker A: That's usually that one is inevitable. @ some point it's going to be there.
Hopefully.
[00:12:59] Speaker B: Yeah, well, it depends on every situation too. But when you see that, that's when you know: okay, cool, that's your max.
Doing that and repeating that over and over, you can come to some sort of conclusion.
It's never 100%, but you can come to some sort of conclusion that, hey buddy, we just did load testing or stress testing, call it whatever you want, and that's your max with your current hosting solution, with your current website, with your current code base.
[00:13:32] Speaker A: And then where do you start from there.
[00:13:36] Speaker B: I like.
[00:13:37] Speaker A: Yeah, that is the fun part. You mentioned the little spike going up, and then it should stabilize.
I have a fun, a fun example of that, where in a sort of stress test I came into a situation where the exact same thing happened. Kept adding users, and then, at a certain point where you would expect stuff to stabilize because the load wasn't that high yet, we saw a huge increase in resource usage happening on the server. And like, what's going on here? Because there's nothing, you know, aggressive happening. Turns out that the way that particular site was configured, there was a caching plugin that, upon request, generated everything it thought it needed to cache and turned it into a static HTML version. And just the sheer amount of pages on that site meant that that caching plugin did that over and over and over, compounding on compounding. Just wild.
[00:14:52] Speaker B: It was crazy.
[00:14:54] Speaker A: Basically DDoSing itself, to the point of: how did this happen? Okay, let's fix this. But you mentioning the little spike, and that it should calm down, reminded me of that one. That was a fun one to discover. Okay, it's not that complicated to fix this one.
[00:15:16] Speaker B: Yeah. What did you do? You added preloading stuff?
[00:15:20] Speaker A: First I removed the caching solution altogether, and then just wanted to see what bare metal did. And from there we started implementing whatever we needed to cache, because the way that cache worked was just not making any sense in the grand scheme of the whole server, and ultimately we went for an entirely different internal caching approach. So lots of transients were added to cache certain queries and stuff; that was the heavy part in terms of what the database needed to produce. And the rest was just using a smarter caching engine.
[00:15:56] Speaker B: Caching is complicated.
[00:15:58] Speaker A: Oh, there's so many layers. There's so many layers. Yeah.
[00:16:02] Speaker B: And the caching is like. I don't remember where I read this, it's not my own words, but caching is an amazing way to cover up a bottleneck and, you know, not be able to identify it.
[00:16:14] Speaker A: One of the things I frequently say on social media, just to remind people that caching doesn't solve your performance issue, it hides it. And if you're doing it correctly, then you may get away with it. Like, you'll never see the downside of it, but the vast majority of sites that need performance and that just solely rely on, you know, whether you're using wp, Rocket, Nitropack, Varnish, Cloudflare, apo, any of those solutions, they're great in what they do and all have their pros and cons and all that whatnot. But the, the moment you are assuming that every single hundred hits are all going to see your site cached, it's. It's an illusion. It doesn't happen and you can't build a strategy on that. So caching does not solve performance, period.
[00:17:12] Speaker B: It can't be at 100%. It was really configurable, like system level caching. You could probably go up to 90 plus percent even with that example that you gave before. So that like what you said before for your campaigns, they have, you know, like a different link for a personalized link for every person.
[00:17:29] Speaker A: Yeah.
[00:17:30] Speaker B: Clicks on it. You can use Varnish, which is definitely a personal favorite to create. They have something called vcl. So it's like a programming language for the cache on Varnish and you can use it to like remember a certain pattern of query parameters.
[00:17:52] Speaker A: Yeah.
[00:17:54] Speaker B: Which. Well, this is like hardcore caching.
[00:17:56] Speaker A: Yeah, yeah. For the, for those listening and then just go like, what are these two rambling about? There's, there's. If you, if you click a link, let's say you're sending an email from, from mailchimp and you check that link as it appears in the browser address bar. You see there's a whole bunch of UTM tracking things or mailchimp specific, like a whole bunch of strings following the actual URL that you were using. And it is the stuff that is following the URL that is essentially deal breaker for cache. Yeah. So it starts with a question mark.
[00:18:35] Speaker B: Right.
[00:18:37] Speaker A: Anything else is essentially meaning, hey, your site is no longer cached. And what Christos and I are referring to is that there is a way to sort of cache that, but certainly not out of the box. And you need to, you know, do at whatever level, but you need to do certain things for certain patterns to be recognized as, oh, this isn't always on cache that we should actually cache this once the first one has hit.
I'm assuming, with Varnish (it's been a long time since I played with Varnish, it's not one of my favorites), I assume it still needs that warming up of the cache to happen, that one hit, right? So the first one who clicks it just has a bad experience, and pretty much anybody else then sees a fully cached version of it.
[00:19:21] Speaker B: That's one way it doesn't necessarily have to wait for the first hit. You can tell it to preload it.
[00:19:27] Speaker A: Yeah.
[00:19:28] Speaker B: Okay. Right. And then it kind of has to do with like which cache. So you mentioned Layers Varnish itself. You could split up into multiple layers Varnish. For anyone that doesn't know what it is, it's basically a web server and it sits on the server itself and it should be as close to the network edge as far as your systems go. Not like Cloudflare stuff, not like hosting full HTML or full site on the cloud's edge.
After that, inside your provider, if you have, I don't know, multiple servers or some sort of n-tier architecture, which means you have a modular approach with multiple machines doing different jobs, it would be closest to the network edge of your provider, and of your provider at a high level too.
Not necessarily Presidium, but some upstream provider of Presidium's.
[00:20:24] Speaker A: This is.
[00:20:26] Speaker B: Right.
It has to do with the actual server itself. But anyway, I kind of got lost there. Or was I?
Yeah, right.
You can host Varnish in memory, that's an option. You can host Varnish on disk. You could do a combination of both, so then you have your hot cache in memory, and memory is hyper fast; it's not going to get faster than memory, not at a systems level.
And then you can have your warm cache, not hot, on an SSD on that particular server that's closest to the network edge of your provider. That's a mouthful.
[00:21:03] Speaker A: Yeah.
Just to try to recap that: what Varnish essentially allows you to do is cache as local to your actual web server as possible, before it hits what we call the edge. And edge caching, as the most common example, is Cloudflare edge caching; I think possibly they even invented the term. Cloudflare would sit fully in front. Varnish would be in front of the actual web server itself, but in extremely close proximity to the actual server on which your WordPress installation is installed.
[00:21:46] Speaker B: Yeah. So if you did do the Cloudflare kind of edge caching, then. Yeah, it would go Cloudflare, Varnish and then Nginx or Apache, whatever you use as a web server.
[00:21:57] Speaker A: Yeah.
Do you see scenarios where it makes sense to have both Varnish and Cloudflare?
[00:22:07] Speaker B: Yeah, actually. Okay, so this is trivial because Cloudflare itself is like super configurable.
[00:22:13] Speaker A: Yeah, yeah, quite.
[00:22:15] Speaker B: Things do change very often with Cloudflare, but I'm going to say yes either way. So if you have your Cloudflare set up as a basic configuration, you didn't do anything other than just, you know, put your domain in Cloudflare and enable caching. You could rely on Varnish to do the heavy lifting before things get to your Web server. And then Cloudflare itself can update itself from what's served from Varnish, but it.
[00:22:45] Speaker A: It wouldn't actually serve the full static HTML, right? So that stays in Varnish.
[00:22:53] Speaker B: You could configure Carlos to do that. If I remember correctly, you could replace. Okay, so I guess you could replace Varnish with Cloudflare.
I'm a systems kind of guy, though. I don't know if I would do that personally, because I would want to log into the shell and see what the VCL does, maybe configure it differently. So flexibility... I don't want to say you lose on flexibility, because Cloudflare itself is very flexible.
[00:23:23] Speaker A: I think that the reason I'm asking you, because this is an interesting discussion, because at a certain point for certain scenarios, it becomes advantageous to think of Varnish as the solution.
I've seen situations where I go, makes total sense to use Varnish here, but I've also seen the other one, where it just totally makes sense to rely on the Cloudflare layer, for instance. Because it is in front, acting as close to the DNS level as it possibly can, but also, with I don't know how many data centers they have globally now, it's an easy way to reduce latency for requests that can be served as full HTML. Especially if.
[00:24:07] Speaker B: You have some sort of business. Go ahead.
[00:24:09] Speaker A: Yeah, I was going to say, I live in the Netherlands and obviously if, if I present my site from. Hosted from the Netherlands, there is going to be latency for anybody from Australia, New Zealand, Japan to look at my site. That's just physics.
And in those scenarios Cloudflare makes sense. And then, yeah, you lose something there in configurability, because obviously with Varnish you have better control.
[00:24:33] Speaker B: I'm sorry, because I was just going to touch on that. I was just going to say that if your user base is located in, for your example, the Netherlands, and you know that there are. If you have like a site that's some sort of public service, for example, and everyone that's going to be accessing your site is probably everyone anyway, Most people accessing your site are going to be from the Netherlands, then I wouldn't use Cloudflare. I wouldn't. Because it's an expense too. Right. At some point you start, you have a client, right? Your client's not going to want the most expensive option that's going to be available around the world when they don't need it to be.
[00:25:08] Speaker A: It makes no sense to invest in that direction if you get like what, 100 visitors a month from outside of your country.
I have clients like that, where there are just no visits from elsewhere, because the site is in Dutch, or even more specific, in Frisian, for instance. There are like 500,000 of us and we all live around where I live now. So we're going to get maybe a dozen hits on a monthly basis from outside this region.
Doesn't make sense.
[00:25:37] Speaker B: And at that point it's your job to bring up that like a solution. You have to be like, oh wait, you don't need Cloudflare, you could use this.
[00:25:44] Speaker A: Yep.
[00:25:44] Speaker B: Yeah, I don't know if everyone does that.
You know, most people have a stack, like a ready-made stack: this is what I work with. I work with Cloudflare, I work with X provider, or my own bare metal and my own Apache server.
Lots of self hosting going on with Cloudflare in front.
Okay, so what I'd say is that before Presidium I did lots of work on Upwork, and I would find clients for, maybe, WordPress-oriented optimizations, and the sites I would find were all a single-server setup. Maybe with a cPanel. Usually with a cPanel, which, I personally am that guy: I hate cPanel. Not just cPanel, I hate Plesk, I hate cPanel. I really don't think you need them.
Yeah, direct set up in front of that.
So it was always a single-server setup, Cloudflare, and that's it. You don't need that.
[00:26:39] Speaker A: Yeah, I think that's smart. That's a smart approach.
So getting back to.
Because we've derailed entirely from my original question to you. Did we? Well, yes and no. But: you do the test scenarios, you come up with a conclusion about what that specific configuration should be for that particular client in that particular scenario. What are your next steps from there? How do you proceed?
[00:27:12] Speaker B: Well, as far as capacity planning, we're done.
If you just want to migrate your site onto X provider, fine. If you want to go into optimization, we're going to pick that point where, in the little graph, response times keep going up. And aside from the graphs with the response times, the client-side data, the stuff that someone would probably see from their end, it would be great if you had access to the server-side stuff: resource usage, CPU usage, RAM, database stuff. If your provider has access, maybe, or maybe you directly if you're self-hosting, to very detailed database monitoring, like the Percona setup. Yeah.
Then you can get busy with the nitty-gritty stuff. Every situation is completely different. I would definitely look into caching first, because, like you said, it's where most problems lie, at some caching layer.
[00:28:12] Speaker A: Yeah. Do you include things like Redis?
[00:28:17] Speaker B: Yes.
Well, there's Memcached and there's Redis. Memcached is a little bit older of a solution. Anyway, I'm more Redis oriented, yes, when it comes to object caching.
So object caching. We mentioned layers before.
[00:28:33] Speaker A: Yeah. And the. Yeah, I'm mumbling what I was going to say. I did a podcast with Til Kruse, which I think is the third one I recorded where we were till and I talked about Object Cache and Object Cache Pro specifically and its connection to Redis and how it optimizes databases and whatnot. So anybody listening curious about very, very deep into Redis conversation? Check out that episode. And the reason I'm asking you is because Redis is for a very large crowd, a very mysterious layer that's doing something with the database.
How would you best describe what it does?
[00:29:24] Speaker B: I would try to explain what the layers like in caching for hosting a website are. Like we mentioned the edge side caching that's closer to the person trying to access the website. Cloudflare, maybe.
Then there's the caching engine that could be running on your provider, which might be Varnish; FastCGI caching integrated with Nginx works too. And then there are the requests that aren't cached, you know, the ones that we mentioned before; they might have a weird query string, some question mark and some strings after that. Those have to go to your web server. When something goes to your web server, that means Apache has to accept that request, Apache has to trigger PHP, PHP runs.
I don't have to explain what PHP is. Okay, everyone uses PHP, I guess, for most things. When PHP runs, PHP might need to talk to the database.
There is a bit of caching that could happen at the PHP layer too, like OPcache, PHP stuff. But I'm not going to get into that, because that's really weird and complicated.
Definitely per case too.
[00:30:34] Speaker A: Yeah.
[00:30:35] Speaker B: After that. So there's that part that we mentioned now with PHP talking to the database that's after Apache or nginx, whatever you use as a web server, there's a caching layer that could fit in there too, called the object cache.
So there's a bit of software that talks to another database of sorts, called Redis, or maybe Memcached. What is Redis? Redis is a super simple (remember, simple doesn't mean easy) database. It could be used as a time-series database, but that's not necessarily what all its records are.
Think of an Excel sheet with just two columns, right? So you have a key on one side, and on the other side you have a value. Key-value pairs. That's one of the secrets behind why Redis, and other databases that function that way, are extremely fast. Most people, especially people that are in IT and do this kind of work, realize that Redis is capable of handling millions of requests per second.
[00:31:48] Speaker A: Yeah, yeah, It's.
[00:31:50] Speaker B: It's very like. Go ahead.
[00:31:53] Speaker A: Yeah, I was going to say it's wild what it can do once it's pushed.
[00:31:58] Speaker B: So this caching layer, this object cache software thing that integrates with this weird database with just two columns, it can act as a middleman between requests that are made to the database. Like requests that are made often to the database or similar requests that are made to the database. And Instead of making PHP, talk to MySQL or MariaDB, which in nature it's an RDS relationship. Relational database. It's going to relational database. What does that mean? That you have one Excel sheet that's connected to another Excel sheet that's connected to another Excel sheet that's connected to another Excel sheet.
Naturally, that's going to take resources, so CPU, right? Maybe RAM too, but CPU. Now, with this middleman here, if you've set it up correctly, and, very important, if your site supports it well enough, you can take those requests, store them in Redis, and then, instead of having PHP talk to the database every time it has a request, especially if it's a similar request over and over, it can just talk to Redis.
So that segue is going to be.
[00:33:04] Speaker A: A lot faster, bypassing the whole bunch of resources it would normally need.
[00:33:10] Speaker B: Yeah, I like that word. That's a great word. So bypassing caching is. Bypassing. Caching is accessing something saved somewhere. Bypassing something else. Yeah.
[00:33:23] Speaker A: I'm trying to think of the.
There's... and I'm typing this in now to see if I get a good hit, but there's the origin of caching, which was not necessarily to cache.
Let me see what they say here: reducing the time it took, to basically improve system performance, from a very different perspective. So they were just looking to solve certain things they saw happening. They weren't thinking, we want to make this faster; they were thinking, how can we make it smarter? How do we bypass certain things we keep doing over and over again? So it was more of a let's not repeat ourselves where we don't have to.
And the concept of caching as we see it now is an entirely different beast.
I kind of like how we just randomly got into something that was solving a particular issue and then now it's a whole concept on all these layers that you just wonderfully explained.
[00:34:29] Speaker B: I'm really happy I just grabbed that light that just fell and it didn't hit the ground.
[00:34:33] Speaker A: I saw that. I saw that.
In this case, if you're listening to us on audio, you should want to check out anywhere around minute 33, 34, because the video is interesting. Yeah, but.
[00:34:49] Speaker B: Yeah, exactly what you said. So caching was probably. I don't really know the history of caching, not going to lie, but it was probably created to solve issues, like everything else in IT is. But by solving issues, and making more things that solve issues, you generate new problems.
[00:35:04] Speaker A: Yeah. Yeah.
[00:35:05] Speaker B: We didn't talk about invalidation at all.
[00:35:08] Speaker A: Yeah.
[00:35:08] Speaker B: What happens when you have a post? Everyone out there probably has. Everyone out there that has a WordPress website probably has a post somewhere.
What happens when you update a post?
So if you have your stored version at some layer of the cache and you've updated your post, you've probably had a visitor, a customer, maybe a co-worker, saying the update wasn't made. I can't see the update.
[00:35:35] Speaker A: I don't see it. My browser doesn't show it.
[00:35:38] Speaker B: Maybe it's silly. Like you misspelled a word, you fixed it, and on the site it's still spelled wrong.
It's still spelled wrong. It's still spelled wrong. Okay.
Cache invalidation is when you say we have to delete that cached version, that stored version of your website, that actual HTML, at whatever layer it's cached. If not all layers, ideally all layers.
That way the new version can be stored. At that point, unless you have some sort of automatic mechanism that both deletes it from the cache and puts the new version in the cache, the request has to go to the back end at least once. At least once. It could be multiple times too, but at least once.
So what happens if you did do cache invalidation? Plugins do this by default: they hook into WordPress, and when you update, that hook tells WP Rocket, for example, which you mentioned before and whose cache is on the actual file system, to delete that single page.
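That hook-driven purge can be sketched in miniature. This is a toy Python model, not the real WordPress API: `add_action`/`do_action` here are simplified stand-ins for WordPress's hook system, and `purge_page` plays the role of the caching plugin.

```python
# Toy model of hook-driven cache invalidation, loosely modeled on how
# WordPress fires an action when a post is saved and a caching plugin
# listens to it. All names here are illustrative.

page_cache = {}   # url -> rendered HTML
_hooks = {}       # action name -> list of callbacks

def add_action(name, callback):
    _hooks.setdefault(name, []).append(callback)

def do_action(name, *args):
    for cb in _hooks.get(name, []):
        cb(*args)

def purge_page(url):
    """The 'plugin': delete just the stored copy of the updated page."""
    page_cache.pop(url, None)

add_action("save_post", purge_page)

# Simulate: a page is cached, then the post behind it is updated.
page_cache["/hello-world/"] = "<html>old, misspelled copy</html>"
do_action("save_post", "/hello-world/")

# The stale copy is gone; the next request must go to the back end once,
# and the fresh render can be stored again.
assert "/hello-world/" not in page_cache
```

The point of the sketch: the purge removes the server-side copy only, which is why the browser layer discussed next can still show the old version.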
Right. And then you have your user hitting refresh. They're like, no, it still hasn't changed.
[00:36:51] Speaker A: What's going on?
[00:36:52] Speaker B: We forgot the. We forgot one of the layers that's out of your control.
[00:36:56] Speaker A: Yeah, the most important one and the one you will hear the most clients complain about.
[00:37:03] Speaker B: Do you want to introduce it?
[00:37:04] Speaker A: Browser cache?
[00:37:06] Speaker B: Yeah.
[00:37:10] Speaker A: I literally had a conversation like this this morning. I don't see the change. Well, if you hard refresh, you will.
[00:37:17] Speaker B: What? Hard refresh?
[00:37:20] Speaker A: Yeah. So if you open up your browser and hit inspect, you get the option, by right-clicking the refresh button, to do a hard refresh. It's hidden. But what it essentially says to the browser is: bypass your own browser cache. And for the record, I think this is mostly happening on Chromium-based browsers, because they seem to hold on to browser cache longer and more persistently. But it basically says bypass that, go back to the server, see if there's an updated version, and only then, only then will you see the change that you made on the server.
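What a hard refresh boils down to can be modeled in a few lines. This is a simplified sketch with hypothetical names, not browser internals: a normal load answers from the local copy, while a hard refresh goes back to the server and revalidates, the way a browser does with ETag and If-None-Match headers.

```python
# Toy model of normal refresh vs. hard refresh.

server_content = {"/": ("v2 of the page", '"etag-v2"')}
browser_cache = {"/": ("v1 of the page", '"etag-v1"')}  # stale local copy

def fetch(path, hard_refresh=False):
    if not hard_refresh and path in browser_cache:
        return browser_cache[path][0]      # served from browser cache: stale!
    # Hard refresh: go back to the server and revalidate.
    body, etag = server_content[path]
    cached = browser_cache.get(path)
    if cached and cached[1] == etag:
        return cached[0]                   # "304 Not Modified": cache was fine
    browser_cache[path] = (body, etag)     # "200 OK": replace the stale copy
    return body
```

A normal `fetch("/")` keeps returning the stale v1 forever; only `fetch("/", hard_refresh=True)` notices the server has moved on.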
[00:38:10] Speaker B: Hopefully, yeah. That, or you use a private browsing session. Yeah, yeah, sometimes that's easier. But then you have to explain to the client why it works in the private purple Firefox browsing session and not in the regular one.
[00:38:30] Speaker A: This is hilarious. I literally had this conversation this morning with a client where he went like, we've changed this. What is going on? And I tested it on my mobile as well. Yeah, your mobile phone also has caching. And not only that, it actually is worse because your mobile provider has caching as well. Most people don't know that, but it's right there.
Anything they can do to speed things up, so certainly style sheets and stuff like that, they will cache. Messy? Oh, it's horrible. It's horrible.
[00:39:05] Speaker B: Okay, then what do we do there to, I don't know, maybe address that issue? There's always this thing called the TTL, the time to live: how long something is kept in the cache until the request is allowed to go to the next layer, the back end.
[00:39:23] Speaker A: Yeah. What do you.
[00:39:28] Speaker B: Don't ask me if I have a default etl. I will not answer.
[00:39:33] Speaker A: No, there isn't. But you can. You can still have a preference though. But no, no.
In some cases it's four hours. It's too long already.
Yeah.
[00:39:43] Speaker B: So what's long and what's short for a TTL? It's so trivial.
[00:39:48] Speaker A: Can you give an example where it makes sense to have a long TTL?
[00:39:53] Speaker B: Okay. When you are. What could be a static site, it could just be, you know, like a simple five page site that people set up.
You want that to be hyper fast and it's not going to change.
I guess it's not necessarily a static site if it's using WordPress because everything's dynamically generated by the backend. But by static site, I mean, in essence, nothing changes. There's no login, there's no customization. It's not going to say, oh, I see that you're visiting from Greece. It's going to just show you a website.
[00:40:26] Speaker A: It's the context. Right. Of the site.
If it's an e-commerce site, or a highly visited site, or a highly commented site, all of these things, it just doesn't make a lot of sense to cache on into infinity, I think. What is the longest? A year, I think. Right.
You can set the cache.
I think so. I'm not.
Potentially, you could. I'm curious, longer. Would that be weird though? Yeah. I cached this somewhere two years ago.
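For concreteness, the durations floating around this exchange can be written out as Cache-Control max-age values. A small Python sketch; the example TTLs are just the ones from the conversation (a four-hour default, the conventional one-year practical ceiling), not recommendations.

```python
# Common TTLs expressed as Cache-Control max-age values (seconds).
HOUR = 3600
TTLS = {
    "news front page":     5 * 60,           # minutes: changes constantly
    "typical page cache":  4 * HOUR,         # the "four hours" default mentioned above
    "fingerprinted asset": 365 * 24 * HOUR,  # one year: 31536000 seconds
}

def cache_control(max_age, immutable=False):
    """Build a Cache-Control response header value."""
    header = f"public, max-age={max_age}"
    return header + ", immutable" if immutable else header

print(cache_control(TTLS["fingerprinted asset"], immutable=True))
# prints: public, max-age=31536000, immutable
```

One year (31,536,000 seconds) is the conventional upper bound in practice; the HTTP caching spec does not forbid longer values, which is how the accidental multi-year cache scenario below can happen.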
[00:40:58] Speaker B: Could you imagine, like, the site being cached on a browser for two years, but the site not being up anymore?
[00:41:08] Speaker A: I can. If set correctly, that sort of stuff happens. Or set incorrectly, however you want to look at the predicament you find yourself in. But you know, I've been doing performance optimization at various different layers and levels and whatnot, and this is one of the examples I've come across, where people unknowingly checked a few settings and the browser cache in total became one year.
How do you fix that? Because everybody who has visited your site has it in their browser cache. So what are you going to do, ask all your visitors to refresh? That's undoable. So how do you solve that one? I mean, that's a headache. But you'll find those, because people do these things. Sounds good, let me just turn that on. Here we go.
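One standard escape hatch, since you cannot reach into visitors' browser caches: change the URL. A browser treats `/style.css` and `/style.a1b2c3d4.css` as different resources, which is why build tools fingerprint asset filenames with a content hash. A small Python sketch; the naming scheme here is just illustrative.

```python
import hashlib

def fingerprint(filename, content):
    """Embed a short content hash in the filename, so new content
    always gets a new URL and sidesteps even a year-long cache entry."""
    digest = hashlib.sha256(content.encode()).hexdigest()[:8]
    stem, dot, ext = filename.rpartition(".")
    return f"{stem}.{digest}.{ext}" if dot else f"{filename}.{digest}"

v1 = fingerprint("style.css", "body { color: black }")
v2 = fingerprint("style.css", "body { color: navy }")
assert v1 != v2   # new content, new URL: every browser fetches it fresh
```

With this in place, an accidental one-year TTL on assets becomes harmless, because stale URLs simply stop being referenced by the page.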
[00:42:06] Speaker B: Well, most plugin providers have like a warning, you know? I know Perfmatters has their advanced settings section that literally says don't mess with this unless you know what you're doing.
[00:42:15] Speaker A: Yep, yep.
[00:42:16] Speaker B: Don't remember the wording, but it's something like that.
[00:42:18] Speaker A: But you and I both know people will do this.
[00:42:23] Speaker B: Yeah. Curiosity is out there. It's a good way to learn things. By screwing up.
[00:42:29] Speaker A: Yeah.
I suppose that's a way to figure out how to deal with it. But how do you solve problems like that? I mean, do you run into these types of issues as well? I mean this is a layer of.
[00:42:45] Speaker B: Cache. That is something that. Serious? No, something that serious has never, like, crossed my radar.
[00:42:55] Speaker A: Good, good.
[00:42:55] Speaker B: Serious? Something that's dumb, something that's silly, I don't know. So I don't do, like, caching plugins either. I don't want to sound mean. I do support certain caching plugins. WP Rocket is great. Perfmatters, love it, highly configurable. WP Rocket, super easy to use for the vast majority of people.
I think I'm in the. Just stick to those two for like personal preference.
I don't like it though, because it kind of has a single point of failure. You're going to be limited by the speed of the disk from your provider. It's always going to be limited by that, because all the files are the cache. The actual cache, which in this case is not in some sort of database like Redis, not in Varnish; it's literally a file or a series of files on the disk. So you're going to be limited by the disk. And what if it's not just one single SSD disk? What if it's some sort of.
[00:44:01] Speaker A: Stuff that replicates from left to right network stuff.
[00:44:04] Speaker B: Right. What if it's some shared file system which is good, which is great. You need it for replication and it shouldn't be applied to everything. I guess maybe not database stuff in every situation.
[00:44:15] Speaker A: So how do you solve that?
[00:44:19] Speaker B: By not like not having to use plugins.
By using system. Oh, by using the system level. By using Varnish for example. Or by using Apache, CGI or nginx.
[00:44:34] Speaker A: I like NGINX. I like what you can do there.
[00:44:38] Speaker B: Most people do. I kind of have a bias here. At my first IT job in Greece, we used Apache for everything.
So, I don't know, I do accept that NGINX is more popular, especially around performance and performance-oriented people. It's a lot more modular too, I'd say. But Apache feels like home.
[00:45:02] Speaker A: I get that. I get that.
[00:45:05] Speaker B: I can't deny, though, that if I wanted to set something up that's a bit more complicated, or something where I want more flexibility, I would probably use NGINX. Apache would be my go-to, though.
[00:45:17] Speaker A: Yeah I like the scenario where you can have NGINX be the web server and you proxy Apache in front for just ease to use.
I love having HT access to my availability and not having to do weird stuff.
[00:45:34] Speaker B: The first time I saw that, I didn't know what was happening. The whole "use NGINX as a reverse proxy" thing, I was so confused.
[00:45:44] Speaker A: It's a concept you need to wrap your head around right?
So let me ask you another question, because we've done quite a bit of a deep dive into the caching layers, the various ways of doing certain versions of caching at specific layers, and what layers to avoid for others.
When you have figured out what your feedback to the client is, based on the results of your testing and your knowledge of what his or her site is doing, what is your next step in terms of implementation? How do you start such a thing?
[00:46:32] Speaker B: So I've identified whatever bottleneck there is. I would try a solution. Let's say it's the most common thing that I encounter, especially at Presidium: an e-commerce store or an LMS site.
They work exactly the same, I guess; they're both stores anyway. They encounter issues with logged-in users. We have a performance audit service going on now, and the one that I'm currently working on is an LMS store. There I go again. An LMS site. And his issue is logged-in users, which makes sense. It's obvious. And I'm actually waiting, so you mentioned Till before, I'm waiting until I've got a call with Till later tonight, maybe in the evening, so I can get my hands on the Pro version of the object cache plugin. Because I don't think we gave enough credit to the plugin before. There are lots of solutions that talk to Redis, object cache plugins, but he built his from scratch and kind of addressed issues that exist and will continue existing, because they get inherited from version to version in other solutions. And that's my next step for this particular client: applying the object cache, configuring it, and seeing if we can do something about the logged-in user lag.
Then there's little things too, right? So for the logged-in users, the uncached requests may include, like, an image too. Stuff like that should be offloaded. There's no reason to serve images from your actual website if you're not using some sort of edge-side caching; in general, with really complicated rules, that might work. Might, for logged-in users.
Might. Might work, yeah. Then okay, you need to offload it. Offloading means use a CDN.
I know you're not a huge fan of CDNs anymore because I know edge side caching solves lots of things. I know, I know. I see your videos.
[00:48:31] Speaker A: Yeah, yeah, yeah. So there's.
Are you calling me out? Huh?
[00:48:36] Speaker B: It makes sense though.
[00:48:38] Speaker A: A CDN has an advantage to a point.
If that CDN is, in most scenarios, an extra layer with an extra level of latency in some way.
And this is why my preference has slowly but very firmly moved into the Cloudflare ecosystem, for the simple reason that we nowadays have R2 available inside Cloudflare, which makes it fully configurable on the CDN layer that is Cloudflare. From my perspective of not wanting to introduce all these extra, different types of, let's call them latencies in the broader sense of the word, it just makes a lot of sense to say: move it over to Cloudflare R2. For those of you who don't know what that is, it's basically Cloudflare's equivalent of Amazon S3 bucket hosting. I'm not a big fan of Amazon, just for the complexity of how stuff is configured and all that, but Cloudflare is for me a logical layer. It's also a layer with no egress fees, meaning the actual cost of storing the images is really low, and you don't pay for what's transferred out.
And that's a.
[00:50:07] Speaker B: There's.
[00:50:08] Speaker A: I did some calculations. I have a few clients that are moving over from S3 and just saving a few hundred bucks a month. That's big, for essentially the same service. And, as I just argued, a tighter integration by using Cloudflare.
[00:50:23] Speaker B: Anyway, moving into R2, leaving from S3. Right.
[00:50:28] Speaker A: Leaving from S3 into R2. And apparently it's happening so often that Cloudflare has an S3 import function. So you can just.
I didn't know that you can make.
[00:50:42] Speaker B: Your life easier. Because, I mean, with all the cloud providers that say you get charged by, I don't know, the usage per second, you can't calculate that. No one can calculate that.
[00:50:55] Speaker A: No, no, no. So I like CDNs, but I like them to be.
I have a strong preference for the CDN behaving in as modern a fashion as possible. And in my opinion, that is certainly how Cloudflare does things.
[00:51:19] Speaker B: That makes sense. I never considered Cloudflare as a CDN provider, because they're just a caching engine. A caching engine with points of presence that is everywhere.
[00:51:28] Speaker A: Yeah, I think in the last year, or year and a half, two years, whatever, they added like 100, 150. In total I think there are 350, close to 400 data centers across the globe. That is insane. That's an insane number. And you know, you are right in the sense that it's just a layer.
But if you were to add all your statics to Cloudflare, which it does automatically as soon as you put that layer in front of it, and then add just a few rules where for certain files, images, PDFs, JavaScript, any other type of assets, you can just say: cache long. This is a good example where in certain cases you can say it's fine to store this image for a year; that's never going to change.
It may not be called inside that post anymore, but the image itself is never going to change. So it's fine.
[00:52:29] Speaker B: Yeah, that makes sense. Especially. Okay, so let's say you've done that.
Another really important step is documenting that you've done something like that. So if, knock on wood, anything ever happens to Remkus, the next guy doesn't have to figure out why he's hitting refresh and the image isn't changing. Yeah, documentation is really important.
But yeah, you could fix that. You could probably cache, like, AJAX requests too, if you're sure enough about it. There are definitely cacheable AJAX requests. You could maybe do the same thing with query strings. Cookie caching.
[00:53:05] Speaker A: Cookie caching, yep.
[00:53:07] Speaker B: Yeah, could be messy as hell, but.
[00:53:10] Speaker A: I wouldn't recommend it. But there are scenarios where it makes sense.
But yeah, you know, that is my.
I think I'm slowly moving into that as my default way of doing CDNs.
[00:53:25] Speaker B: And then there's, like, the issue. I recently had a conversation, in the form of a podcast, with Elena from WooNinjas, that had to do with WooCommerce at scale. Like, what do you do when you scale with WooCommerce? And one of her questions was a very hard one: personalized caching. So, like, caching for logged-in users. What do you do there? Usually nothing.
Just out of the box, usually nothing. But if you've got really good collaboration with your developer, or if the provider, the one with the actual systems, has really good collaboration with the developer, and there's a certain way that all the sites from that developer or agency are built, then you can single out individual parts of the HTML. You could say everything from this tag start to this tag end is always going to be the same. Like maybe. Exactly.
In Varnish we call them ESI, Edge Side Includes. Like cache excludes: people exclude things, like anything that generates some sort of one-time code.
What are those called? Nonces.
[00:54:38] Speaker A: Yeah.
[00:54:39] Speaker B: Yes. Anything that generates that really hard-to-say word, we want to not include in the cache. So those are excludes. Excludes are all over the place; everyone's kind of familiar with excluding things. But including things is a bit different. So let's say you had a static banner on your website for logged-in users. You can cache that. But you need really good collaboration between the person developing the site and either the provider, if you're using something like Varnish, or, if you're doing it yourself on Cloudflare, between you and the developer. Okay, fine.
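The include/exclude idea can be sketched in a few lines. This is a toy Python model, not Varnish ESI and not any WordPress API: stable fragments are cached once and reused, while excluded fragments, here a nonce field, are regenerated on every request.

```python
import secrets

# Toy fragment cache: stable fragments (header, banner) are stored once;
# excluded fragments (anything carrying a one-time token) are always fresh.
fragment_cache = {}

def render_fragment(name):
    if name == "nonce_field":
        # One-time token: must never be served from cache.
        return f'<input type="hidden" value="{secrets.token_hex(4)}">'
    return f"<div>{name} content</div>"

def get_fragment(name, cacheable=True):
    if not cacheable:
        return render_fragment(name)                  # the "exclude": always fresh
    if name not in fragment_cache:
        fragment_cache[name] = render_fragment(name)  # the "include": cached once
    return fragment_cache[name]

def render_page():
    return "\n".join([
        get_fragment("header"),                        # same for everyone
        get_fragment("logged_in_banner"),              # static even for logged-in users
        get_fragment("nonce_field", cacheable=False),  # unique every time
    ])
```

Two renders of the page share the header and banner byte-for-byte, but each gets its own nonce, which is exactly the collaboration problem described: someone has to decide, per fragment, which bucket it belongs in.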
[00:55:14] Speaker A: Yeah. I think there's an extra option available there. Andrei Savchenko, Rarst on the socials, has a plugin called Fragment Caching. It's on GitHub, and it's a good base to use for these types of scenarios: you essentially have a mechanism to determine your fragment caching inside of WordPress. It's quite cool.
[00:55:46] Speaker B: I'm going to look that up. What was his name?
[00:55:49] Speaker A: Andrei Savchenko. I think online he's basically known as Rarst. R-a-r-s-t dot net.
[00:56:00] Speaker B: I'm happy I learned something and have something to look into after this call.
[00:56:03] Speaker A: Oh, cool. Cool.
But yeah.
[00:56:09] Speaker B: Which isn't really big.
I don't think it should be big either. There's way too much room for error. Way too much. Unless everything was standardized.
[00:56:19] Speaker A: Yeah, I was going to say: if you have a standardized environment and you're pretty sure that you can determine which sections you can safely fragment-cache, then please do. Otherwise, just stay the hell away from it, because you're opening a can of worms of things that are just going to mess up.
[00:56:46] Speaker B: And in that case you're losing sales.
[00:56:52] Speaker A: Oh, it will ruin your sales, yeah. I was going to say, I like how we got from you describing what you do for your work to basically progressing into explaining every single caching layer available to us, and how they all hook into determining the scalability of your site. Because most people think, I'll just throw more hardware at it.
[00:57:18] Speaker B: Definitely. I mean, okay, your example of what do I do after I identify bottlenecks: the particular situation that I described, for the client that I'm currently running a performance audit for, has to do with logged-in requests. It's usually, almost always, logged-in requests. If it's not logged-in requests, I would think something's wrong with the actual site.
Yeah, because caching static content or caching non-logged-in stuff is pretty straightforward. A plugin can't really screw that up, barring a certain custom site configuration, or maybe a custom AJAX request that runs after you scroll, so it could bring something back. That could sit there and load forever. Fine.
Especially if something's going wrong. Like, you get a 503, a server error, that you never see, and JavaScript's not going to tell you. You have to actually go into the logs.
[00:58:22] Speaker A: It's going to go, well let me try that again.
[00:58:24] Speaker B: Over and over and over. That's where the front end stuff like the core web vitals come into play.
[00:58:29] Speaker A: Yeah, no, I was going to say, this is what people see as the layer. For anybody following all the steps that we have outlined, this is actually the final layer: the optimization of the output of your front end, the HTML, CSS, JavaScript. That is the last part. That is the part where you go: okay, I've done my smartest queries, I've done my smartest bootstrapping, my loading, all of the things I'm doing. I've got everything optimized in all the various caching layers, and yet I'm still running into issues. No, I'm saying that wrong. You're not running into an issue, but you're running into: okay, now there's one layer left to do, and that is indeed front-end optimization. Which is what I like WP Rocket for, the example that we talked about. I use that plugin mostly for that, not for actually generating HTML; I usually turn that off and solve that differently.
[00:59:38] Speaker B: Okay, that's acceptable too. I'm good with that. Like, that sits well with me, too.
So you've now brought up using a plugin for very specific tasks. You, and every other performance engineer out there, have probably said: this site is slow because it's got a million plugins. Well, it's not necessarily the million plugins. It's that each plugin does a million things too, right? And you don't need that. It could be a million plugins that each do one thing, but it's usually not. It's great to use a plugin like you described now: I'm going to use this plugin for that particular feature. And, this is a rant now, for that plugin to have been made in a sense where, when you use a certain feature, that feature is enabled. Not everything is enabled by default.
[01:00:21] Speaker A: Correct.
[01:00:23] Speaker B: Yeah.
[01:00:24] Speaker A: Filter for this. So you can turn it off.
There's a setting inside it as well. But you want to solve this at code level as soon as it loads.
And you know, if, if, if the hosting environment has a proper solution of integrating whatever's being generated as HTML. But then on, on. On the hosting layer and you get to sync that to cloudflare. That is my preferred solution. Just makes it super easy. I'm not relying on plugins too much because you're right. The amount of plugins. And let. Let me be. Let us be very clear about that. The amount of plugins on your WordPress site is not the problem. It's what they do.
[01:01:02] Speaker B: Not necessarily. Exactly.
[01:01:04] Speaker A: I got clients who got close to 100 plugins or more, some of them. And they still load fine. Everything within sometimes even half a second. It's just. It's just.
It's not the number of plugins.
[01:01:19] Speaker B: Let's be clear, too: a plugin, at the end of the day, for someone that hasn't ever developed one, could just be four lines of code in a file that starts in a certain fashion, and that gets recognized as a plugin. It doesn't necessarily have to be something huge. An SEO plugin is something huge; Yoast is something huge, it does a lot of work. A plugin could just be one little thing, a little snippet, that you don't want to include somewhere else.
[01:01:47] Speaker A: Yeah. And those are plugins as well.
The whole performance optimization discourse that we've seen in the last couple of years is finally changing a little bit. But the vast majority of people still think: add a caching plugin, do some front-end optimization, that's it, we're done. Just keep the number of plugins real low, and we're done. And that's just not the case. I'm glad to hear that you agree, because this is a signal. We need to talk about it.
[01:02:25] Speaker B: You have to accept.
So at Presidium, the support team, the DevOps support team, more often than not, when they do see a site with a large number of plugins, in reality it is because of the large number of plugins, because of what those plugins do. Fine, I will put that asterisk there. But it is a first indicator. When you do a list with the WP-CLI command for the plugins and you see a hundred, the first thing I'm going to think is: okay, he's got plugins doing the same thing redundantly, over and over. Things are activated that don't need to be activated. And our job sometimes is really boring. So you mentioned bottlenecks, how maybe I would identify one. One way is literally running a test, running like a flat-out long test, an endurance test, and just going in and pushing activate, deactivate. Yeah, sometimes that really helps. It really works. It's kind of boring. In time, with experience, you kind of get a feeling as to what plugin might be causing issues. But it does help.
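That activate/deactivate hunt can be made systematic as a binary search: deactivate half the plugins, measure, and recurse into whichever half is slow. A hedged Python sketch, where `is_slow` is a stand-in for a real timed test run (which, as noted, is the boring part), and where we assume a single culprit and reliable measurements.

```python
def find_culprit(plugins, is_slow):
    """Bisect a plugin list to find the one whose presence makes the
    site slow, assuming exactly one culprit and a reliable is_slow()."""
    candidates = list(plugins)
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        # "Activate" only this half and time the site.
        if is_slow(half):
            candidates = half
        else:
            candidates = candidates[len(candidates) // 2 :]
    return candidates[0]

# Toy measurement: the site is slow whenever "bloaty-slider" is active.
plugins = [f"plugin-{i}" for i in range(10)] + ["bloaty-slider"]
culprit = find_culprit(plugins, lambda active: "bloaty-slider" in active)
assert culprit == "bloaty-slider"
```

The payoff is that a hundred plugins need only about seven measurement runs instead of a hundred, although in practice plugin interactions can break the single-culprit assumption.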
We jumped around again. We were at the front-end stuff; all the stuff we talked about before was back-end stuff. And then there's the front-end optimization that has to do with core web vitals. Not necessarily only that, but core web vitals do help you identify things and communicate them to other people. They're just metrics. Everything's just measuring something.
Sabrina, our colleague, also a performance engineer, shared her process for how she does front-end optimization. I think it was at WordCamp Porto.
I think she had a talk. Not sure which WordCamp it was.
[01:04:08] Speaker A: I don't think I've seen it, but I've seen her. Obviously I know her, but I also know that she explained different parts of what she does on Nathan Wrigley's podcast.
[01:04:22] Speaker B: Oh, I haven't seen that. So we both did homework.
[01:04:26] Speaker A: We both do.
But yeah, there's a whole bunch of steps that you need to take. Again, it's not just turning on the plugin. You need to understand what it says, what feedback is given, all of those things.
[01:04:40] Speaker B: Some things are kind of overlooked too. Like, how often do I really find websites using image formats like WebP or AVIF when clients come to me?
[01:04:55] Speaker A: Rarely. How many leave with that format? A lot.
[01:04:59] Speaker B: Yeah, definitely. Plugins are starting to help there too, you know, with automatically generating the WebP version and giving you the option to serve the PNG on browsers that don't support WebP. But even that's really trivial at this point. Which browser that most people have, which version with a large market share, doesn't support WebP or AVIF?
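The fallback those plugins implement boils down to content negotiation: browsers advertise the modern formats they understand in the Accept request header, and the server or CDN serves the best one it has. A simplified Python sketch with made-up file names; real setups do a version of this via rewrite rules or `<picture>` markup rather than code like this.

```python
# Sketch of image format negotiation via the Accept header.
# File names are illustrative; PNG is the universal fallback.
AVAILABLE = {"hero": ["hero.avif", "hero.webp", "hero.png"]}

def pick_image(name, accept_header):
    """Return the first (best) variant the client says it accepts."""
    for candidate in AVAILABLE[name]:
        ext = candidate.rsplit(".", 1)[1]
        if ext == "png" or f"image/{ext}" in accept_header:
            return candidate
    return None

modern = pick_image("hero", "image/avif,image/webp,image/*")
legacy = pick_image("hero", "image/*")
assert modern == "hero.avif" and legacy == "hero.png"
```

Because the variants are listed best-first, a modern browser gets AVIF, an older one WebP, and anything else the PNG, all from the same URL.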
[01:05:22] Speaker A: Yeah, I think AVIF is probably the question mark, but the rest are most certainly supporting WebP.
[01:05:31] Speaker B: Sure.
[01:05:33] Speaker A: Yep.
[01:05:34] Speaker B: Those are small things we can do on the front-end side. So the first thing I would always look at is images. LCP, largest contentful paint: the biggest thing on your page.
The first largest thing on your page that shows up above the fold, so on the visible part of the screen. It's not always going to be an image, actually. It could be text, it could be some loader, it could be, what are those things, a carousel. It could be some carousel, some JavaScript. Well, I guess it is an image in that case. If you're listening, don't do sliders.
[01:06:06] Speaker A: Don't do carousels.
[01:06:07] Speaker B: They're just. It's gross.
[01:06:09] Speaker A: Nobody clicks on them, nobody waits for them and they're making your site slower. So don't.
[01:06:14] Speaker B: Yeah, you could do like, maybe when you scroll down you can have one of those sliders, like with the multiple images that kind of either auto rotate or you have to push like left and right. People do that with their partners, for instance. That's fine.
[01:06:28] Speaker A: Yeah. And then that, that kind of makes sense. But the, the large big header image on top that then slides into a gallery and all those things just stop. People, that's. You're not helping anybody, certainly not your visitors.
[01:06:41] Speaker B: What about video? When you load a page and the whole background is a video.
[01:06:46] Speaker A: Now, there is a smart way of doing that, so you can reduce the impact of that to close to nothing, other than that it's still loading in quite a lot of MBs just to show something changing.
[01:06:58] Speaker B: Yeah, you need to show them something.
[01:07:00] Speaker A: Yeah.
[01:07:01] Speaker B: So you could have the video in the background, but you need to show. What's that called?
I don't remember.
[01:07:08] Speaker A: Not sure what you're referring to.
[01:07:11] Speaker B: When. Crap. I don't remember what it's called.
No, not lazy loading, not lazy loading. But when you have, like, the image, you have YouTube embedded, what's it called, and then rather than the video starting automatically, you have the video.
Right.
[01:07:30] Speaker A: Just show the thumbnail. Upon clicking the thumbnail, it then actually starts to load, instead of loading by default. Yeah, there's a whole bunch of fun stuff there. How much of your work is around this? How much do you need to look at the front end, making the site smarter?
[01:07:48] Speaker B: In my daily life at Presidium, it's mostly front end stuff because most of the backend stuff is already, you know, there. It's fixed because if it wasn't, then the hosting provider would have an issue. Right. And our DevOps support team does cover most of those issues.
And if they don't, okay, I can get involved, but they don't need me. And our SRE team definitely doesn't need me either. But our customers do. And most of the issues our customers have are with the way they set up their website. So that's where most of the issues I encounter are, which is a really weird use case, right? If I was doing freelancing, it would be the other way around, for sure. If I was logging into some server that someone set up somewhere, it would definitely be: install PHP-FPM, use PHP-FPM, put some sort of caching mechanism on there, maybe just Apache's caching. It would definitely be more back-end oriented, and the impact would be huge from the back-end stuff. But in my particular case, working at Presidium, we focus on reliability and performance and security. Everyone says security, everyone focuses on it. It's super standardized. I don't see why people are so proud of themselves for, oh, high security. It's super standardized and it's, like, everywhere. You have to screw something up not to provide something secure these days.
[01:09:00] Speaker A: Usp, it's not a unique selling point.
[01:09:03] Speaker B: Exactly. It's boring. No one wants to hear about it. Everyone cares about it. But I have to realize that it's there. It's usually there by default and you have to break it.
[01:09:12] Speaker A: Yeah.
[01:09:14] Speaker B: Whereas reliability and performance are not there.
[01:09:17] Speaker A: Those are more, way more unique than you actually think.
[01:09:21] Speaker B: Yeah, yeah.
Like true reliability too. Okay, so, small rant: I don't like providers that say they provide high availability arrays of systems in the backend when they use something like AWS's RDS.
Yeah, the Relational Database Service. Which means that it is highly available because Amazon says it is and because Amazon made it to be that way. Okay, you're just reselling someone else's service and you're just accepting the fact that they tell you that it's highly available, and fine. Same goes for using a cloud service like GCP with their database solution. I don't remember what it was called. What acronym did they have for that?
It's some acronym. I don't know. I don't like this. I don't. Because I would much prefer you do it yourself. Like, you know, put actual VMs up there, put actual servers, put them side by side, add more, remove some, make it all custom for your particular client. Especially if it's a resource-heavy website. It's so much more respectable. I mean, come on.
[01:10:25] Speaker A: Yeah, I agree, I agree.
I'm not a big fan of cloud hosting as a concept. I mean, having your own cloud? Yes. Relying on an external cloud...
I think we could dedicate a whole podcast on just that concept of all the things that are going wrong with that.
[01:10:49] Speaker B: Yeah. I mean, how do you troubleshoot? Like if you don't know how it works, how do you.
[01:10:54] Speaker A: Yeah, yeah.
[01:10:56] Speaker B: By opening a support ticket to your cloud provider on behalf of your client? Boy, it's terrible.
[01:11:02] Speaker A: Yeah, that's. That's a horrible one.
[01:11:04] Speaker B: But yeah, my day-to-day business, as far as performance audits go. We have a performance audit service up now at Presidium.
It has to do with front end stuff primarily because that's where our customers have more issues.
And it's almost always something related to LCP. LCP is always bad. Yeah, it's the first thing that you see. It shouldn't be the first thing that I tackle, because there's a series of things you should do first, like FCP, first contentful paint. But usually, at the end of the day, when I bring largest contentful paint down towards first contentful paint, everything is good.
[01:11:43] Speaker A: Yeah, yeah.
[01:11:45] Speaker B: As long as there's not a big gap.
[01:11:48] Speaker A: The large is called the large for a reason.
No, because it has a lot of.
[01:11:53] Speaker B: Guys together and your site's like, poof.
[01:11:56] Speaker A: Poof. I love them. I love them like that.
All right.
[01:12:00] Speaker B: Oh, you want to close up? Anyway, a small point. I never really had the flexibility to say what most performance engineers, or most performance-oriented people, want to say. Delete it, remove it, get rid of it. Like, if it's a huge image, just get rid of it.
[01:12:15] Speaker A: I will actually say that a lot of the time when dealing with front end optimization. That is what I will say, because it just doesn't make sense. You want it there because it's shiny, but it doesn't bring you any value. It's holding you back. It's causing, you know, all kinds of issues that are, on some level, anywhere between annoying and just hurting you.
[01:12:42] Speaker B: No one cares.
[01:12:42] Speaker A: Nobody cares. Just delete it.
[01:12:44] Speaker B: No one cares.
[01:12:45] Speaker A: Bring it back to what it needs to be. Make it fancy if you need to, but that's it. Yeah, exactly. I was going to wrap up the conversation because we've been well over an hour.
[01:12:57] Speaker B: Oh, time flies.
[01:12:59] Speaker A: Yeah, time flies. Time flies indeed. I want to thank you for a wonderful conversation where we really did a deep dive into caching, optimization and smart things to consider when you just want to have a performant website. I think people listening have the opportunity to learn a lot. So thank you for that.
[01:13:23] Speaker B: Thank you for the engaging conversation and for the opportunity too.
[01:13:27] Speaker A: You're most welcome. Thank you so much.