Solving Performance for WordPress in a Different Way with Ivailo Hristov

Episode 17 | October 31, 2023 | 00:50:20 | Within WordPress

Show Notes

Welcome to another insightful episode of the Within WordPress podcast. In today’s episode, I'm thrilled to have a special guest, Ivailo Hristov, a seasoned developer, CTO and an integral part of the NitroPack team, which has been making waves in the world of website performance optimization.

Let's dive headfirst into the nitty-gritty of WordPress performance. With our focus on viewport considerations, we're discussing how to step up user experiences. Developers and website owners, listen up - we're also highlighting hurdles in achieving speedy, seamless, and intuitive sites.

In our chat, we'll navigate through a sea of tools available for gauging and enhancing website performance. Expect a wealth of insights and effective tips to help you make the right choices in optimizing your WordPress platform.

This episode's spotlight shines on NitroPack – what it is, how it's shaking things up, and Ivailo's critical role in developing it. Ivailo unpacks NitroPack's approach to blasting away common performance optimization roadblocks. He elaborates on how NitroPack takes speed optimization to the next level, making it achievable and efficient for websites of any size.

This episode serves all – from WordPress newcomers to seasoned developers, and anyone in between. It's brimming with expert wisdom, relatable examples, and strategies that you can put into action. Be ready to catapult your website's performance to new heights.

So, tune in, soak up the wisdom, and get ready to transform the speed and efficiency of your WordPress website with the Within WordPress podcast!

Episode Transcript

[00:00:00] Speaker A: Welcome to Within WordPress, the podcast which will allow you to get to know all the people inside the WordPress community. And today with us, we have someone who's built a very interesting tool for WordPress. Ivailo, welcome.

[00:00:26] Speaker B: Hi again.

[00:00:27] Speaker A: And yeah, how about you introduce yourself?

[00:00:31] Speaker B: Yeah, so my name is Ivailo, as you already mentioned, and I am part of the NitroPack team, currently serving as CTO. And within NitroPack, we've been dealing with site speed for quite a long time.

[00:00:53] Speaker A: I like this topic already.

[00:00:56] Speaker B: Sorry, what?

[00:00:57] Speaker A: I like this topic already. So let's start with the most obvious question. NitroPack. It solves a lot of performance problems for sites. How did you get to building this? What triggered it? What is your origin story?

[00:01:15] Speaker B: The origin story is quite a long one. We started back in 2012 or 2013 as a solution for OpenCart. It's an ecommerce platform. Yes, I remember back then, NitroPack, yeah, it was an extension for OpenCart for solving the site speed problem for ecommerce owners. So we actually started by solving primarily problems for ecommerce websites. And over time, as we were building this, first of all, we just noticed that many people are struggling with this. So we wanted to provide a solution, and that's how we entered the site speed optimization area. And the more we dug into it, the more problems we saw that needed to be solved, right? Yeah. So we saw different types of issues that people are struggling with. One could be issues with how fast you're serving requests to your users. But then we also saw a big struggle people have to get the content that they serve to render properly and render fast on the end user devices. As we gathered more clients, we started noticing patterns. We started dealing with different use cases, different devices, different network connections. So this is how we essentially began to realize how important it is that your site performs well in a real world environment. Right.

[00:03:10] Speaker A: I think ecommerce is a good indicator in terms of, yeah, if you're not performing with an ecommerce site, it's actually hurting. Like really hurting. Not hypothetically. It's a real hurt.

[00:03:25] Speaker B: Yeah. And it's very easy to measure it in an ecommerce website. Right. So yeah, this is essentially how we started. And then the more problems we wanted to solve, the more challenges we were facing in terms of limitations from, let's say, the host you're running on, or the platform. OpenCart is a PHP based system, so solutions need to be written in PHP, for example, for different problems. And this just puts a lot of limitations on what you can provide as a solution. As time was passing, we started thinking of how we can solve more of these problems in a way that we are not limited by the environment that your site is running in. So a few years passed and we came up with an idea of how to do it.

[00:04:33] Speaker A: This was still you focusing on OpenCart mostly, or only?

[00:04:37] Speaker B: Yes, we were still focusing mostly on OpenCart, but we wanted to make a solution that is not strictly for OpenCart. We saw that this is a problem that needs to be solved for many websites across different platforms.
So we started thinking of how to do it, and I think it was in the summer of 2018 when we came up with an idea of how we can build this. So that's how NitroPack in its current form was born, let's say, over a lunch discussion. Yeah, and since then, we built it as a cloud based solution. So NitroPack went from just an extension for OpenCart to an API that any site can utilize to be optimized. And to this day, this API is actually publicly documented at docs.nitropack.io. So since then, we built integrations with WordPress and Magento in addition to OpenCart. But we also have solutions for any website, and we do optimize different platforms, but not in the same first party manner. But yeah, the API can be used by any website to be optimized, and it's what we use for our own solutions. Yeah.

[00:06:20] Speaker A: I take it you work with different profiles. Like if a particular CMS connects to your service, can anybody just hook into it or do you need a special integration happening? Because I think I know.

[00:06:35] Speaker B: Anybody can hook into it, actually. Okay, interesting. Yeah. Imagine that you have a browser. Your browser can open any site, right? Yeah. So we want to give a similar experience in terms of optimizing your website. We rely on the core principles of how the web is built and we understand those, and this is how we optimize. So in theory, any website can be optimized. The way you would optimize a website manually, right, the tools and the techniques. The tools might be different, but the techniques that you're going to be using are similar across platforms. Of course, some platforms might have some special tools that, if you utilize them, you can optimize them better. So we try to do this, but in general, even platforms that are unknown to NitroPack can hook into it and be optimized.

[00:07:34] Speaker A: Yeah, but my mind goes into, like, if, and this is probably way too deep into a different direction. But let's say I have built something custom myself, some sort of ecommerce solution. I would need some way to exclude whatever NitroPack is doing on the caching side of things when I'm in my cart. Is it that level of depth that your API is programmable? Like, do not do this on these types of URLs, or when a cookie is there, that sort of stuff?

[00:08:13] Speaker B: Yes, absolutely.

[00:08:14] Speaker A: Oh, interesting. Okay, so you really are multiplatform. So when was it you decided that WordPress was probably a smart way to gain more traction? Because I remember the moment NitroPack became more known, and it was just out of the blue, like, all of a sudden, there you were.

[00:08:35] Speaker B: Yeah.

[00:08:36] Speaker A: What triggered that?

[00:08:39] Speaker B: Yeah. So by the end of 2018, when we had this API ready to be used more broadly, we started thinking, okay, so how do we solve this problem for as many sites as possible? Because, of course, part of our motivation for this is purely because we want to do it and we want to see the benefits of it on as many websites as possible. Right? Yeah. So we also offer the free plan, because this just gives a way for more and more sites to join. Right. So we remove some barriers that way. By the end of 2018, we were thinking of how to distribute this to as many websites as possible. And the goal is, and has always been, to optimize as much of the web as possible. Right.
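The kind of exclusion rule Remkus asks about above is usually expressed as "skip the page cache for these URLs, or when this cookie is set". Below is a minimal sketch of that pattern in WordPress terms; it is not NitroPack's actual configuration API, and the paths and cookie name are made-up examples. The DONOTCACHEPAGE constant is honored by most WordPress page caching plugins.

```php
<?php
/**
 * Minimal sketch: skip page caching for cart/checkout URLs or when a
 * session cookie is present. The URL list and cookie name are examples,
 * not NitroPack's actual configuration API.
 */
add_action( 'template_redirect', function () {
    $uncacheable_paths = array( '/cart', '/checkout', '/my-account' ); // hypothetical paths
    $request_path      = wp_parse_url( $_SERVER['REQUEST_URI'], PHP_URL_PATH );

    $has_session_cookie = isset( $_COOKIE['my_shop_session'] ); // hypothetical cookie
    $matches_path       = in_array( untrailingslashit( (string) $request_path ), $uncacheable_paths, true );

    if ( $has_session_cookie || $matches_path ) {
        // DONOTCACHEPAGE is respected by most WordPress page caching plugins.
        if ( ! defined( 'DONOTCACHEPAGE' ) ) {
            define( 'DONOTCACHEPAGE', true );
        }
        // Also tell HTTP-level caches (CDN, reverse proxy) not to store this response.
        nocache_headers();
    }
} );
```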
[00:09:49] Speaker A: Then WordPress is another.

[00:09:50] Speaker B: We're still thinking of different ways to do it, because you have a lot of places where you can provide solutions. There are differences. Some sites are using a special build system to be built and then distributed to the web. So there are a lot of challenges to this, but we're still thinking, okay, we are imagining this future where you click on a website and it's loading instantly at all times. At least for us, this is a very interesting topic and something that we are passionate about, and we just want to see it happen, just from a personal standpoint. You know how when you want to have something as an engineer, you think, okay, I can build this, I want to build it, let's build it. And then you want this thing, whatever it is, to be used by everybody. By everybody. This is like the great satisfaction.

[00:10:59] Speaker A: Yeah, I get that. I get that. What are the biggest challenges you saw when WordPress and performance optimization became part of your agenda? And I'm including WooCommerce here, because coming from an OpenCart type background, I'm sure you quickly also specifically looked into WooCommerce optimization. What are some hurdles that you saw?

[00:11:28] Speaker B: Oh boy. So, what would be the biggest ones? I'm not sure how to rate them, but surely one has been, let's say, scaling. The way this is built, with an infrastructure like ours, it is definitely a challenge to handle as much optimization as we do in a day or in an hour or whatever amount of time. So that's a very interesting one for us. And it was a very challenging thing. And it still is, of course, because the goal is for these optimizations to happen almost instantly. So that one, and then we also have had challenges with figuring out the most optimal way of, let's say, purging the cache. Right. And refreshing the cache.

[00:12:36] Speaker A: Are these specific to WordPress and WooCommerce? More difficult than on other platforms?

[00:12:42] Speaker B: I think this is global for all platforms. One of the hardest things is to know when you need to refresh your cache, let's say, because you need to have caching on many layers in order for a site to have high performance, especially if you want to make it work on a global scale. It's unrealistic to think that you can just do it with a powerful server and everything is going to work magically. You need to deal with cache. And managing this cache layer is especially tricky because, for example, in WordPress and WooCommerce there is this culture where most of the managed hosting solutions are providing a caching layer already. So whenever you provide a caching solution like NitroPack, and you want to distribute it, you need to be working well with the existing infrastructure for caching, right?

[00:13:53] Speaker A: Yeah, for sure.

[00:13:55] Speaker B: But then people are adding, let's say, Cloudflare or some other caching layer on top of that. So you end up with three or four caching layers, which is definitely challenging, I would say. At least it has been for us, if we want to make use of all of them. Our mentality is, okay, so this infrastructure is there, let's see how we can utilize it to deliver the best experience. Yeah, so that's definitely a challenging one.
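To make the cache purging challenge concrete: inside WordPress, a caching integration typically listens for content changes and invalidates only the affected URLs rather than flushing everything. This is a generic sketch of that pattern, not NitroPack's implementation; my_cache_purge_url() is a hypothetical stand-in for whatever purge call your caching plugin, host, or CDN actually exposes.

```php
<?php
/**
 * Sketch of event-driven cache purging: invalidate only the URLs a
 * content change affects instead of flushing the whole cache.
 */
function my_cache_purge_url( string $url ): void {
    // Placeholder: a real implementation would call the caching layer's
    // purge API (plugin function, host API, or CDN endpoint).
    do_action( 'my_cache_purge_url', $url );
}

add_action( 'transition_post_status', function ( $new_status, $old_status, $post ) {
    // Only react when a post becomes, stops being, or updates while published.
    if ( 'publish' !== $new_status && 'publish' !== $old_status ) {
        return;
    }

    $urls = array(
        get_permalink( $post ),                           // the post itself
        home_url( '/' ),                                  // front page / blog index
        get_post_type_archive_link( $post->post_type ),   // archive listing
    );

    foreach ( array_filter( $urls ) as $url ) {
        my_cache_purge_url( $url );
    }
}, 10, 3 );
```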
[00:14:35] Speaker A: I can imagine, because caching in and of itself is an interesting beast. The amount of layers that you have, even inside the application, right? So WordPress has caching: it has object caching, has transient caching, then has HTML output caching in some way, especially on certain hosts, and some of them do that in the file system, some of them do that in Nginx. It becomes a very large mix. So that's just the application, and then there's whatever's happening in front. Cloudflare is an often used solution, I guess Sucuri as well.

[00:15:19] Speaker B: Yeah.

[00:15:21] Speaker A: And then you join the mix. Yeah, that's a challenging one.

[00:15:27] Speaker B: So.

[00:15:30] Speaker A: I'm very curious how you solve that, because I agree with you, caching when not done correctly is a pain in the ass to solve correctly. Right. I know of hosting companies that have a mediocre, at best, caching solution internally, which is just meant to take some stress off the servers and that's it. It doesn't really solve the actual caching problem. So how do you deal with that? How do you work with caching that is just not set up correctly? Do you reach out? Do you just work around it? What is the goal there?

[00:16:08] Speaker B: Yeah, we typically try to work around it if we can, but whenever we see an instance like this, we reach out and try to work with the hosting provider. Or even if it's, let's say, a custom managed service or whatever, we do reach out and try to make it correct. Fortunately, at least for the most popular hosting solutions, it is fairly well done and it just works out of the box. We are able to connect and to hook into these systems ourselves. But it's definitely an interesting one. And on this topic, while we are on caching, if we go back to site speed in the real world and why it matters and how to achieve it, I want to say that if you want to achieve this speed for your real users, because they're likely going to be distributed in terms of geography, you need to be dealing with this. And I see that many people are either afraid or they have had a bad experience with cache, and when they see something about cache, they try to turn it off. But if you turn it off, you're likely not going to have great results if you go a little bit far away from your server. It might work well for a radius of, let's say, 500 to 1000 km. But beyond that, you typically need to be dealing with a cache layer.

[00:18:12] Speaker A: Yeah, the latency increases across the globe. You can have a perfectly, blazingly fast website, even uncached, in New York. But if somebody from Japan visits, or Australia, they are not going to see a stellar experience. Yeah, I've seen that assumption go wrong many, many times. And they're like, yeah, it's perfectly cached. Yes, very locally, but it's perfectly cached. Wonderful.
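For readers unfamiliar with the application-level layers listed above, here is a small sketch of how WordPress code touches two of them, the object cache and transients; the cache keys and the "expensive" lookups are placeholders, and page-level HTML caching sits on top of these outside of PHP.

```php
<?php
/**
 * Sketch of two of WordPress's built-in application caching layers.
 * The key names and the "expensive" queries below are placeholders.
 */

// 1. Object cache: in-memory per request, or persistent when the host
//    provides Redis/Memcached via an object-cache drop-in.
$popular_ids = wp_cache_get( 'popular_post_ids', 'my_plugin' );
if ( false === $popular_ids ) {
    $popular_ids = get_posts( array(
        'fields'         => 'ids',
        'posts_per_page' => 10,
        'orderby'        => 'comment_count',
    ) );
    wp_cache_set( 'popular_post_ids', $popular_ids, 'my_plugin', 5 * MINUTE_IN_SECONDS );
}

// 2. Transients: persisted in the options table (or the object cache when a
//    persistent backend exists), useful for data that must survive requests.
$rates = get_transient( 'my_plugin_exchange_rates' );
if ( false === $rates ) {
    $rates = array( 'EUR' => 1.0, 'USD' => 1.07 ); // placeholder for a remote API call
    set_transient( 'my_plugin_exchange_rates', $rates, HOUR_IN_SECONDS );
}

// Full-page HTML caching and CDN caching (Nginx, Varnish, Cloudflare, etc.)
// sit on top of these layers and are configured outside of PHP.
```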
Yeah, I've also seen the example of an ecommerce store I was working on, trying to optimize it, and I think it was hosted in Los Angeles or something, at least on the West Coast. And they concluded themselves: customers are complaining the site is going slow when they add something to their cart, and we don't understand, because we have everything cached. The base principle that your ecommerce site works with an uncached version of your site as soon as something is in the cart, because you cannot serve cached stuff, well, maybe some static stuff, but definitely not the contents of the page, that was entirely new to them. I'm like, how did you not know? That's interesting, because you're working under the assumption that it's fast all the time, everywhere, in every scenario, and that's not real. Are you having to fight a lot of incorrect assumptions about how performance and speed and metrics and stuff like that work? Is that something you battle with?

[00:19:56] Speaker B: Depends on the definition of what we battle with. But we definitely do have that in mind and we have to, let's say, educate the public. I think this is a little bit underserved. Exactly these special cases and scenarios are the ones that right now I think need more attention, because there are many articles, there's a lot of information about caching your website and the benefits of it, how to do it. But in the real world, in reality, they typically skip exactly that part, and this is the hardest part. Right? When you need to customize it, when you need to be unique for each visitor, how do you solve that? Because whenever you build your great user experience and you build your great features, especially in an ecommerce site, which work only when you're logged in, or your site is great and you want your visitors to be buying more products, let's say, per single transaction, if you don't have a good strategy for how to do this when you have those visitors logged in, or when they have items in their cart, you're essentially going back to square one and you have nothing. And then you wonder what's causing this. We thought we had this optimization, that optimization, the caching layers and everything, and it just doesn't work. That's simple. Geography, again, is also one of these factors. Even if you have your site cached, even if, let's say, your site is completely static, you don't have the ability to log in, you don't have the ability, let's say, to add items to a cart, again, if you don't pay attention to geography, you might see not great results and then wonder why this is happening. And Google themselves, this is one of the reasons why, if you know, in Search Console, in your field data... I'm not sure whether you can do it in Search Console, but if you check the public CrUX data, there are tools to do that, you can browse by country, so you can inspect how your website is behaving across geographies. And that's one of the reasons why, because this is typically overlooked. But it plays a very big role in how your site behaves in terms of speed across the globe.

[00:22:57] Speaker A: I would imagine that your decision to go from a module, a local-on-the-server type solution, to a cloud solution is also with this in mind, right? So that you solve that problem globally, because you're not just solving it for the server instance itself and that, what did you call it, 1000 km radius?

[00:23:21] Speaker B: Yeah, 500 to 1000 probably.

[00:23:23] Speaker A: So then you're also solving it by going global. I like that. I think that's something similar to what Cloudflare does as well. They have something like 300 data centers, roughly around the same idea. Right. They don't want to solve something locally, they want to solve it for everybody, everywhere.
Yeah, the whole ecommerce thing, but also membership sites, not fully understanding the impact on performance when people are logged in. I offer WordPress performance scans. I have two versions, but both of them have a results template where I have a dedicated section already written just for this, because nine out of ten people have no idea that this actually exists. Like, you invalidate cache the moment you are logged in. So whatever your hardware is doing or not doing, you're going to run into that particular limit. How do you solve that? Because I'm imagining that's something you see as well.

[00:24:24] Speaker B: Right.

[00:24:24] Speaker A: So people come to NitroPack and they go, okay, great, good solution, let me use that. They turn it on, and then they're logged in and they say, yeah, it's just not really what I expected. I assume that's a problem you have to solve. How do you solve that?

[00:24:44] Speaker B: Yes. So a little bit of backstory, because I just want to set the context of what's happening. Sure. In the background, typically when you need to serve a cache for a certain page, you need to have somewhere, in a file or in some sort of storage, the version of this page pre-calculated, let's say pre-rendered or whatever you want to call it, for this specific use case, for this specific page, and a lot of factors go into this. But as soon as you log in, your environment changes, and whatever you have pre-prepared for a certain page becomes invalid; you need to prepare it with the environment of this user. So typically what many solutions do, and how this problem is generally solved, is you start generating new versions of cached pages or rendered pages with this user in mind. But imagine what happens when you have thousands of users. This can grow so much. And if you decide to keep this in memory, you will need just a ton of memory on your server. Or if you're using the file system, you typically need many tens of gigabytes to store these cache files. And they are used just a few times, one, two, three times, which is super inefficient. So the way we're solving this, and I'm not sure whether I can share the details right now, but we've had this solution for quite a long time, it's just not generally available; if you send a request to us, we'll generally enable it for you. But what we are doing is we have a way to reuse the cache that is made for your public pages.

[00:26:54] Speaker A: Okay.

[00:26:54] Speaker B: Yeah. So this makes it a very efficient process in terms of the memory or the disk space that is going to be used. Also the so-called cache hit ratio: the way we solve it keeps the hit ratio very high. I'm not sure whether I can explain how we solve it, but I can definitely tell you that if you're trying to solve this problem on your own, just have in mind that if you are building a cache for every single user individually, this is not a very efficient strategy. Yeah. That doesn't scale, it's not going to be scalable, it's not going to really solve your problem. You can do it, but in terms of results, don't expect this to be great, especially if you have a global audience and you want to have a solution globally. This just doesn't scale and doesn't perform.
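Ivailo doesn't reveal how NitroPack reuses the public cache, so the sketch below shows just one common pattern that follows the same principle: keep serving the same cached page to everyone and fetch the small user-specific pieces (login state, cart count) from an uncached endpoint. The route name and response fields are hypothetical, and the cart call assumes WooCommerce is active.

```php
<?php
/**
 * One common way to keep serving the public page cache to logged-in
 * visitors: a tiny uncached endpoint that returns only the per-user
 * fragments, fetched by a few lines of front-end JavaScript after load.
 * Generic sketch, not NitroPack's actual mechanism; route and field
 * names are made up for illustration.
 */
add_action( 'rest_api_init', function () {
    register_rest_route( 'my-shop/v1', '/fragments', array(
        'methods'             => 'GET',
        'permission_callback' => '__return_true', // public; returns generic data for guests
        'callback'            => function () {
            $data = array(
                'logged_in'  => is_user_logged_in(),
                'display'    => is_user_logged_in() ? wp_get_current_user()->display_name : '',
                // Cart count only if WooCommerce is active (hypothetical usage).
                'cart_count' => ( function_exists( 'WC' ) && WC()->cart )
                    ? WC()->cart->get_cart_contents_count()
                    : 0,
            );
            $response = rest_ensure_response( $data );
            // Make sure no cache layer stores this personalized response.
            $response->header( 'Cache-Control', 'no-store' );
            return $response;
        },
    ) );
} );
// The cached page itself stays identical for everyone; the browser calls
// /wp-json/my-shop/v1/fragments after load and patches the header UI.
```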
[00:27:58] Speaker A: No, because assuming you're solving it on your own server, that might work, but it's going to use a lot of resources, a lot of storage. And still, the problem of locality versus global, it just exacerbates it, it makes it worse. So you're not really solving it.

[00:28:19] Speaker B: Yeah, but there are approaches, some more efficient and some less efficient approaches to this. Having a separate cache for each user is definitely the least efficient one in terms of results and everything, just energy, especially nowadays where we've entered the more energy-conscious era, if we can call it that, because previously people were just throwing more power at everything and solving problems that way. Now we're trying to conserve energy, and if you want to conserve energy, you have to look at this problem from a lot of angles. So wherever you can save something, you should do it.

[00:29:12] Speaker A: Yeah, I think that's a great explanation of one of the issues that you're running into with caching versus global and just a lot of users. This reminds me of one of the issues we ran into with a client, where the problem was not necessarily that there wasn't caching being done, but somebody had written a custom solution in JavaScript, which is definitely not the way to solve this. The size of the cache grew so large that turning on caching actually made the site slower than when you just left it off. It just hogged that much memory, that much disk I/O. It was a developer trying to be smart, and in a way he solved an issue, but just didn't think about the consequences that went along with it. And I see a lot of these solutions, again with the audits that I do; people try to solve stuff just because they read somewhere on the Internet that this is the way to do it. It's very interesting, the kind of stuff that you encounter. So, and this is again something I encounter a lot as well, understanding metrics: what are you actually measuring? I'm imagining this is stuff that you're getting into as well, since you are offering a solution. So there's a before, then you introduce NitroPack, then there's an after. They're going to measure what they think they're measuring. What would you say are good ways of actually knowing what you're measuring? Do you have any tools that you would recommend using, like before and after? Do I just go to Lighthouse, or what do I do?

[00:31:19] Speaker B: Yeah. So I would suggest, first of all, regardless of what you're trying to use, let's say you want to use NitroPack and compare before and after with NitroPack, or just some change that you introduce.

[00:31:35] Speaker A: Any change?

[00:31:37] Speaker B: Any change, yes. So there are two ways to do it. You have the lab way of doing it, which is going to be a reproducible, more or less consistent way of doing it, and then you have your field data. These are two different places where you can look. So lab data and lab results are what you can immediately get information from. This is your reproducible scenario, always running in the same conditions, the same hardware, almost the same network conditions and everything. So it gives you a good way to compare one solution with another, let's say your previous approach and then your current approach. And you can get some metrics. Some tools like Lighthouse give you a score. You might get just raw numbers for, let's say, the Core Web Vitals.
Or let's say you're trying to optimize for, I don't know, Time to First Byte or something that is not a Core Web Vital. Whatever you're trying to compare, you can get an instant result, instant feedback, using a lab instrument. There are a few tools that are popular, and Lighthouse is probably the most popular one. You have it built into your browser because, okay, it's built into Chrome, but a lot of browsers nowadays are Chrome based. So you have it in Chrome. Or you can use public tools like Google PageSpeed Insights, or there's GTmetrix, which also uses Lighthouse. You can go into WebPageTest, which is again a lab based solution, and it gives you a few unique features that other solutions don't give you. You can set up your environment there, you can run custom scripts, you can test, let's say, multiple consecutive requests. So you can test how your site performs on the first page load and then on the second or third page load. And these are all different, right, because the first page load happens primarily once per device, right? Yeah, for a certain amount of time. And then you can really optimize your experience for the second and third page loads. So depending on what you're focusing on, you can use different tools, but definitely check out these big ones. If you're looking for a way to do lab tests, definitely Google PageSpeed Insights, WebPageTest, GTmetrix. Pingdom also has a good tool. These are all for lab data. And it is important to understand that whatever these tools give you is going to be a lab result, which only makes sense if you compare it with another lab result for your use case.

[00:34:51] Speaker A: That's a good point.

[00:34:52] Speaker B: Apples and apples, yeah. They don't represent what you should expect from the real world, right, the performance for your visitors. What this means is you might get, let's say, very good scores or very good results with your lab test, but then see not so great results in your field data, or vice versa: you might get low lab results but have good field data. The second one you can see with big sites. You can go to some of the most popular websites on the web, check their lab scores, and you might see, oh, they have not so great lab scores, but then check their field data and you'll see that they're actually performing very well. So there isn't a clear correlation between the two, and you cannot say, if I have this score, I will have these results in the field data, in the real world.

[00:36:00] Speaker A: I think the difference between just going to Lighthouse and, for instance, GTmetrix, that in itself is already telling. Both are lab, obviously. So if you test with GTmetrix before and GTmetrix after, you're comparing apples with apples, but just comparing GTmetrix results versus what Lighthouse finds, that in and of itself is already too big a difference. Like, wait, I thought I was doing great. I'm not. What's going on here? Yeah.

[00:36:29] Speaker B: So what you should be looking for is improvements or changes between tests, with lab tests using the same tool. And then you can expect that you will have similar changes in your field data. By similar, I mean, if you had a positive change in your lab results, you can expect a positive change in field data, but as for the amount of the change, you should not expect the same thing. So even if you improved, let's say, your LCP in lab tests by half a second, you cannot expect that it's going to be a half-second improvement in your field data. Yeah.
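If you want those before-and-after lab runs to come from the exact same tool every time, the PageSpeed Insights API returns the same Lighthouse lab data over HTTP, which makes comparisons scriptable. A rough sketch follows; the tested URL is a placeholder, and adding a Google API key (not shown) is recommended for automated use.

```php
<?php
/**
 * Sketch: fetch Lighthouse lab metrics for a URL from the PageSpeed
 * Insights API so before/after comparisons always use the same tool.
 * The tested URL is a placeholder; append &key=YOUR_API_KEY for automation.
 */
function fetch_lab_metrics( string $url, string $strategy = 'mobile' ): ?array {
    $endpoint = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed?' . http_build_query( array(
        'url'      => $url,
        'strategy' => $strategy, // 'mobile' or 'desktop'
    ) );

    $body = file_get_contents( $endpoint );
    if ( false === $body ) {
        return null;
    }

    $data   = json_decode( $body, true );
    $audits = $data['lighthouseResult']['audits'] ?? array();

    return array(
        'performance_score' => ( $data['lighthouseResult']['categories']['performance']['score'] ?? 0 ) * 100,
        'lcp_ms'            => $audits['largest-contentful-paint']['numericValue'] ?? null,
        'tbt_ms'            => $audits['total-blocking-time']['numericValue'] ?? null,
        'cls'               => $audits['cumulative-layout-shift']['numericValue'] ?? null,
    );
}

// Example: run the same lab test before and after a change and diff the results.
print_r( fetch_lab_metrics( 'https://example.com/' ) );
```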
[00:37:17] Speaker A: LCP is Largest Contentful Paint, for those not aware.

[00:37:22] Speaker B: Yeah.

[00:37:23] Speaker A: One of the Core Web Vitals metrics, which is what Lighthouse measures.

[00:37:30] Speaker B: Yeah. So that's how you can use the tools for lab testing. If you see a positive change there, you can expect a positive change in your field data. But what you should focus on is having your field data correct. Right. With field data, you also get specific numbers. So it's not just the pass or fail you sometimes see in UIs; you have specific numbers for each of these metrics. So you should look to improve and to have your numbers in their best state, right? Yes. And also, field data has to be collected. You cannot simulate field data; you need time to collect it. So with lab testing you can get instant feedback, so you know whether you're making changes in the right direction. But you cannot expect that you will see the changes in field data right away, from today or from tomorrow.

[00:38:45] Speaker A: No. Do you have any good examples of how to collect field data? And just to fill in my assumption, I'm assuming that with field data you're referring to RUM, real user monitoring. Do you have a favorite there that you use?

[00:39:06] Speaker B: Not particularly, other than what CrUX already captures.

[00:39:11] Speaker A: CrUX is part of Chromium?

[00:39:14] Speaker B: Yeah. Visitors that use Chrome or Chromium will send data to this open data set, which is CrUX; it stands for the Chrome User Experience Report. Developers around the world, browser developers, use this data to understand the state of the web, right? Yeah. To look for patterns, for what's performing well and what is not performing well. So this is one of the biggest data sets and it's definitely an interesting one. It's just important to know that when you're looking at this data, you have ways to query or check your current status there, for either your origin or a certain URL. But this data is what they call a rolling number. This number is calculated over data for the past 28 days. So whatever number you get today, if you make an improvement, you will not see the improvement straight away tomorrow. You will have to wait until these changes really start meaning something in your data. And of course, it depends on the size of the change; if you make a very big change, you might start seeing changes in your field data sooner than if the difference is not that big. But generally speaking, you have to wait a few days, or ideally the full 28 days.

[00:41:11] Speaker A: You need volume, but you need volume of traffic.

[00:41:14] Speaker B: Yeah. But recently Google also released the History API for CrUX, and I'm not sure how we can share a link. Maybe you can.

[00:41:27] Speaker A: I can drop it. I can drop it in the description.

[00:41:29] Speaker B: Yeah, in the description, yeah, I'll send it after the meeting. But there is this tool where you can visually query the CrUX History API, which gives you a very good understanding of how your site is performing over time on a week-by-week basis. You have six months' worth of data there, per week. So this is a quicker way to see changes without having to wait 28 days.
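The CrUX data and History API discussed here are publicly queryable. The sketch below asks the CrUX API for an origin's phone LCP field data; the origin is a placeholder, YOUR_API_KEY must be replaced with a real Google API key, and swapping queryRecord for queryHistoryRecord returns the week-by-week history Ivailo mentions.

```php
<?php
/**
 * Sketch: query the Chrome UX Report (CrUX) API for an origin's field data.
 * Replace the origin and API key with your own; swap queryRecord for
 * queryHistoryRecord to get the week-by-week history mentioned above.
 */
$endpoint = 'https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=YOUR_API_KEY';
$payload  = json_encode( array(
    'origin'     => 'https://example.com', // or use "url" for a single page
    'formFactor' => 'PHONE',
    'metrics'    => array( 'largest_contentful_paint' ),
) );

$response = file_get_contents( $endpoint, false, stream_context_create( array(
    'http' => array(
        'method'  => 'POST',
        'header'  => 'Content-Type: application/json',
        'content' => $payload,
    ),
) ) );

$record = json_decode( (string) $response, true );

// 75th percentile LCP in milliseconds, plus the good/needs-improvement/poor split.
$lcp = $record['record']['metrics']['largest_contentful_paint'] ?? null;
if ( $lcp ) {
    echo 'p75 LCP: ' . $lcp['percentiles']['p75'] . " ms\n";
    print_r( $lcp['histogram'] );
}
```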
[00:42:01] Speaker A: So this is generated by anyone using a Chromium based browser? So that would be Chrome itself, Brave, Edge?

[00:42:14] Speaker B: I'm not sure about Chromium or just Chrome, to be honest.

[00:42:17] Speaker A: Okay. Could be a difference there.

[00:42:18] Speaker B: Yeah. Maybe the open source forks don't have this. I'm not sure, but it's the...

[00:42:25] Speaker A: The data gathered by anyone visiting your site, yes, through the Chrome browser, that's being recorded. And that is then also available from a historical perspective.

[00:42:37] Speaker B: Yes.

[00:42:37] Speaker A: Super interesting.

[00:42:39] Speaker B: Yeah. It's also important to understand that not every visitor will send data, or rather, Google will not collect all of these data points, because from a data standpoint that would be a very large data set. So what they do is collect just a few percent of the traffic. And if your site doesn't have a lot of traffic, you might not actually have field data in the CrUX report. So from the perspective of Google, they don't have field data for your site, and they cannot actually evaluate how it performs in the field. But you also have some third party tools that you can use. You can install something on your website to collect this data for you. Then, first of all, you can see changes in real time, and you can make it so every visitor is sending data. So you have a much better and much more accurate overview of how things are performing. Yeah, so sorry.

[00:44:09] Speaker A: No worries. I've been drinking water here as well for the same reason. Yeah, but I have the option to mute myself.

[00:44:19] Speaker B: Yeah, a lot of talking here, but yeah, generally there are other tools that you can use.

[00:44:27] Speaker A: Do you have a favorite RUM testing tool then?

[00:44:31] Speaker B: My favorite is the one that we have internally.

[00:44:37] Speaker A: That's the smart answer. I get it. Every now and then I share something on Twitter and then somebody asks me, oh, can you share that? And like, yeah, I technically can, but I choose not to because it's an internal tool that I worked on and it's proprietary. I don't have to share it.

[00:44:58] Speaker B: Yeah, I don't even have a way to share it right now because there's no way to access the data. But you can use, let's say, I think New Relic has some solutions for this.

[00:45:12] Speaker A: Yeah, they do.

[00:45:14] Speaker B: So maybe look into that. Or maybe just wait a little. I don't know, if you're watching NitroPack... I don't know.

[00:45:24] Speaker A: Oh, interesting.

[00:45:27] Speaker B: Yeah.

[00:45:30] Speaker A: I think I heard something, but I'm not sure that I heard it.

[00:45:34] Speaker B: Yeah, I'm also not sure.

[00:45:36] Speaker A: Yeah, okay. But the question was, there are ways of measuring. You have clients obviously onboarding to NitroPack that then have questions like, okay, so I didn't see the difference. What am I measuring? How should I be measuring? But the base of the answer is you need to compare apples to apples, and you need to compare live data with... yeah, you need to compare live data with lab data?

[00:46:08] Speaker B: Lab with lab.

[00:46:10] Speaker A: Yeah, you compare lab with lab. So just lab to lab. But also, if you make changes and something is improving in the lab, you need to double check that there is also a positive impact on real live user data.

[00:46:26] Speaker B: Definitely.
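For the "install something on your website" route, one widely used open source option (not necessarily what either speaker uses) is Google's web-vitals script combined with a tiny endpoint that stores the beacons. The sketch below assumes the web-vitals v3 IIFE build is available at the unpkg URL shown; the script handle, route name, and logging sink are illustrative.

```php
<?php
/**
 * Sketch: collect your own field data (RUM) with the open source web-vitals
 * script plus a small REST endpoint that receives the beacons.
 * The CDN URL is assumed to be the web-vitals v3 IIFE build; verify the
 * current URL before relying on it. Handle and route names are made up.
 */
add_action( 'wp_enqueue_scripts', function () {
    wp_enqueue_script(
        'web-vitals',
        'https://unpkg.com/web-vitals@3/dist/web-vitals.iife.js', // assumed CDN path
        array(),
        null,
        true
    );
    $js = "
        function sendMetric(metric) {
            navigator.sendBeacon(
                '/wp-json/my-rum/v1/beacon',
                JSON.stringify({ name: metric.name, value: metric.value, id: metric.id, url: location.pathname })
            );
        }
        webVitals.onLCP(sendMetric);
        webVitals.onCLS(sendMetric);
        webVitals.onINP(sendMetric);
    ";
    wp_add_inline_script( 'web-vitals', $js );
} );

add_action( 'rest_api_init', function () {
    register_rest_route( 'my-rum/v1', '/beacon', array(
        'methods'             => 'POST',
        'permission_callback' => '__return_true',
        'callback'            => function ( WP_REST_Request $request ) {
            // Store beacons wherever suits you; error_log is just a placeholder sink.
            error_log( 'RUM beacon: ' . $request->get_body() );
            return rest_ensure_response( array( 'ok' => true ) );
        },
    ) );
} );
```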
[00:46:27] Speaker A: I know there are ways, I forgot where I saw it. If I remember where I saw it, I'll include it in the show notes. But I remember somebody creating just one HTML page that would score absolutely perfectly in Lighthouse. But when you visited the URL, that site was slow. Like crazy amounts of slow, with annoying stuff happening on the page, just because they knew how to work around whatever Lighthouse was measuring. So it's a good example of never trusting systems. But yes, you need to check for yourselves whether NitroPack has a good solution in there. I'm kind of curious myself, being one of the co-founders of WordCamp Europe and also an organizer, I saw you had a presence at WordCamp Europe in Athens. Is there a specific, I don't want to say goal, but is there an idea you have in mind in terms of what your ideal type of WordPress user looks like? Like, these are the people we target, or types of sites? Is there anything you can share about that?

[00:47:56] Speaker B: Yeah, as I kind of mentioned earlier, our goal is ideally to have every website using NitroPack. So there are no specifics there. We believe the site speed problem is a global one, so the solution should be a global one as well.

[00:48:19] Speaker A: I like that.

[00:48:21] Speaker B: Yeah. So we are just trying to make it work for any website. We do understand that different use cases might need a slightly tuned solution to get the most out of your site and the most speed, so we're trying to provide this as well. Let's say, again, if you have an ecommerce site, you might be occasionally running campaigns, expecting a lot of traffic for a small amount of time, let's say you're running a promo or whatever. We do optimize for this use case, for example. Also, like the example that you gave, if you are logged in, if you have items in your cart. Then you also have popular sites which are blogs or just portfolio websites. So generally we try to make it so that every website can be much faster if they're using NitroPack.

[00:49:34] Speaker A: I like that goal. So yeah, I think that answers at least a lot of my questions in terms of what your aim is, what your solution is, and how you're integrating with the WordPress community. So yeah, I want to thank you for being on the podcast. I appreciated the behind-the-scenes sort of look that you gave in terms of performance optimization. Listeners of the podcast know that's obviously something I am more than a little bit enthusiastic about. So, yeah. Thank you for joining, and best of luck.

[00:50:15] Speaker B: Thank you again.

Other Episodes

Episode 15

October 06, 2023 00:57:13
How Leslie Sim Went from WordPress Newbie to Building Newsletter Glue

In this episode of the Within WordPress podcast, Leslie Sim shares her journey from being new to WordPress to building Newsletter Glue, a plugin...

Episode 12

September 01, 2023 01:06:00
Jason Cohen: On The Growth and Evolution of a WordPress Hosting Platform

In this episode of Within WordPress I talk with Jason Cohen. Jason delves into the fascinating journey of WP Engine, a renowned WordPress hosting...

Episode 19

November 10, 2023 00:57:23
Embracing Community, Niching Down, and the Future of WordPress with Nat Miletic

In this insightful episode of Within WordPress, Remkus engages with Nat Miletic, the founder of Clio Websites, in a riveting discussion that spans the...
