We don't just make sites faster for the sake of making them faster (as much as I would personally be cool with that anyway). We make them faster because, by doing so, we make our sites more effective for businesses and provide a much better experience for users.
So it's always nice to see folks sharing examples of how optimizing performance helped them out.
With Interaction to Next Paint supplanting First Input Delay in March as a Core Web Vital, it's natural that a lot of the focus will be on the search engine impact, but that's not all we should be focusing on.
There's a new case study from Trendyol talking about how a 50% reduction in INP (thanks to scheduler.yield) resulted in a 1% improvement in click-through rates from their listings pages to their product detail pages, per user session.
They won't be alone.
Cliff Crocker wrote a post for the SpeedCurve blog exploring why INP is so much worse on mobile devices than desktop. There's a lot of great info in the post, but one finding in particular stood out: it's not just that INP is worse on mobile, but INP on mobile is much more strongly correlated with business metrics like conversion rates.
Now seems like a great time to start addressing your INP issues, if you haven't started already.
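If you're wondering what reaching for scheduler.yield() actually looks like, the basic pattern is to paint the user-visible part of an interaction first and push everything else behind a yield point. A minimal sketch, assuming a hypothetical add-to-cart handler (none of this is Trendyol's actual code):

```js
// Hypothetical click handler: the element and helper functions are made up
// for illustration.
const button = document.querySelector('#add-to-cart');

button.addEventListener('click', async () => {
  // Do the part of the work the user needs to see right away.
  showAddedToCartState();

  // Feature-detect: scheduler.yield() is still relatively new.
  if (globalThis.scheduler?.yield) {
    // Hand control back to the browser so it can paint the response to the
    // click before the rest of the work runs; the continuation is prioritized
    // ahead of other queued tasks, unlike a setTimeout(0) fallback.
    await scheduler.yield();
  }

  // Non-urgent work (analytics, prefetching, etc.) now runs after that paint.
  sendAnalyticsBeacon();
});
```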
The more you learn about how the internet works, the more you marvel that it ever works at all. There's just a lot of plumbing, a lot of variability, and most of it is completely unseen and never thought of until something goes wrong.
That's part of the reason why I really dig Lexi Mattick's "How Did I Get Here?": it brings a little visibility to what's going on under the hood when you connect to a site, and it does so in a way that's both educational and engaging.
One of the first things I do in a lot of my "Getting Started with Performance" workshops is have folks run a traceroute so they can see all the many hops their packets have to make along the way to their destination. Lexi's site makes that all visible.
When you access her site, she runs a traceroute from her server to your computer and then outputs the results in reverse to help you see the journey your packets take (as she points out, it's not 100% accurate, but for the sake of education, it's close enough).
Then she uses the results of that traceroute and a mix of static and dynamic content to explain what is happening in a very approachable way.
It's a wonderful side project and a great educational resource.
The preconnect resource hint doesn't get quite as much love and attention as the preload hint, which is a shame because it's probably (he says without looking at any data) the more widely applicable of the two. It's also a lot harder to shoot yourself in the foot with preconnect.
But you can still misuse it, as Harry Roberts does a great job of detailing in his latest post. CORS (cross-origin resource sharing if you're being technical, cross-origin ritual sacrifice if you're using my preferred definition, courtesy of Monica Dinculescu) seems to trip folks up on, well, pretty much everything, and preconnect is no different. Harry walks through an example using the BBC's site, but it's one that likely applies to a lot of sites out there.
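The gist of the CORS wrinkle, sketched here with a made-up asset origin rather than the BBC's real markup: a connection opened without the crossorigin attribute can't be reused for requests made in CORS mode (web fonts, fetch/XHR), and vice versa, so a preconnect only pays off when it matches how the resource will actually be requested.

```html
<!-- Warms up a connection for plain, no-CORS requests from this origin
     (images, classic scripts, stylesheets). -->
<link rel="preconnect" href="https://assets.example.com">

<!-- Web fonts are always fetched in anonymous CORS mode, so they can only
     reuse a connection opened with crossorigin; the hint above does nothing
     for them, and this one does nothing for the no-CORS requests. -->
<link rel="preconnect" href="https://assets.example.com" crossorigin>
```

In practice you only want whichever variant matches the requests you'll actually make; shipping both warms up two connections and wastes the one you don't use.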
If you've seen any of my talks or read my posts, you're probably sick of me harping on how important it is to test on slower devices and slower networks. It's a conversation I have often with my consulting clients as well. But if you want to broaden your reach, it's absolutely essential.
Alex Russell has written some absolutely classic posts over the years on what he calls "The Performance Inequality Gap". One of the points he's made, backed with serious data, is that your best bet for a test device is an Android device, a year or two old, around the $200 price point. Anything more powerful or newer is likely hiding issues.
The Wikimedia team basically made the same case in their Perf Calendar post. They've been using a little CPU benchmark they created to help them measure the speed of their users' devices. Plotting that out across different geographies is eye-opening. While the US shows decent CPU speeds, the long tail of users in India (the 95th percentile) roughly matches a Moto G5, which is far from a powerful device.
Even more dramatic is South Africa, where the 75th percentile roughly matches the Moto G5. In other words, to represent the slowest 25% of South African traffic, you'd need an even slower test device.
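The underlying idea is simple enough to experiment with in your own RUM data: time a fixed chunk of busy work and beacon the duration as a rough proxy for device speed. A loose sketch of that concept (not Wikimedia's actual code, and the /rum endpoint is hypothetical):

```js
function reportCpuBenchmark() {
  const start = performance.now();
  let sink = 0;
  // Arbitrary CPU-bound loop; the fixed amount of work is the "benchmark".
  for (let i = 0; i < 1_000_000; i++) {
    sink += Math.sqrt(i);
  }
  const durationMs = performance.now() - start;

  // Slower devices take longer, so larger numbers mean less CPU headroom.
  // `sink` is included only so the loop can't be optimized away.
  navigator.sendBeacon('/rum', JSON.stringify({ cpuBenchmarkMs: durationMs, sink }));
  return durationMs;
}

// Run it off the critical path so the measurement itself doesn't jank the page.
if ('requestIdleCallback' in window) {
  requestIdleCallback(() => reportCpuBenchmark());
} else {
  setTimeout(() => reportCpuBenchmark(), 0);
}
```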
It's a great post, and yet another great reminder that slower devices should be the default for performance testing.