🚀🔍 Performance and SEO
Over the past week or so, there was a fair amount of hubbub (yes, hubbub) around some of Google’s recent comments about page experience (of which Core Web Vitals is a part) and its role in SEO rankings.
The confusion seems to stem from some announced upcoming changes to the Page Experience Report coupled with some changes they made to their ranking systems guide (page experience used to be specifically called out on that page, and now it’s not).
That led to a brief flurry of folks wondering whether all the work they put in around vitals mattered anymore. Google eventually clarified with a (very long) tweet.
It does not say page experience is somehow "retired" or that people should ignore things like Core Web Vitals or being mobile-friendly. The opposite. It says if you want to be successful with the core ranking systems of Google Search, consider these and other aspects of page experience.
Basically it comes down to page experience being a search “signal” and not a “system” (which makes use of signals). So in the end, it still matters. (You can be forgiven for misreading—the communication was a bit obtuse which I guess is to be expected considering how secretive they tend to be about the actual ranking algorithms.)
But let’s play the what-if game.
What if Google did decide that it no longer cared about web performance in its ranking algorithms? Would it matter?
The biggest hit, I think, would be losing the SEO carrot. It’s undeniable that having a bit of that SEO juice from Google has brought a ton of attention to web performance, and that’s a good thing.
But if they were to stop caring, to stop factoring it in—it shouldn’t change anything.
Google factors it in because it matters to the folks visiting these sites. Performance has been directly tied to better business metrics and better user experience over and over and over. And that's exactly why, SEO juice or not, it should be a priority.
We don’t make sites faster for the search engine bots.
We make them faster because in doing so, we provide a better experience for our users, we expand our reach, and our businesses benefit as a result.
🤷🏼‍♂️ “Contentful” doesn’t equal “contentful”
A few weeks ago, I mentioned Chrome’s change to now factor image entropy (measured in bits per pixel) into its calculations as to whether or not an image is “contentful” and should qualify for Largest Contentful Paint.
It’s just the latest in a series of refinements to the metric.
They’re all solid and necessary changes. If the metric is going to be a fairly reliable assessment of “most important content” across millions of sites, some refinement is necessary to make that a bit more likely to be true on any given page.
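To make the entropy heuristic concrete, here’s a small sketch of the arithmetic involved. Assumptions worth flagging: I’m treating “entropy” as the image’s encoded size in bits divided by its displayed pixel count, and using the 0.05 bits-per-pixel cutoff Chrome announced; the function names (`bitsPerPixel`, `qualifiesForLCP`) are mine, not Chrome’s.

```javascript
// Hedged sketch of Chrome's image-entropy heuristic for LCP eligibility.
// Assumption: "entropy" = encoded bytes * 8 / displayed pixel area,
// compared against a 0.05 bits-per-pixel threshold.
function bitsPerPixel(encodedBytes, displayWidth, displayHeight) {
  return (encodedBytes * 8) / (displayWidth * displayHeight);
}

function qualifiesForLCP(encodedBytes, displayWidth, displayHeight, threshold = 0.05) {
  return bitsPerPixel(encodedBytes, displayWidth, displayHeight) >= threshold;
}

// A 10 KB image stretched across a 1920×1080 area carries very little
// data per pixel — think low-quality background washes or gradients:
console.log(bitsPerPixel(10_000, 1920, 1080).toFixed(4)); // "0.0386"
console.log(qualifiesForLCP(10_000, 1920, 1080));         // false
```

The intuition: a tiny file blown up to fill the viewport is almost certainly decorative, not the page’s “most important content,” so it shouldn’t win the LCP race.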
But it did get me thinking about how wide the gap is becoming between what “contentful” means in the context of Largest Contentful Paint and what it means in the context of First Contentful Paint—when the first piece of content is painted to the screen.
I’ve seen plenty of folks get confused, and understandably so.
I wrote a post going into how those definitions are different, why they’re that way, and some examples where the gap might be a bit confusing.
Read the post
Scott replied after reading the post, reminding me of another contributing factor to that confusion: FCP was originally positioned as the “hey, this is actually content now” metric in the Paint Timing API, but as the gap widens, it’s starting to feel more like just a slightly more particular version of its sibling, First Paint.
I’m not entirely sure what the solution is, but hopefully it gets cleaned up a bit. Confusion around metric definitions can lead to mistrust in the metrics, which can lead to them being ignored altogether. And with FCP being one of the very few “modern” metrics with cross-browser support, I’d rather not see that happen here.
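The sibling relationship is visible right in the Paint Timing API, where First Paint and First Contentful Paint arrive as two entries of the same `paint` type. A minimal sketch (the `summarizePaintEntries` helper is my own; the `PerformanceObserver` wiring in the comment is the standard browser API):

```javascript
// Collapse a list of paint-timing entries into { name: startTime } pairs.
// Works on anything shaped like a PerformancePaintTiming entry.
function summarizePaintEntries(entries) {
  const out = {};
  for (const e of entries) out[e.name] = e.startTime;
  return out;
}

// In a browser, you'd feed it real entries from the Paint Timing API:
// new PerformanceObserver((list) => {
//   console.log(summarizePaintEntries(list.getEntries()));
// }).observe({ type: "paint", buffered: true });

// Stand-in data showing the two entries you'd typically see:
console.log(summarizePaintEntries([
  { name: "first-paint", startTime: 120.4 },
  { name: "first-contentful-paint", startTime: 180.9 },
]));
```

Same observer, same entry type, two names — which is part of why, as the definitions drift, FCP can start to read as “First Paint, but pickier.”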