Paul Bakaus: “AMP is not a JS framework, we’re an HTML framework.”
We’re slowly making progress with metrics such as Largest Contentful Paint, but that’s only the tip of the iceberg. Take run-time performance, for example: to accurately measure whether a page is fast to interact with, a bot would have to interact with it at crawl time, probably for quite a while. Not only would that crawl be incredibly expensive; without a cache we couldn’t trust that the page stays fast, so a platform like Google would have to re-validate it multiple times a day.
I haven’t even talked about privacy-preserving pre-rendering, the main reason AMP pages have to be hosted on a proxying cache. Signed HTTP Exchanges is a promising standard for decoupling the identity and serving of documents, and it could be used to bring privacy-aware pre-fetching to non-AMP content, as long as we also solve the metrics issue. And of course, all browsers would have to buy in.
In many ways, I’m hoping this day comes sooner rather than later, because it will reveal just how much work AMP does to ensure great UX. When publishers see how hard it is to implement from scratch with the underlying standards, AMP might suddenly seem more attractive!
Paul, you mentioned problems with non-existent metrics and with crawler performance when measuring them. But there is already the Chrome UX Report, a database of real-user page-speed metrics. Yes, unfortunately it only provides a small number of metrics so far, but couldn’t this database tell us whether a page is fast enough?
The Chrome UX Report could definitely be a building block for this. It doesn’t measure runtime performance yet, which is a gap we’d need to address. Another issue is the developer workflow: if you’re building a new site and want to know whether your content is eligible for pre-rendering, you’d have to wait a few weeks to find out. The AMP validator gives you that guarantee instantly.
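As a concrete sketch of what such an eligibility check could look like, the snippet below inspects a trimmed, hypothetical Chrome UX Report API response and tests whether the 75th-percentile Largest Contentful Paint clears a threshold. The response shape follows the public CrUX API, but the numbers are made up, and the 2500 ms boundary is simply the commonly cited “good” LCP threshold — the `lcp_is_good` helper is an illustration, not an official API:

```python
import json

# A trimmed, made-up CrUX-style response, roughly what the API would
# return for {"origin": "https://example.com",
#             "metrics": ["largest_contentful_paint"]}.
sample_response = json.loads("""
{
  "record": {
    "key": {"origin": "https://example.com"},
    "metrics": {
      "largest_contentful_paint": {
        "percentiles": {"p75": 2100}
      }
    }
  }
}
""")

def lcp_is_good(crux_record: dict, threshold_ms: int = 2500) -> bool:
    """Return True if the 75th-percentile LCP is at or under the threshold."""
    p75 = (crux_record["record"]["metrics"]
           ["largest_contentful_paint"]["percentiles"]["p75"])
    return p75 <= threshold_ms

print(lcp_is_good(sample_response))  # p75 of 2100 ms -> True
```

The catch Paul describes is visible here: this data only exists once enough real users have visited the site, so a brand-new page has nothing to query for weeks.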
Lighthouse is an interesting middle ground but is much less of a guarantee for great performance than real-world reports. All of these tools and metrics have their pros and cons – I believe it’ll end up being a combination of various tools and metrics.
We’ve just recently started to make decent progress with Bento. It’s an incredibly complex project, as it effectively requires a rewrite of AMP, one that makes AMP even more modular than it already is.
Currently, AMP expects to be in total control of the page and its DOM, and the main AMP library (our devs call it the runtime) needs to be loaded along with any components you use.
So if you wanted to use amp-sidebar on a site running, say, Angular, you could already kinda do it today, but it would be inefficient and error-prone: you’d have to load the whole runtime just to show a sidebar, and you’d need to be careful not to mutate the DOM after the sidebar has rendered.
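As a rough illustration of what that looks like today, the fragment below drops a single AMP component into an otherwise non-AMP page. The script URLs follow AMP’s usual CDN pattern but should be checked against the current docs, since component versions change over time:

```html
<!-- Sketch: amp-sidebar on a non-AMP page. The full v0.js runtime
     must load just for this one component. -->
<script async src="https://cdn.ampproject.org/v0.js"></script>
<script async custom-element="amp-sidebar"
        src="https://cdn.ampproject.org/v0/amp-sidebar-0.1.js"></script>

<amp-sidebar id="menu" layout="nodisplay" side="left">
  <ul>
    <li><a href="/">Home</a></li>
    <li><a href="/about">About</a></li>
  </ul>
</amp-sidebar>

<!-- on="tap:menu.open" is AMP's declarative event syntax -->
<button on="tap:menu.open">Open menu</button>
```

Everything inside `<amp-sidebar>` is then owned by the AMP runtime, so a framework like Angular must not re-render that subtree — exactly the inefficiency and fragility the Bento rewrite is meant to remove.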
The Bento rewrite of AMP isolates components much better from each other, introduces a new lifecycle management API and will make components feel much more like traditional web components.
It’s important to note that Bento AMP has one specific purpose: to allow people to gradually upgrade to 100% AMP. Even with a 90% AMP site, developers don’t get the caching and pre-rendering benefits, which would be a shame. So we’re hoping Bento will make it easier to convert over time.
We have plenty of plans to improve AMP, but most can be categorized as turning AMP into an all-around great framework for content websites. We’re convinced that it’s too hard to build great stuff on the web today, and want to do our part to help. Stay tuned!
Thanks for the interview, and we wish you a pleasant stay in Prague!