If you’re looking for an example of exactly what not to do in terms of front-end performance, I can’t think of a better one than this. In a bizarre attempt at improving page loading, they threw away many of the performance optimisations browsers give us for free, and ended up achieving the opposite:
I was recently conducting some exploratory work for a potential client when I uncovered a pretty severe flaw in a design decision they’d made: they’d built a responsive image lazyloader in JavaScript which, by design, worked by:
- immediately applying `display: none;` to the `<body>`;
- waiting until the very last of the page’s images had arrived;
- once they’d arrived, removing the `display: none;` and gradually fading the page into visibility.

Not only does this strike me as an unusual design decision (setting out to build a lazyloader and then having it intentionally block rendering), there had also been no defensive strategy to answer the question: what if something goes wrong with image delivery?
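The mechanism described above can be sketched roughly like this. This is a reconstruction based purely on the description, not the client’s actual code, and `whenAllImagesLoaded` is a name I’ve made up:

```javascript
// Reconstruction (hypothetical) of the lazyloader's core logic: resolve
// only once *every* image has loaded. Note there is no 'error' listener
// and no timeout, so a single failed or stalled image means the returned
// promise never settles.
function whenAllImagesLoaded(images) {
  return Promise.all(images.map(img => new Promise(resolve => {
    if (img.complete) return resolve(img); // already loaded from cache
    img.addEventListener('load', () => resolve(img));
  })));
}

// The render-blocking part, as described: hide the whole page until the
// promise above settles, then reveal it.
if (typeof document !== 'undefined') {
  document.body.style.display = 'none';
  whenAllImagesLoaded(Array.from(document.images)).then(() => {
    document.body.style.display = ''; // then gradually fade in
  });
}
```

The failure mode is visible in the code itself: the `.then()` that restores visibility is the *only* path back to a rendered page.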
‘Something wrong’ is exactly what happened. Due to an unfortunate combination of:

- images being completely unoptimised, plus
- a misconfiguration with their image transformation service leading to double downloads for all images,

…they’d managed to place 27.9MB of images onto the Critical Path. Almost 30MB of previously non-render-blocking assets had been turned into render-blocking ones, on purpose, with no escape hatch. Start render time was as high as 27.1s over a cable connection¹.
If you’re going to build an image loader that hides the whole page until all images are ready, you must also ask yourself: what if the images don’t arrive?
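One possible escape hatch, sketched under my own assumptions (the function names and the three-second deadline are made up, not taken from any real implementation): treat image failure as a settled state, and race the whole thing against a deadline so the page is always revealed eventually.

```javascript
// Hypothetical defensive version: a failed image *settles* rather than
// hangs, and a deadline guarantees the page is revealed no matter what.
const REVEAL_DEADLINE_MS = 3000; // made-up value for illustration

function settleImage(img) {
  return new Promise(resolve => {
    if (img.complete) return resolve(img);
    img.addEventListener('load', () => resolve(img));
    img.addEventListener('error', () => resolve(img)); // failure still settles
  });
}

function whenImagesSettleOrTimeout(images, deadlineMs = REVEAL_DEADLINE_MS) {
  const deadline = new Promise(resolve => setTimeout(resolve, deadlineMs));
  // Whichever happens first — every image settled, or the deadline hit —
  // the returned promise resolves and the page can be shown.
  return Promise.race([Promise.all(images.map(settleImage)), deadline]);
}
```

With this shape, the worst case for a broken image pipeline is a slightly earlier, slightly less polished reveal, rather than a blank page.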