JavaScript

Don't go single-page-app too soon, or how GitHub's reimplementation of navigation in JavaScript loses streaming capability

A few weeks ago I was at Heathrow airport getting a bit of work done before a flight, and I noticed something odd about the performance of GitHub: It was quicker to open links in a new window than simply click them.

[…]

When you load a page, the browser takes a network stream and pipes it to the HTML parser, and the HTML parser is piped to the document. This means the page can render progressively as it’s downloading. The page may be 100k, but it can render useful content after only 20k is received.

This is a great, ancient browser feature, but as developers we often engineer it away. Most load-time performance advice boils down to “show them what you got” - don’t hold back, don’t wait until you have everything before showing the user anything.

GitHub cares about performance, so they server-render their pages. However, when you navigate within the same tab, navigation is entirely reimplemented using JavaScript. Something like…

Code language: JavaScript

// …lots of code to reimplement browser navigation…
const response = await fetch('page-data.inc');
const html = await response.text();
document.querySelector('.content').innerHTML = html;
// …loads more code to reimplement browser navigation…

This breaks the rule, as all of page-data.inc is downloaded before anything is done with it. The server-rendered version doesn’t hoard content this way, it streams, making it faster. For GitHub’s client-side render, a lot of JavaScript was written to make this slow.

I’m just using GitHub as an example here - this anti-pattern is used by almost every single-page-app.

Switching content in the page can have some benefits, especially if you have some heavy scripts, as you can update content without re-evaluating all that JS. But can we do that without losing streaming?

[…]

Newline-delimited JSON

A lot of sites deliver their dynamic updates as JSON. Unfortunately JSON isn’t a streaming-friendly format. There are streaming JSON parsers out there, but they aren’t easy to use.

So instead of delivering a chunk of JSON:

Code language: JSON

{
  "Comments": [
    {"author": "Alex", "body": "…"},
    {"author": "Jake", "body": "…"}
  ]
}

…deliver each JSON object on a new line:

Code language: JSON

{"author": "Alex", "body": "…"}
{"author": "Jake", "body": "…"}

This is called “newline-delimited JSON” and there’s a sort-of standard for it. Writing a parser for the above is much simpler. In 2017 we’ll be able to express this as a series of composable transform streams:

Code language: JavaScript

const response = await fetch('comments.ndjson');
const comments = response.body
  // From bytes to text:
  .pipeThrough(new TextDecoderStream())
  // Buffer until newlines:
  .pipeThrough(splitStream('\n'))
  // Parse chunks as JSON:
  .pipeThrough(parseJSON());
 
for await (const comment of comments) {
  // Process each comment and add it to the page:
  // (via whatever template or VDOM you're using)
  addCommentToPage(comment);
}

…where splitStream and parseJSON are reusable transform streams. But in the meantime, for maximum browser compatibility we can hack it on top of XHR.
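For reference, here’s one way those two helpers could be written: a minimal sketch of mine, assuming the standard TransformStream API rather than anything from the original article.

Code language: JavaScript

// Buffers incoming text and emits one chunk per delimiter.
function splitStream(delimiter) {
  let buffer = '';
  return new TransformStream({
    transform(chunk, controller) {
      buffer += chunk;
      const parts = buffer.split(delimiter);
      // Keep the last (possibly incomplete) part in the buffer:
      buffer = parts.pop();
      parts.forEach(part => controller.enqueue(part));
    },
    flush(controller) {
      if (buffer) controller.enqueue(buffer);
    }
  });
}

// Parses each complete line of text as a JSON value.
function parseJSON() {
  return new TransformStream({
    transform(chunk, controller) {
      const trimmed = chunk.trim();
      if (trimmed) controller.enqueue(JSON.parse(trimmed));
    }
  });
}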

Again, I’ve built a little demo where you can compare the two; here are the 3G results:

Versus normal JSON, ND-JSON gets content on screen 1.5 seconds sooner, although it isn’t quite as fast as the iframe solution. It has to wait for a complete JSON object before it can create elements, so you may run into a lack-of-streaming problem if your JSON objects are huge.

Don’t go single-page-app too soon

As I mentioned above, GitHub wrote a lot of code to create this performance problem. Reimplementing navigations on the client is hard, and if you’re changing large parts of the page it might not be worth it.

[…]

[A] simple no-JavaScript browser navigation to a server-rendered page is roughly as fast. The test page is really simple aside from the comments list; your mileage may vary if you have a lot of complex content repeated between pages (basically, I mean horrible ad scripts), but always test! You might be writing a lot of code for very little benefit, or even making it slower.

How Medium does progressive image loading

Recently, I was browsing a post on Medium and I spotted a nice image loading effect. First, load a small blurry image, and then transition to the large image. I found it pretty neat and wanted to dissect how it was done.

[…]

I have run a WebPageTest test against this page on Medium, where you can also see how it loads. And if you want to see it for yourself, open Medium’s post in your browser, disable the cache, and throttle the response so the images take longer to fetch; you’ll see the effect.

Here is what is going on:

  1. Render a div where the image will be displayed. Medium uses a <div/> with a padding-bottom set to a percentage, which corresponds to the aspect ratio of the image. Thus, they prevent reflows while the images are loaded since everything is rendered in its final position. This has also been referred to as intrinsic placeholders.

  2. Load a tiny version of the image. At the moment, they seem to be requesting small JPEG thumbnails with a very low quality (e.g. 20%). The markup for this small image is returned in the initial HTML as an <img/>, so the browser starts fetching it right away.

  3. Once the image is loaded, it is drawn in a <canvas/>. Then, the image data is taken and passed through a custom blur() function. You can see it, a bit scrambled, in the main-base.bundle JS file. This function is similar, though not identical, to StackBlur’s blur function. At the same time, the main image is requested.

  4. Once the main image is loaded, it is shown and the canvas is hidden.

All the transitions are quite smooth, thanks to the CSS animations applied.
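For illustration, here’s a rough sketch of those four steps in JavaScript. This is not Medium’s actual code: the function, its parameters, and the canvas filter standing in for their custom blur() are all made up.

Code language: JavaScript

function progressiveImage(container, { tinySrc, fullSrc, width, height }) {
  // 1. Reserve space with a padding-bottom placeholder so nothing reflows:
  container.style.position = 'relative';
  container.style.paddingBottom = `${(height / width) * 100}%`;

  // 2. Load the tiny, low-quality preview:
  const tiny = new Image();
  tiny.src = tinySrc;
  tiny.onload = () => {
    // 3. Draw the preview into a canvas and blur it:
    const canvas = document.createElement('canvas');
    canvas.width = width;
    canvas.height = height;
    canvas.style.cssText = 'position:absolute;top:0;left:0;width:100%;height:100%';
    const ctx = canvas.getContext('2d');
    ctx.filter = 'blur(20px)'; // stand-in for Medium's custom blur() function
    ctx.drawImage(tiny, 0, 0, width, height);
    container.appendChild(canvas);

    // …and request the main image at the same time:
    const full = new Image();
    full.src = fullSrc;
    full.style.cssText = canvas.style.cssText;
    // 4. Once the main image is loaded, show it and hide the canvas:
    full.onload = () => {
      container.appendChild(full);
      canvas.remove();
    };
  };
}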

Cars with Broken Windshield Wipers

I was stopped at an intersection the other day. It was raining. The road on the other side sloped upwards, so I could see the stopped cars on the other side of the road kind of stadium-seating style. I could see all their windshield wipers going all at the same time, all out-of-sync with each other. Plus a few of them had seemingly kinda broken ones that flapped at awkward times and angles.

What does that have to do with web design and development? Nothing really, other than that I took the scene as inspiration to create something, and it ended up being an interesting hodgepodge of “tricks”.

Fun with Staggered Transitions - Cloud Four

I was prototyping a sliding view transition between nested levels of navigation:

While this sort of transition had worked well for past projects, it didn’t feel quite right for this one. I wanted it to have a little more personality, to feel like it was organically reacting to the item you selected.

I thought it might be cool to stagger the animation, so the selected item began moving first, with its siblings following as if tethered together:
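Here’s one way that kind of stagger might be wired up. This is a minimal sketch of the idea, not Cloud Four’s actual code; the class name and the 30ms delay are made up.

Code language: JavaScript

// Give each sibling a transition-delay proportional to its distance from the
// selected item, so the selection leads and its neighbours follow.
function staggeredSlide(list, selectedIndex, delayPerItem = 30) {
  Array.from(list.children).forEach((item, i) => {
    const distance = Math.abs(i - selectedIndex);
    item.style.transitionDelay = `${distance * delayPerItem}ms`;
  });
  // Adding this class kicks off the transform transition defined in the CSS:
  list.classList.add('is-sliding');
}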

JavaScript loading: Prefer DEFER Over ASYNC

ASYNC and DEFER are similar in that they allow scripts to load without blocking the HTML parser, which means users see page content more quickly. But they do have differences:

  • Scripts loaded with ASYNC are parsed and executed immediately when the resource is done downloading. Whereas DEFER scripts don’t execute until the HTML document is done being parsed (AKA, DOM Interactive or performance.timing.domInteractive).
  • ASYNC scripts may load out-of-order, whereas DEFER scripts are executed in the order in which they appear in markup. (Although there’s a bug that makes DEFER’s execution order questionable in IE ≤ 9.)

Even though ASYNC and DEFER don’t block the HTML parser, they can block rendering. This happens when they’re parsed and executed before rendering is complete and take over the browser main thread. There’s nothing in the spec that says they have to wait until rendering is complete. ASYNC scripts execute immediately once they finish downloading, and DEFER scripts execute after DOM Interactive.
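A quick way to see that timing difference for yourself (the file names are hypothetical; both tags sit in the <head>):

Code language: JavaScript

// a.js, loaded via <script src="a.js" async>:
// runs as soon as its download finishes, possibly while the parser is still working.
console.log('async:', document.readyState); // may log "loading"

// b.js, loaded via <script src="b.js" defer>:
// always waits until parsing is done (DOM Interactive), just before DOMContentLoaded.
console.log('defer:', document.readyState); // logs "interactive"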

[…]

DEFER always causes script execution to happen at the same time as or later than ASYNC. Presumably, scripts are made DEFER or ASYNC because they are less important for the critical content on the page. Therefore, it’s better to use DEFER so that their execution happens outside of the main rendering time.

DEFER scripts can never block synchronous scripts, while ASYNC scripts might depending on how quickly they download. Synchronous scripts are typically made synchronous because they are important for the critical content on the page. Therefore, it’s better to use DEFER so that synchronous scripts are not blocked from executing and their critical work is completed more quickly.

GPU Animation: Doing It Right

Most people now know that modern web browsers use the GPU to render parts of web pages, especially ones with animation. For example, a CSS animation using the transform property looks much smoother than one using the left and top properties. But if you ask, “How do I get smooth animation from the GPU?” in most cases, you’ll hear something like, “Use transform: translateZ(0) or will-change: transform.”

These properties have become something like how we used zoom: 1 for Internet Explorer 6 (if you catch my drift) in terms of preparing animation for the GPU — or compositing, as browser vendors like to call it.

But sometimes animation that is nice and smooth in a simple demo runs very slowly on a real website, introduces visual artefacts or even crashes the browser. Why does this happen? How do we fix it?

[…]

  • Watch out for the number and size of composite layers from the very beginning — especially ones created by implicit compositing. The “Layers” panel in your browser’s development tools is your best friend.
  • Modern browsers make heavy use of compositing not just for animation but to optimize the painting of page elements. For example, position: fixed and the iframe and video elements use compositing.
  • The size of compositing layers is likely to be more important than the number of layers. In some cases, the browser will try to reduce the number of composite layers (see the “Layer Squashing” section of “GPU Accelerated Compositing in Chrome”); this prevents so-called “layer explosion” and reduces memory consumption, especially when layers have large intersections. But sometimes, such optimization has a negative impact, such as when a very large texture consumes much more memory than a few small layers. To bypass this optimization, I add a small, unique translateZ() value to each element, such as translateZ(0.0001px), translateZ(0.0002px), etc. (see the sketch after this list). The browser will determine that the elements lie on different planes in the 3D space and, thus, skip optimization.
  • You can’t just add transform: translateZ(0) or will-change: transform to any random element to virtually improve animation performance or to get rid of visual artifacts. GPU compositing has many drawbacks and tradeoffs to be considered. When not used sparingly, compositing will decrease overall performance at best, and crash browsers at worst.
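The translateZ() trick from the third bullet looks roughly like this (the selector is hypothetical):

Code language: JavaScript

// Give each element a unique, sub-pixel translateZ() so the browser treats them
// as lying on different 3D planes and skips layer squashing:
document.querySelectorAll('.composited-item').forEach((el, i) => {
  el.style.transform = `translateZ(${(i + 1) / 10000}px)`;
});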

Allow me to remind you of the big disclaimer: There is no official specification for GPU compositing, and each browser solves the same problems differently. Some sections of this article may become obsolete in a few months.

Premonish - A library for predicting what element a user will interact with next.
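If you wanted to try the preloading idea, the receiving end could look something like this. To be clear, this isn’t Premonish’s API; it’s just a sketch of what to do once some predictor hands you the element the pointer seems to be heading for.

Code language: JavaScript

// Prefetch a link's destination the first time it looks like the likely target:
function prefetchOnIntent(el) {
  if (el.tagName !== 'A' || el.dataset.prefetched) return;
  const hint = document.createElement('link');
  hint.rel = 'prefetch'; // a hint only; the browser may ignore it (e.g. on data saver)
  hint.href = el.href;
  document.head.appendChild(hint);
  el.dataset.prefetched = 'true';
}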

This is really cool. This library seems to use the pointer’s trajectory to predict what you’re heading towards, and highlights the appropriate cube. I suspect this only works with pointers that can be detected as moving, like mice and styluses that have that capability, but probably does nothing useful with touch in most cases. Either way, give it a shot, it’s eerie. The most obvious use case for this is to pre-load a link if the user starts to head for it, but of course that has the side effect of possibly wasting the user’s data if they’re on a capped connection.