Network

Now THAT'S What I Call Service Worker!

Weekly Timber is a client of mine that provides logging services in central Wisconsin. For them, a fast website is vital. Their business is located in Waushara County, where, as in many rural stretches of the United States, network quality and reliability aren’t great.

[…]

Wisconsin has farmland for days, but it also has plenty of forests. When you need a company that cuts logs, Google is probably your first stop. If a given logging company’s website leaves you waiting too long on a crappy network connection, that alone might be enough to get you looking elsewhere.

I initially didn’t believe a Service Worker was necessary for Weekly Timber’s website. After all, if things were plenty fast to start with, why complicate things? On the other hand, knowing that my client services not just Waushara County, but much of central Wisconsin, even a barebones Service Worker could be the kind of progressive enhancement that adds resilience in the places it might be needed most.

The first Service Worker I wrote for my client’s website—which I’ll refer to henceforth as the “standard” Service Worker—used three well-documented caching strategies (sketched in code after this list):

  1. Precache CSS and JavaScript assets for all pages when the Service Worker is installed. (The Service Worker itself is registered when the window’s load event fires.)
  2. Serve static assets out of CacheStorage if available. If a static asset isn’t in CacheStorage, retrieve it from the network, then cache it for future visits.
  3. For HTML assets, hit the network first and place the HTML response into CacheStorage. If the network is unavailable the next time the visitor arrives, serve the cached markup from CacheStorage.
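Here is a minimal sketch of what those three strategies can look like in a Service Worker. The cache names and precache list are assumptions for illustration, not the client’s actual build output:

const PRECACHE = 'precache-v1';
const RUNTIME = 'runtime-v1';
// Hypothetical asset manifest; the real list would come from the site's build.
const PRECACHE_URLS = ['/css/global.css', '/js/global.js'];

// Strategy 1: precache static assets when the Service Worker installs.
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(PRECACHE).then((cache) => cache.addAll(PRECACHE_URLS))
  );
});

self.addEventListener('fetch', (event) => {
  const request = event.request;

  // Strategy 3: network-first for HTML. Cache the fresh response, and fall
  // back to the cached markup if the network is unavailable next time.
  if (request.mode === 'navigate') {
    event.respondWith(
      fetch(request)
        .then((response) => {
          const copy = response.clone();
          caches.open(RUNTIME).then((cache) => cache.put(request, copy));
          return response;
        })
        .catch(() => caches.match(request))
    );
    return;
  }

  // Strategy 2: cache-first for static assets. Serve from CacheStorage when
  // available; otherwise fetch from the network and cache for future visits.
  event.respondWith(
    caches.match(request).then(
      (cached) =>
        cached ||
        fetch(request).then((response) => {
          const copy = response.clone();
          caches.open(RUNTIME).then((cache) => cache.put(request, copy));
          return response;
        })
    )
  );
});

Registration itself would typically happen in the page’s JavaScript on the window’s load event, which is what triggers the install step above for first-time visitors.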

These are neither new nor special strategies, but they provide two benefits:

  • Offline capability, which is handy when network conditions are spotty.
  • A performance boost for loading static assets.

[…]

A better, faster Service Worker

The web loves itself some “innovation,” which is a word we equally love to throw around. To me, true innovation isn’t creating new frameworks or patterns solely for the benefit of developers; it’s whether those inventions benefit the people who end up using whatever it is we slap up on the web. The priority of constituencies is a thing we ought to respect. Users above all else, always.

[…]

There are certainly other challenges, but it’ll be up to you to weigh the user-facing benefits against the development costs. In my opinion, this approach has broad applicability to blogs, marketing websites, news websites, ecommerce sites, and other typical use cases.

All in all, though, it’s akin to the performance improvements and efficiency gains that you’d get from an SPA. The difference is that you’re not replacing time-tested navigation mechanisms and grappling with all the messiness that entails; you’re enhancing them. That’s the part I think is really important to consider in a world where client-side routing is all the rage.

Stimulus Handbook: Designing a Resilient User Interface

We should also expect people to have problems accessing our application from time to time. For example, intermittent network connectivity or CDN availability could prevent some or all of our JavaScript from loading.

It’s tempting to write off support for older browsers as not worth the effort, or to dismiss network issues as temporary glitches that resolve themselves after a refresh. But often it’s trivially easy to build features in a way that’s gracefully resilient to these types of problems.

This resilient approach, commonly known as progressive enhancement, is the practice of delivering web interfaces such that the basic functionality is implemented in HTML and CSS, and tiered upgrades to that base experience are layered on top with CSS and JavaScript, progressively, when their underlying technologies are supported by the browser.
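As a generic illustration of that layering (this is not code from the Stimulus handbook), consider a plain download link that JavaScript upgrades to use the Web Share API only when the browser supports it:

// The markup works on its own as an ordinary link:
// <a href="/report.pdf" data-enhance="share">Download the report</a>

document.querySelectorAll("[data-enhance='share']").forEach((link) => {
  // Only enhance when the underlying capability exists; otherwise the
  // plain link keeps behaving exactly as it always has.
  if (!navigator.share) return;

  link.addEventListener('click', (event) => {
    event.preventDefault();
    navigator.share({ title: document.title, url: link.href }).catch(() => {
      // Sharing was cancelled or failed; the link still works as a normal
      // link on the next click, so there is nothing to clean up.
    });
  });
});

If the script never arrives, or navigator.share doesn’t exist, the visitor still gets a working link; the enhancement simply never appears.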

The Fallacies of Distributed Computing (Applied to Front-End Performance)

In the mid-nineties, Laurence Peter Deutsch and colleagues at Sun Microsystems devised what they called The Fallacies of Distributed Computing: a list of common assumptions that developers working on distributed systems were prone to making, and which would impact the reliability, security, or resilience of their software. Those fallacies are as follows:

  1. The network is reliable.
  2. Latency is zero.
  3. Bandwidth is infinite.
  4. The network is secure.
  5. Topology doesn’t change.
  6. There is one administrator.
  7. Transport cost is zero.
  8. The network is homogeneous.

Reading over the eight fallacies listed out so plainly, they seem so obvious and clear that you’d struggle to believe that anyone would ever fall foul of them: of course we know bandwidth isn’t infinite! The thing is, these fallacies are obvious, but they don’t exist to teach us anything new; they exist to remind us of the fundamentals. Nor are they intended to explain or describe normal conditions; they’re intended to remind us of worst-case scenarios. They’re not saying that the network is always unreliable, or that latency is always high, or that bandwidth is always low: they’re saying that, sometimes, one or all of them will be sub-optimal. We should prepare for that.

Yet time and time again I see developers falling into the same old traps—making assumptions or overly-optimistic predictions about the conditions in which their apps will run. Developers frequently tell me things like most of our users are on wifi, or 4G is pretty much everywhere now, or people only ever visit the site from inside the office anyway. Even if this is statistically true—even if your analytics corroborate the claim—planning only for the best leaves you utterly unprepared for the worst. To paraphrase Jeremy, it’s not about how well it works, but how well it fails.

What if images don't arrive? A tale of a badly designed lazy loader

If you’re looking for an example of exactly what not to do in terms of front-end performance, I can’t think of a better one than this - they threw away a lot of the performance optimizations browsers give us for free in a bizarre attempt at improving page loading, which ended up doing the opposite:

I was recently conducting some exploratory work for a potential client when I hit upon a pretty severe flaw in a design decision they’d made: They’d built a responsive image lazyloader in JavaScript which, by design, worked by:

  1. immediately applying display: none; to the <body>;
  2. waiting until the very last of the page’s images had arrived;
  3. once they’d arrived, removing the display: none; and gradually fading the page into visibility.

Not only does this strike me as an unusual design decision—setting out to build a lazyloader and then having it intentionally block rendering—but there was also no defensive strategy to answer the question: what if something goes wrong with image delivery?

‘Something wrong’ is exactly what happened. Due to an imperfect combination of:

  1. images being completely unoptimised, and
  2. a misconfiguration with their image transformation service leading to double downloads for all images;

…they’d managed to place 27.9MB of images onto the Critical Path. Almost 30MB of previously non-render-blocking assets had just been turned into blocking ones on purpose with no escape hatch. Start render time was as high as 27.1s over a cable connection.

If you’re going to build an image loader that hides the whole page until all images are ready, you must also ask yourself what if the images don’t arrive?
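A hedged sketch of the kind of escape hatch that was missing (not the client’s actual code): if the page is hidden until images settle, guarantee that it is revealed after a deadline, and treat image errors the same as successful loads:

const REVEAL_DEADLINE_MS = 3000; // assumed worst-case budget, not a real figure

function revealPage() {
  document.body.style.removeProperty('display');
}

// Resolve once per image, whether it loads or errors; a failed image must
// never keep the page hidden.
const settled = Array.from(document.images).map(
  (img) =>
    new Promise((resolve) => {
      if (img.complete) return resolve();
      img.addEventListener('load', resolve, { once: true });
      img.addEventListener('error', resolve, { once: true });
    })
);

// Whichever happens first wins: every image settles, or the deadline expires.
Promise.race([
  Promise.all(settled),
  new Promise((resolve) => setTimeout(resolve, REVEAL_DEADLINE_MS)),
]).then(revealPage);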

URI vs. URL: What's the Difference?

URI

A URI identifies a resource either by location, or by name, or both. More often than not, most of us use URIs that define a location for a resource. The fact that a URI can identify a resource by both name and location has led to a lot of the confusion, in my opinion. A URI has two specializations known as URL and URN.

URN

A URI that identifies a resource by name in a given namespace, but does not define how the resource may be obtained, is called a URN. You may see URNs used in XML Schema documents to define a namespace, usually with a syntax such as:

<xsd:schema xmlns="http://www.w3.org/2001/XMLSchema"
            xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            targetNamespace="urn:example">
  <!-- schema contents go here -->
</xsd:schema>

Here the targetNamespace uses a URN. It defines an identifier for the namespace, but it does not define a location.

URL

A URL is a specialization of URI that defines the network location of a specific resource. Unlike a URN, the URL defines how the resource can be obtained. We use URLs every day in the form of http://damnhandy.com, etc. But a URL doesn’t have to be an HTTP URL; it can be ftp://damnhandy.com, smb://damnhandy.com, and so on.
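A quick way to see that “how and where” information is to pull a URL apart with the URL API (the path and query here are made up for illustration):

const url = new URL('http://damnhandy.com/some/page?ref=example#comments');

console.log(url.protocol); // "http:"         (the scheme: how to obtain it)
console.log(url.host);     // "damnhandy.com" (the network location)
console.log(url.pathname); // "/some/page"
console.log(url.search);   // "?ref=example"
console.log(url.hash);     // "#comments"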

The Difference Between Them

So what is the difference between URI and URL? It’s not as clear cut as I would like, but here’s my stab at it:

A URI is an identifier for some resource, but a URL gives you specific information about how to obtain that resource. Every URL is a URI, and as one commenter pointed out, it is now considered incorrect to use URL when describing applications. Generally, if the URL describes both the location and the name of a resource, the term to use is URI. Since this is generally the case for most of what we encounter every day, URI is the correct term.

1% or 13 million JavaScript requests per month to BuzzFeed time out

More evidence that we don’t fully control our web pages and that a non-zero number of page views don’t execute JavaScript fully or correctly, despite it being enabled.

Says @ianfeather at #AllDayHey — “our monitoring tells us that around 1% of requests for JavaScript on BuzzFeed time out. That’s around 13 million requests per month.” A reminder, if one were needed, that we should design for resilience.

Network based image loading using the Network Information API in Service Worker

Recently, Chromium improved their implementation of navigator.connection by adding three new attributes: effectiveType, downlink and rtt.

Before that, the available attributes were downlinkMax and type. With these two attributes you couldn’t really tell whether the connection was fast or slow. navigator.connection.type may tell us a user is on WiFi, but this doesn’t say anything about the real connection speed, as they may be using a hotspot whose connection is in fact 2G.

With the addition of effectiveType we are finally able to get the real connection type. There are four different types (slow-2g, 2g, 3g and 4g) and they are described this way by the Web Incubator Community Group:

slow-2g: The network is suited for small transfers only such as text-only pages.
2g: The network is suited for transfers of small images.
3g: The network is suited for transfers of large assets such as high resolution images, audio, and SD video.
4g: The network is suited for HD video, real-time video, etc.

Let’s see how we can improve user experience by delivering images based on available connection speed.
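The core of the idea looks something like the sketch below. This is a rough illustration rather than the article’s exact code, and the “-lowres” filename convention is an assumption:

self.addEventListener('fetch', (event) => {
  const request = event.request;
  if (request.destination !== 'image') return;

  // navigator.connection is also exposed inside the Service Worker in Chromium.
  const connection = self.navigator.connection;
  const slow =
    connection && ['slow-2g', '2g'].includes(connection.effectiveType);

  // On faster connections, fall through to the browser's default handling.
  if (!slow) return;

  // Naive rewrite to a hypothetical low-resolution variant of the same image.
  const lowResUrl = request.url.replace(/\.(jpe?g|png|webp)/i, '-lowres.$1');

  event.respondWith(
    fetch(lowResUrl)
      .then((response) =>
        // A missing variant (e.g. a 404) or any other failure falls back to
        // the original image request.
        response.ok ? response : fetch(request)
      )
      .catch(() => fetch(request))
  );
});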

How many people have JavaScript disabled or unavailable?

In 2013, the UK’s Government Digital Service ran an experiment to determine how many of their users weren’t receiving JavaScript. You might assume, as I did years ago, that the only two possibilities are that users either have JavaScript enabled or have explicitly disabled it via their browser’s settings or an add-on. Unfortunately, those aren’t the only two options. The majority of users who don’t receive JavaScript likely aren’t missing it by choice.

They found that 1.1% of users didn’t execute their JavaScript test. Of those, only 0.2% either explicitly disabled JavaScript or used a browser that didn’t support it. The remaining 0.9% gave no indication that JavaScript was disabled or unsupported, yet still didn’t execute the test.

‘noscript’ tags will only be honoured by browsers that have JavaScript explicitly disabled or that don’t support JavaScript at all. So a significant number of people had a JavaScript-enabled browser but still didn’t run the scripts successfully.
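A rough sketch of that style of measurement (the beacon paths here are made up): every page requests a baseline image, a noscript image covers browsers with JavaScript disabled or unsupported, and a third image is requested only if JavaScript actually runs. Comparing the three counts in the server logs is what separates “JavaScript off” from “JavaScript enabled but never executed”.

// In the page markup:
//   <img src="/beacons/base.gif" alt="" width="1" height="1">
//   <noscript><img src="/beacons/noscript.gif" alt="" width="1" height="1"></noscript>

// In a script on the same page; this request only ever happens if the
// JavaScript is delivered and executed successfully.
const beacon = new Image();
beacon.src = '/beacons/javascript.gif';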

It’s hard to know exactly why these browsers didn’t run the JavaScript, but a number of possible reasons are:

  • corporate or local blocking or stripping of JavaScript elements
  • existing JavaScript errors in the browser (ie from browser add-ons, toolbars etc)
  • page being left between requesting the base image and the script/noscript image
  • browsers that pre-load pages they incorrectly predict you will visit
  • network errors, especially on mobile devices
  • and undoubtedly many more I haven’t even thought about…

The takeaway? Do not assume that everyone has JavaScript, or will have it execute reliably even if they haven’t disabled it.

More Proof We Don't Control Our Web Pages

I’ve talked about this before: As web designers, we can’t trust the network. Sure, we have to contend with mobile data “dead zones” and dropped connections as our users move about throughout the day, but there’s a lot more to the network that’s beyond our control.

Here’s a roundup of some of my “favorite” network-related headlines from the last few years:

[…]

Some of these issues can be avoided by serving content over HTTPS, but that still won’t enable you to bypass things like firewall blacklists (which led to the jQuery outage on Sky). Your best bet is to design defensively and make sure your users can still accomplish their goals on your site when some resources are missing or markup is altered.

We can’t control what happens to us in this world, we can only control our reaction to it.