Weekly Timber is a client of mine that provides logging services in central Wisconsin. For them, a fast website is vital. Their business is located in Waushara County, and as in many rural stretches of the United States, network quality and reliability there aren’t great.
Wisconsin has farmland for days, but it also has plenty of forests. When you need a company that cuts logs, Google is probably your first stop. If a given logging company’s website loads too slowly on a crappy network connection, that alone might be enough to send you looking elsewhere.
I initially didn’t believe a Service Worker was necessary for Weekly Timber’s website. After all, if things were plenty fast to start with, why complicate things? On the other hand, knowing that my client services not just Waushara County, but much of central Wisconsin, even a barebones Service Worker could be the kind of progressive enhancement that adds resilience in the places it might be needed most.
The first Service Worker I wrote for my client’s website—which I’ll refer to henceforth as the “standard” Service Worker—used two well-documented caching strategies, sketched in code below:
- Serve static assets out of `CacheStorage` if available. If a static asset isn’t in `CacheStorage`, retrieve it from the network, then cache it for future visits.
- For HTML assets, hit the network first and place the HTML response into `CacheStorage`. If the network is unavailable the next time the visitor arrives, serve the cached markup from `CacheStorage`.
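Here’s a minimal sketch of what those two strategies look like in a Service Worker’s `fetch` handler. The cache name is illustrative, and a production worker would also need install/activate and cache-versioning logic:

```js
// sw.js — a minimal sketch of the two caching strategies above.
const CACHE_NAME = "site-cache-v1";

self.addEventListener("fetch", (event) => {
  const { request } = event;

  if (request.mode === "navigate") {
    // HTML: network first, caching the response; fall back to the
    // cached copy when the network is unavailable.
    event.respondWith(
      fetch(request)
        .then((response) => {
          const copy = response.clone();
          caches.open(CACHE_NAME).then((cache) => cache.put(request, copy));
          return response;
        })
        .catch(() => caches.match(request))
    );
  } else {
    // Static assets: serve from CacheStorage if present; otherwise
    // fetch from the network and cache for future visits.
    event.respondWith(
      caches.match(request).then(
        (cached) =>
          cached ||
          fetch(request).then((response) => {
            const copy = response.clone();
            caches.open(CACHE_NAME).then((cache) => cache.put(request, copy));
            return response;
          })
      )
    );
  }
});
```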
These are neither new nor special strategies, but they provide two benefits:
- Offline capability, which is handy when network conditions are spotty.
- A performance boost for loading static assets.
A better, faster Service Worker
The web loves itself some “innovation,” which is a word we equally love to throw around. To me, true innovation isn’t when we create new frameworks or patterns solely for the benefit of developers, but whether those inventions benefit people who end up using whatever it is we slap up on the web. The priority of constituencies is a thing we ought to respect. Users above all else, always.
There are certainly other challenges, but it’ll be up to you to weigh the user-facing benefits against the development costs. In my opinion, this approach has broad applicability in applications such as blogs, marketing websites, news websites, ecommerce, and other typical use cases.
All in all, though, it’s akin to the performance improvements and efficiency gains that you’d get from an SPA. The difference is that you’re not replacing time-tested navigation mechanisms and grappling with all the messiness that entails, but enhancing them. That’s the part I think is really important to consider in a world where client-side routing is all the rage.
It’s tempting to write off support for older browsers as not worth the effort, or to dismiss network issues as temporary glitches that resolve themselves after a refresh. But often it’s trivially easy to build features in a way that’s gracefully resilient to these types of problems.
In the mid-nineties, Laurence Peter Deutsch and colleagues at Sun Microsystems devised a list of what they called The Fallacies of Distributed Computing: common assumptions that developers working on distributed systems were prone to making, mistakes that would impact the reliability, security, or resilience of their software. Those fallacies are as follows:
- The network is reliable.
- Latency is zero.
- Bandwidth is infinite.
- The network is secure.
- Topology doesn’t change.
- There is one administrator.
- Transport cost is zero.
- The network is homogeneous.
Reading over the eight fallacies listed out so plainly, they seem so obvious and clear that you’d struggle to believe anyone would ever fall foul of them: of course we know bandwidth isn’t infinite! The thing is, these fallacies are obvious, but they don’t exist to teach us anything new; they exist to remind us of the fundamentals. Nor are they intended to explain or describe normal conditions; they’re intended to remind us of worst-case scenarios. They’re not saying that the network is always unreliable, or that latency is always high, or that bandwidth is always low: they’re saying that, sometimes, one or all of them will be sub-optimal. We should prepare for that.
Yet time and time again I see developers falling into the same old traps—making assumptions or overly optimistic predictions about the conditions in which their apps will run. Developers frequently tell me things like *most of our users are on wifi*, or *4G is pretty much everywhere now*, or *people only ever visit the site from inside the office anyway*. Even if this is statistically true—even if your analytics corroborate the claim—planning only for the best leaves you utterly unprepared for the worst. To paraphrase Jeremy, it’s not about how well it works, but how well it fails.
If you’re looking for an example of exactly what not to do in terms of front-end performance, I can’t think of a better one than this: they threw away a lot of the performance optimizations browsers give us for free in a bizarre attempt at improving page loading, which ended up doing the opposite:
- immediately applying `display: none;` to the `<body>`;
- waiting until the very last of the page’s images had arrived;
- once they’d arrived, removing the `display: none;` and gradually fading the page into visibility.
Not only does this strike me as an unusual design decision—setting out to build a lazyloader and then having it intentionally block rendering—there had been no defensive strategy to answer the question: what if something goes wrong with image delivery?
‘Something wrong’ is exactly what happened. Due to an imperfect combination of:
- images being completely unoptimised, plus;
- a misconfiguration with their image transformation service leading to double downloads for all images;
…they’d managed to place 27.9MB of images onto the Critical Path. Almost 30MB of previously non-render-blocking assets had just been turned into render-blocking ones on purpose, with no escape hatch. Start render time was as high as 27.1s over a cable connection.
If you’re going to build an image loader that hides the whole page until all images are ready, you must also ask yourself what if the images don’t arrive?
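For what it’s worth, the missing escape hatch is cheap to build. Here’s a hedged sketch, assuming the page is hidden with a hypothetical `.is-hidden` class (the class name and timeout budget are mine, not theirs): race the image loads against a timer, and reveal the page when either finishes.

```js
// A sketch of the missing escape hatch. The 3s budget is arbitrary.
const REVEAL_AFTER_MS = 3000;

function revealPage() {
  document.body.classList.remove("is-hidden");
}

const imagesSettled = Promise.all(
  Array.from(document.images).map((img) =>
    img.complete
      ? Promise.resolve()
      : new Promise((resolve) => {
          // Resolve on error too: a failed image must not block rendering.
          img.addEventListener("load", resolve, { once: true });
          img.addEventListener("error", resolve, { once: true });
        })
  )
);

const timeout = new Promise((resolve) => setTimeout(resolve, REVEAL_AFTER_MS));

// Whichever settles first wins: all images arrived, or time ran out.
Promise.race([imagesSettled, timeout]).then(revealPage);
```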
A URI identifies a resource either by location, or by name, or both. More often than not, most of us use URIs that define a location for a resource. The fact that a URI can identify a resource by both name and location has led to a lot of confusion, in my opinion. A URI has two specializations known as URL and URN.
A URI that identifies a resource by name in a given namespace, but does not define how the resource may be obtained, is called a URN. You may see URNs used in XML Schema documents to define a namespace, usually with a syntax such as:

```xml
<xsd:schema xmlns="http://www.w3.org/2001/XMLSchema"
            xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            targetNamespace="urn:example">
```

Here, the `targetNamespace` uses a URN: it defines an identifier for the namespace, but it does not define a location.
A URL is a specialization of URI that defines the network location of a specific resource. Unlike a URN, a URL defines how the resource can be obtained. We use URLs every day in the form of http://damnhandy.com, etc. But a URL doesn’t have to be an HTTP URL; it can be ftp://damnhandy.com, smb://damnhandy.com, etc.
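To make the “location plus how to obtain it” point concrete, here’s a quick illustration using the WHATWG `URL` API (the example URL and path are made up):

```js
// Picking apart a URL: the scheme says how to obtain the resource,
// the rest says where it lives.
const url = new URL("https://damnhandy.com/posts/uri-vs-url?ref=feed");

console.log(url.protocol); // "https:" — the retrieval mechanism
console.log(url.host);     // "damnhandy.com" — the network location
console.log(url.pathname); // "/posts/uri-vs-url" — the resource’s path
console.log(url.search);   // "?ref=feed"
```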
The Difference Between Them
So what is the difference between URI and URL? It’s not as clear-cut as I would like, but here’s my stab at it:

A URI is an identifier for some resource, but a URL gives you specific information as to how to obtain that resource. A URL is a URI, and as one commenter pointed out, it is now considered incorrect to use the term URL when describing applications. Generally, if the URL describes both the location and name of a resource, the term to use is URI. Since this is generally the case for what most of us encounter every day, URI is the correct term.
Recently, Chromium improved its implementation of `navigator.connection` by adding three new attributes: `effectiveType`, `downlink`, and `rtt`.

Before that, the available attributes were `downlinkMax` and `type`. With those two attributes you couldn’t really tell if the connection was fast or slow: `navigator.connection.type` may tell us a user is using WiFi, but this says nothing about the real connection speed, as they may be using a hotspot whose connection is in fact 2G.
With the addition of `effectiveType` we are finally able to get the real connection type. There are four different types (`slow-2g`, `2g`, `3g`, and `4g`), and they are described this way by the Web Incubator Community Group:

- `slow-2g`: The network is suited for small transfers only, such as text-only pages.
- `2g`: The network is suited for transfers of small images.
- `3g`: The network is suited for transfers of large assets such as high-resolution images, audio, and SD video.
- `4g`: The network is suited for HD video, real-time video, etc.
Let’s see how we can improve user experience by delivering images based on available connection speed.
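As a sketch (the variant file names, the markup convention, and the mapping from connection type to variant are my own assumptions, not a standard), you might pick an image variant like this:

```js
// A hedged sketch: choose an image variant based on effectiveType.
function pickImageSrc(base) {
  // Fall back to the best variant where the API is unsupported.
  const effectiveType =
    (navigator.connection && navigator.connection.effectiveType) || "4g";

  switch (effectiveType) {
    case "slow-2g":
      return null;               // skip non-essential images entirely
    case "2g":
      return `${base}-low.jpg`;  // small, heavily compressed variant
    case "3g":
      return `${base}-medium.jpg`;
    default:
      return `${base}-high.jpg`; // 4g
  }
}

// Assumes markup like <img data-src-base="/img/hero" alt="…">.
document.querySelectorAll("img[data-src-base]").forEach((img) => {
  const src = pickImageSrc(img.dataset.srcBase);
  if (src) img.src = src;
});
```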
- page being left between requesting the base image and the script/noscript image
- browsers that pre-load pages they incorrectly predict you will visit
- network errors, especially on mobile devices
- and undoubtedly many more I haven’t even thought of…
I’ve talked about this before: As web designers, we can’t trust the network. Sure, we have to contend with mobile data “dead zones” and dropped connections as our users move about throughout the day, but there’s a lot more to the network that’s beyond our control.
Here’s a roundup of some of my “favorite” network-related headlines from the last few years:
- Sky Broadband misclassified the jQuery CDN as a malware site and broke much of the web for their users.
- Comcast admitted to injecting self-promotional advertising into web pages served by their Xfinity routers. (They have also been called out for artificially inflating subscriber bandwidth usage with their own crap.)
- United was recently called out for blocking access to the New York Times on their in-flight Wi-Fi.
- Samsung smart TVs were found to be injecting video ads into video streaming apps.
Some of these issues can be avoided by serving content over HTTPS, but that still won’t enable you to bypass things like firewall blacklists (which led to the jQuery outage on Sky). Your best bet is to design defensively and make sure your users can still accomplish their goals on your site when some resources are missing or markup is altered.
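As one small example of that defensive posture, the classic fallback for a blocked or failed CDN script looks something like this (the local path is illustrative):

```js
// Defensive fallback: if the CDN copy of jQuery was blocked or failed
// to load (as in the Sky incident), load a same-origin copy instead.
if (!window.jQuery) {
  const script = document.createElement("script");
  script.src = "/js/vendor/jquery.min.js";
  document.head.appendChild(script);
}
```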
We can’t control what happens to us in this world; we can only control our reaction to it.