Network

URI vs. URL: What's the Difference?

URI

A URI identifies a resource either by location, by name, or by both. More often than not, most of us use URIs that define the location of a resource. The fact that a URI can identify a resource by both name and location has led to a lot of the confusion, in my opinion. A URI has two specializations known as URL and URN.

URN

A URI that identifies a resource by name in a given namespace, but does not define how the resource may be obtained, is called a URN. You may see URNs used in XML Schema documents to define a namespace, usually with a syntax such as:

<xsd:schema xmlns="http://www.w3.org/2001/XMLSchema"
            xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            targetNamespace="urn:example">

Here the targetNamespace uses a URN. It defines an identifier for the namespace, but it does not define a location.

URL

A URL is a specialization of URI that defines the network location of a specific resource. Unlike a URN, a URL defines how the resource can be obtained. We use URLs every day in the form of http://damnhandy.com and so on. But a URL doesn’t have to be an HTTP URL; it can be ftp://damnhandy.com, smb://damnhandy.com, etc.

The Difference Between Them

So what is the difference between URI and URL? It’s not as clear-cut as I would like, but here’s my stab at it:

A URI is an identifier for some resource, but a URL gives you specific information about how to obtain that resource. A URL is a URI, and as one commenter pointed out, it is now considered incorrect to use URL when describing applications. Generally, if a URL describes both the location and the name of a resource, the term to use is URI. Since that is the case for most of what we encounter every day, URI is the correct term.
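As a rough illustration (my own, not from the original post), the WHATWG URL class available in browsers and Node.js makes the distinction visible: an http URL parses into location parts you can use to fetch the resource, while a urn: URI is still a valid URI but only names it. The path and the ISBN below are made up for the example.

// An http URL identifies a resource by location.
const url = new URL('http://damnhandy.com/photos/1234');
console.log(url.protocol); // "http:"
console.log(url.hostname); // "damnhandy.com"
console.log(url.pathname); // "/photos/1234"

// A urn: URI is still a URI, but it only names the resource;
// there is no host or scheme-specific way to retrieve it.
const urn = new URL('urn:isbn:0451450523');
console.log(urn.protocol); // "urn:"
console.log(urn.hostname); // "" (no network location)
console.log(urn.pathname); // "isbn:0451450523"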

1% or 13 million JavaScript requests per month to BuzzFeed time out

Ian Feather standing on a stage in front of a graphic that reads "13 million JavaScript requests will fail".

More evidence that we don’t fully control our web pages and that a non-zero number of page views don’t execute JavaScript fully or correctly, despite it being enabled.

Says @ianfeather at #AllDayHey — “our monitoring tells us that around 1% of requests for JavaScript on BuzzFeed time out. That’s around 13 million requests per month.” A reminder, if one were needed, that we should design for resilience.

Network based image loading using the Network Information API in Service Worker

Recently, Chromium improved its implementation of navigator.connection by adding three new attributes: effectiveType, downlink and rtt.

Before that, the available attributes were downlinkMax and type, and with those two you couldn’t really tell whether the connection was fast or slow. navigator.connection.type may tell us a user is on WiFi, but that says nothing about the actual connection speed: they may be on a hotspot whose underlying connection is in fact 2G.

With the addition of effectiveType we are finally able to get the real connection type. There are four different types (slow-2g, 2g, 3g and 4g) and they are described this way by the Web Incubator Community Group:

slow-2g: The network is suited for small transfers only such as text-only pages.
2g: The network is suited for transfers of small images.
3g: The network is suited for transfers of large assets such as high resolution images, audio, and SD video.
4g: The network is suited for HD video, real-time video, etc.

Let’s see how we can improve user experience by delivering images based on available connection speed.
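To make that concrete, here is a minimal sketch (not the article’s actual code) of a Service Worker fetch handler that reads navigator.connection.effectiveType and swaps in a low-resolution image on slow-2g and 2g connections. The “-lowres” file-naming convention is an assumption made up for this example.

self.addEventListener('fetch', (event) => {
  const request = event.request;

  // Only intercept image requests; let everything else pass through untouched.
  if (request.destination !== 'image') {
    return;
  }

  // navigator.connection is also exposed inside Service Workers in Chromium.
  // If the API is unavailable, assume a fast connection and serve the normal image.
  const connection = self.navigator.connection;
  const effectiveType = connection ? connection.effectiveType : '4g';

  if (effectiveType === 'slow-2g' || effectiveType === '2g') {
    // Hypothetical naming convention: photo.jpg also exists as photo-lowres.jpg.
    const lowResUrl = request.url.replace(/\.(jpe?g|png|webp)/, '-lowres.$1');
    // Fall back to the original image if the low-res variant is missing.
    event.respondWith(fetch(lowResUrl).catch(() => fetch(request)));
  }
});

If the low-resolution variants are generated at build time, this approach needs no changes to the page markup; the Service Worker transparently decides which asset to serve per request.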

How many people have JavaScript disabled or unavailable?

In 2013, the UK’s Government Digital Service ran an experiment to determine how many of their users weren’t receiving JavaScript. You might assume, as I did years ago, that there were only two possibilities: users either have JavaScript enabled, or have disabled it explicitly via their browser’s settings or an add-on. Unfortunately, those aren’t the only two options. The majority of users who don’t receive JavaScript most likely haven’t chosen to go without it.

They found that 1.1% of users didn’t execute their JavaScript test. Only 0.2% of users had explicitly disabled JavaScript or used a browser that didn’t support it; the remaining 0.9% gave no indication of lacking JavaScript support, yet still didn’t run the test.

‘noscript’ tags will only be followed by browsers that explicitly have JavaScript disabled or don’t support JavaScript at all. So a significant number of people had a JavaScript-enabled browser but still didn’t run the scripts successfully.

It’s hard to know exactly why these browsers didn’t run the JavaScript, but a number of possible reasons are:

  • corporate or local blocking or stripping of JavaScript elements
  • existing JavaScript errors in the browser (ie from browser add-ons, toolbars etc)
  • page being left between requesting the base image and the script/noscript image
  • browsers that pre-load pages they incorrectly predict you will visit
  • network errors, especially on mobile devices
  • and undoubtedly many more I haven’t even thought about…

The takeaway? Do not assume that everyone has JavaScript, or will have it execute reliably even if they haven’t disabled it.

More Proof We Don't Control Our Web Pages

I’ve talked about this before: As web designers, we can’t trust the network. Sure, we have to contend with mobile data “dead zones” and dropped connections as our users move about throughout the day, but there’s a lot more to the network that’s beyond our control.

Here’s a roundup of some of my “favorite” network-issue-related headlines from the last few years:

Some of these issues can be avoided by serving content over HTTPS, but that still won’t enable you to bypass things like firewall blacklists (which led to the jQuery outage on Sky). Your best bet is to design defensively and make sure your users can still accomplish their goals on your site when some resources are missing or markup is altered.

We can’t control what happens to us in this world, we can only control our reaction to it.
