When I tell coworkers of my unabated love for CSS they look at me like I’ve made an unfortunate life decision.
Sometimes I feel that developers, some of the most opinionated human beings on the planet, can only agree on one thing: that CSS is totally the worst.
But today I’m going to blow your mind. Today I’m going to try to convince you that not only is CSS one of the best technologies you use on a day-to-day basis, not only is CSS incredibly well designed, but that you should be thankful—thankful!—each and every time you open a style sheet.
My argument is relatively simple: creating a comprehensive styling mechanism for building complex user interfaces is startlingly hard, and every alternative to CSS is much worse. Like, it’s not even close.
I think this sums up why I’m so impassioned about web development:
I don’t get excited about frameworks or languages—I get excited about potential; about playing my part in building a more inclusive web.
I care about making something that works well for someone who has only ever known the web by way of a five-year-old Android device, because that’s what they have—someone who might feel like they’re being left behind by the web a little more every day. I want to build something better for them.
Everyone who’s ever messed around with dates knows that they are terribly user-hostile — not only for software developers, but also for users. True, users will be able to tell you their date of birth or today’s date without trouble, but ask them to fill them out in a web form and they will encounter problems.
Month first, day first, or year first? And what about slashes, dashes, and other separators? Usually the website engineer has a strong personal preference and enforces it religiously upon unsuspecting users with stern and incomprehensible error messages in a lurid shade of red that are too tiny for anyone over 25 to read.
In theory, there’s a solution to this problem: <input type="date">. It offers a special interface for picking dates, and it enforces a standard format for the value that’s sent to the server. Better still, the mobile browsers support it.
Here’s a test page for <input type="date"> and a few related types. Remember that some don’t work in some browsers.
I think it’s time that we trust browser vendors a bit more. The days of useless features for the sake of having a longer feature list are long gone. Nowadays, browser vendors try to add features that are actually useful for users, and are actually implemented by web developers. If a browser says it supports <input type="date">, you should trust it to deliver a decent experience to its users. If it says it does not, and only in that case, you should use a custom widget instead.
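That decision (native control or custom widget) is usually made with a small feature detect. A common sketch, with a helper name of my own choosing, assuming a DOM document is available:

```javascript
// Detect native support for <input type="date">. Browsers that don't
// support it silently fall back to type="text" and keep whatever bogus
// value we assign, which is exactly what this checks for.
function supportsDateInput(doc) {
  const input = doc.createElement('input');
  input.setAttribute('type', 'date');
  input.value = 'not-a-date'; // a real date input sanitizes this to ""
  return input.type === 'date' && input.value !== 'not-a-date';
}

// Usage sketch: only bolt on a custom picker when the native control
// is missing.
// if (!supportsDateInput(document)) { /* initialize custom widget */ }
```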
Exposing long navigation menus on small screens is tricky. Hamburger menus are everywhere, although often discouraged. Displaying “just enough” navigation at every breakpoint can feel like an impossible task. This is especially true for template developers needing to accommodate an arbitrary number of menu items.
The Priority+ design pattern seeks to display as many items as possible given an arbitrary screen width, while making the rest accessible via a single click. I’ll go over the implementation I worked on at Goshen College that includes both dropdown menus and horizontal scrolling, which I’ve yet to find in the wild:
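The core of the pattern can be sketched as a pure calculation: given the measured widths of the menu items, how many fit before the remainder must collapse into the overflow dropdown? (The names and the exact overflow rule here are my own illustration, not the Goshen College code.)

```javascript
// Given item widths in pixels and the space available, return how many
// items stay visible. Room is reserved for the "More" button whenever
// at least one item has to be hidden.
function visibleItemCount(itemWidths, containerWidth, moreButtonWidth) {
  let used = 0;
  for (let i = 0; i < itemWidths.length; i++) {
    used += itemWidths[i];
    // Reserve space for the "More" button unless this is the last item.
    const reserve = i < itemWidths.length - 1 ? moreButtonWidth : 0;
    if (used + reserve > containerWidth) return i;
  }
  return itemWidths.length; // everything fits, no dropdown needed
}
```

On resize, a real implementation would re-measure, call something like this, and move the trailing items into the dropdown.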
On mobile, the cards stack on top of each other.
Which has also been confusing for some folks. The thinking is: cards are always excerpts. The stacking (and thus hiding of some text) says: click/tap this to keep reading it.
I thought this was an excellent talk on the hard questions we should be asking ourselves as developers. Why do most people use closed, proprietary systems and devices, if the open web is so wonderful? Even as developers, we still use them ourselves, and depend on them. How can we be more empathetic to what the average user needs and wants? How can we lock open the web, so the future isn’t entirely dependent on huge corporations and services, which is where we seem to be heading?
Whatever you may think, it currently isn’t possible to reliably detect whether or not the current device has a touchscreen, from within the browser.
And it may be a long time before you can.
Let me explain why…
The browser environment is a sandbox. Your app’s code can only get at things the browser wants you to, in order to limit the damage a malicious website can cause.
Historically, two browser features have been used for “touchscreen detection”: media queries and touch APIs. But these are far from foolproof.
Walk with me.
Device width media queries
Mobiles have small screens and mobiles have touchscreens, so small screen equals touchscreen, right?
So, so very wrong. Large tablets and touchscreen laptops/desktops have clearly proven this wrong. Plus thousands of older mobile handset models had small non-touch screens. Unfortunately, sites applying the mantra “If it’s a small screen, it’s touch; if it’s a big screen, it’s mouse-driven” are now everywhere, leaving tablet and hybrid users with a rubbish experience.
If the browser supports events like touchstart (or other events in the Touch Events API spec) it must be a touchscreen device, right?
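The kind of detect being described looks roughly like this (wrapped in a function of my own naming, with win standing in for the global window):

```javascript
// The classic "touch API present?" check. It answers "does this
// environment expose the touchstart handler?", which is not the same
// question as "does this device have a touchscreen?".
function naiveTouchDetect(win) {
  return 'ontouchstart' in win;
}

// naiveTouchDetect(window) returns true on plenty of non-touch setups,
// as the Chrome 24 story below shows.
```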
Well, maybe. The problem is, no one ever said that a non-touch device can’t implement touch APIs, or at least have the event handlers in the DOM.
Chrome 24.0 shipped with these APIs always-on, so that they could start supporting touchscreens without having separate “touch” and “non-touch” builds. But loads of developers had already used detects like the example above, so it broke a lot of sites. The Chrome team “fixed” this with an update, which only enables touch APIs if a touch-capable input device is detected on start-up.
So we’re all good, right?
An API for an API
The browser is still quite a long way from the device itself. It only has access to the devices via the operating system, which has its own APIs for letting the browser know what devices are connected.
While these APIs appear to be fairly reliable for the most part, we recently came across cases where they’d give incorrect results in Chrome on Windows 8… they were reporting the presence of a touchscreen (“digitizer”) when no touchscreen was connected.
Firefox also does some kind of similar switching and it appears to fail in the same cases as Chrome, so it looks like it might use the same cues – although I can’t profess to know for sure.
It appears certain settings and services can mess with the results these APIs give. I’ve only seen this in Windows 8 so far, but theoretically it could happen on any operating system.
Some versions of BlackBerry OS have also been known to leave the touch APIs permanently enabled on non-touch devices.
So it looks like the browser doesn’t know with 100% confidence either. If the browser doesn’t know, how can our app know?
Drawing a blank
Assuming the presence of one of these touch APIs did mean the device had a touchscreen… does that mean that if such a touch API isn’t present then there definitely isn’t a touchscreen?
Of course not. The original iPhone (released in 2007) was the first device to support Touch Events, but touchscreens have been around in one form or another since the 1970s. Even recently, Nokia’s Symbian browser didn’t support touch events until version 8.2 was released last year.
IE 10 offers the (arguably superior) Pointer Events API on touch devices instead of the Touch Events spec, so a Touch Events detect would draw a blank there.
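For reference, the IE 10 counterpart checks the Pointer Events side instead. A sketch (the function name is mine, with nav standing in for navigator):

```javascript
// IE 10's flavour of the same naive idea: navigator.msMaxTouchPoints
// reports how many simultaneous touch contacts the hardware claims to
// support; zero (or absent) gets read as "no touchscreen".
function naivePointerDetect(nav) {
  return (nav.msMaxTouchPoints || 0) > 0;
}
```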
Neither Safari nor Opera has implemented either touch API in their desktop browsers yet, so they’ll draw a blank on touch devices too.
Without dedicated touch APIs, browsers just emulate mouse events… so there are loads of devices kicking about with touchscreens which you simply can’t detect using this kind of detection.
You’re doing it wrong
In my opinion, if you’re trying to “detect a touchscreen” in the first place, you’re probably making some dangerous assumptions.
So what should I do?
For layouts, assume everyone has a touchscreen. Mouse users can use large UI controls much more easily than touch users can use small ones. The same goes for hover states.
For events and interactions, assume anyone may have a touchscreen. Implement keyboard, mouse and touch interactions alongside each other, ensuring none block each other.
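A sketch of what “alongside each other, ensuring none block each other” can look like in code (the function is my own illustration): a tap fires touch events followed by synthetic mouse events, so the wiring de-duplicates rather than letting one input type disable the other.

```javascript
// Bind touch and mouse side by side. The `touched` flag swallows the
// synthetic mousedown that follows a tap, so a single tap doesn't
// activate twice; a plain mouse press still works. Keyboard users are
// served by the element's normal `click` activation.
function bindActivate(el, activate) {
  let touched = false;
  el.addEventListener('touchstart', (e) => {
    touched = true;
    activate(e);
  });
  el.addEventListener('mousedown', (e) => {
    if (!touched) activate(e); // skip the synthetic mouse event
    touched = false;
  });
}
```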
Most of this post is really useful (and also hilarious), but these stand out as possibly super helpful:
So, this is what I wanted to happen:
- Start downloading the data as soon as possible (without blocking the HTML).
- When both the script and the data are downloaded, and the HTML has been parsed, run the script, which will breathe life into the page.
Browser experts, get your magnifying glasses out…
- In the <head> I have a <link rel="preload" … > for both the JSON and the JS (I have prefetch as well for browsers that don’t support preload yet).
- At the end of the body I load the application JS in a regular <script> tag.
- When the JS executes, it does a fetch() for the JSON file, and .then() kicks off the rendering of the React app.
That looks something like this.
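The markup and script described above presumably look something like this (the file names, the root element, and the renderApp call are placeholders of mine, not the author’s actual code):

```html
<head>
  <!-- start both downloads early, without blocking HTML parsing -->
  <link rel="preload" href="/app.js" as="script">
  <link rel="preload" href="/data.json" as="fetch" crossorigin>
  <!-- prefetch fallback for browsers that don't support preload yet -->
  <link rel="prefetch" href="/app.js">
  <link rel="prefetch" href="/data.json">
</head>
<body>
  <div id="root"></div>
  <!-- the regular script at the end of the body; inside it, something like:
       fetch('/data.json')
         .then(function (res) { return res.json(); })
         .then(renderApp); -->
  <script src="/app.js"></script>
</body>
```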
Once any asset is on the way down, further calls to download it won’t go out to the network. The network calls look like this:
So unless I’m wrong, I see no reason not to preload everything at the top of the page.