My First MP3

Somebody shared with me a tweet about the tragedy of being a Gen X’er and having to buy all your music again and again as formats evolve. Somebody else shared with me Kyla La Grange’s cover of a particular song. Together, these reminded me that I’ve never told you the story of my first MP3.1

Screenshot of tweet by @bewgtweets posted Oct 17, 2021, reading: If you want to know why Gen X’ers are always mad it’s because we had to replace our record collections with a tape collection that was then replaced with a cd collection that was then replaced with MP3’s and damn it how many time must I pay to listen to grunge
I didn’t/don’t own much vinyl – perhaps mostly because I had a tape deck in my bedroom years before a record player – but I’ve felt this pain. And don’t get me started on the videogames I’ve paid for multiple times.

In the Summer of 1995 I bought the CD single of the (still excellent!) Set You Free by N-Trance.2 I’d heard about this new-fangled “MP3” audio format, so soon afterwards I decided to rip a copy of the song to my PC.

I was using a 66MHz 486SX CPU, and without an embedded FPU I didn’t quite have the spare processing power to rip-and-encode in a single pass.3 So instead I first ripped to an uncompressed PCM .wav file and then performed the encoding: the former step was done almost in real-time (I listened to the track as it ripped!), about 7 minutes. The latter step took about 20 minutes.

So… about half an hour in total, to rip a single song.

Dan, as a teenager, sits at a desk with his hand to his chin. In the foreground, a beige two-button wired ball-type computer mouse rests on the corner of the desk. Dan is wearing a black t-shirt with a red devil face printed onto it.
Progress bar, you say? I’ll just sit here and wait then, I guess. Actual contemporary-ish photo.

Creating what would now be considered an appalling 32kHz mono-channel file meant that I briefly stored both a 27MB wave file and the final ~4MB MP3 file. 31MB might not sound huge, but I only had a total of 145MB of hard drive space at the time, so 31MB consumed over a fifth of my entire fixed storage! Even after deleting the intermediary wave file I was left with a single song consuming around 3% of my space, which is mind-boggling to think about in hindsight.
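
For the curious, the numbers check out. Here’s a quick back-of-the-envelope verification in Python (a sketch of my own; the 16-bit sample width is an assumption, but it was the typical choice and it matches the file sizes):

# Sanity-check of the file sizes quoted above, assuming 16-bit PCM
# samples and the roughly-7-minute track length implied by the rip time.
sample_rate = 32_000          # Hz; mono, so a single channel
bytes_per_sample = 2          # 16-bit samples
track_seconds = 7 * 60

wav_bytes = sample_rate * bytes_per_sample * track_seconds
print(wav_bytes / 1_000_000)  # ~26.9: the ~27MB intermediate .wav
print((27 + 4) / 145)         # ~0.21: .wav + MP3 together, over a fifth of the disk
print(4 / 145)                # ~0.03: the finished MP3 alone, around 3%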

But it felt like magic. I called my friend Gary to tell him about it. “This is going to be massive!” I said. At the time, I meant for techy people: I could imagine a future in which, with more hard drive space, I’d keep all my music this way… or else bundle entire artists onto writable CDs in this new format, making albums obsolete. I never considered that over the coming decade or so the format would enter the public consciousness, let alone that it’d take off like it did.

A young man in jeans and a blue coat stands on the patio in the back garden of a terraced house, dropping a half-brick onto the floor. In the background, an unused rabbit hutch and a dustbin can be seen. The photo is clearly taken using a flash, at night.
If you’re thinking of Gary and me as the kind of reprobates who helped bring on the golden age of music piracy… I’d like to distract you with a bigger show of yobbish behaviour in the form of this photo from the day we played at dropping half-bricks onto starter pistol ammunition.

The MP3 file I produced had a fault. Most of the way through the encoding process, I got bored and ran another program, and this must’ve interfered with the stream because there was an audible “blip” noise about 30 seconds from the end of the track. You’d have to be listening carefully to hear it, or else know what you were looking for, but it was there. I didn’t want to go through the whole process again, so I left it.

But that artefact uniquely identified that copy of what was, in the end, a popular song to have in your digital music collection. As the years went by and I traded MP3 files in bulk at LAN parties or on CD-Rs or, on at least one occasion, on an Iomega Zip disk (remember those?), I’d occasionally see N-Trance - (Only Love Can) Set You Free.mp34 being passed around and play it, to see if it was “my” copy.

Sometimes the ID3 tags had been changed because, for example, the previous owner had decided it deserved to be considered Genre: Dance instead of Genre: Trance5. But I could still identify that file because of the audio fingerprint, distinct to the first MP3 I ever created.

I still had that file when I went to university (where it occupied a smaller proportion of my hard drive space) and hearing that distinctive “blip” would remind me about the ordeal that was involved in its creation. I don’t have it any more, but perhaps somebody else still does.

Footnotes

1 I might never have told this story on my blog, but eagle-eyed readers may remember that I’ve certainly hinted at it before now.

2 Rewatching that music video, I’m struck by a recollection of how crazy popular crossfades were on 1990s dance music videos. More than just a transition, I’m pretty sure that most of the frames of that video are mid-crossfade: it feels like I’m watching Kelly Llorenna hanging out of a sunroof but I accidentally left one of my eyeballs in a smoky nightclub and can still see out of it as well.

3 I initially tried to convert directly from Red Book format to an MP3 file, but the encoding process was too slow and the CD drive’s buffer filled up and didn’t get drained by the processor, which was still presumably bogged down with framing or Fourier-transforming earlier parts of the track. The CD drive reasonably assumed that it wasn’t actually being used and spun down the drive motor, and this caused it to lose its place in the track, killing the whole process and leaving me with about a 40-second recording.

4 Yes, that filename isn’t quite the correct title. I was wrong.

5 No, it’s clearly trance. They were wrong.


Can I use HTTP Basic Auth in URLs?

Web standards sometimes disappear

Sometimes a web standard disappears quickly at the whim of some company, perhaps to a great deal of complaint (and at least one joke).

But sometimes, they disappear slowly, like this kind of web address:

http://username:password@example.com/somewhere

If you’ve not seen a URL like that before, that’s fine, because the answer to the question “Can I still use HTTP Basic Auth in URLs?” is, I’m afraid: no, you probably can’t.

But by way of a history lesson, let’s go back and look at what these URLs were, why they died out, and how web browsers handle them today. Thanks to Ruth who asked the original question that inspired this post.

Basic authentication

The early Web wasn’t built for authentication. A resource on the Web was theoretically accessible to all of humankind: if you didn’t want it in the public eye, you didn’t put it on the Web! A reliable method wouldn’t become available until the concept of state was provided by Netscape’s invention of HTTP cookies in 1994, and even that wouldn’t see widespread use for several years, not least because implementing a CGI (or similar) program to perform authentication was a complex and computationally-expensive option for all but the biggest websites.

Comic showing a conversation between a web browser and server. Browser: "Show me that page. (GET /)" Server: "No, take a ticket and fill this form. (Redirect, Set-Cookie)" Browser: "I've filled your form and here's your ticket (POST request with Cookie set)" Server: "Okay, Keep hold of your ticket. (Redirect, Set-Cookie)" Browser: "Show me that page, here's my ticket. (GET /, Cookie:)"
A simplified view of the form-and-cookie based authentication system used by virtually every website today, but which was too computationally-expensive for many sites in the 1990s.

1996’s HTTP/1.0 specification tried to simplify things, though, with the introduction of the WWW-Authenticate header. The idea was that when a browser tried to access something that required authentication, the server would send a 401 Unauthorized response along with a WWW-Authenticate header explaining how the browser could authenticate itself. Then, the browser would send a fresh request, this time with an Authorization: header attached providing the required credentials. Initially, only “basic authentication” was available, which basically involved sending a username and password in-the-clear unless SSL (HTTPS) was in use, but later, digest authentication and a host of others would appear.

Comic showing conversation between web browser and server. Browser: "Show me that page (GET /)" Server: "No. Send me credentials. (HTTP 401, WWW-Authenticate)" Browser: "Show me that page. I enclose credentials (Authorization)" Server: "Okay (HTTP 200)"
For all its faults, HTTP Basic Authentication (and its near cousins) is certainly elegant.
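
The “basic” scheme really was that basic: the credentials are merely base64-encoded, not encrypted. Here’s a minimal sketch in Python (my own illustration, not anything from the specification) of how a client builds the header:

# Constructing an Authorization: header for HTTP Basic Authentication.
# base64 is an encoding, not encryption: anybody who can read the header
# can recover the password, hence the need for SSL/HTTPS.
import base64

def basic_auth_header(username: str, password: str) -> str:
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("alpha", "beta"))  # Basic YWxwaGE6YmV0YQ==

That output, incidentally, is the same string you’ll see in the curl experiments later in this post.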

Webserver software quickly added support for this new feature and as a result web authors who lacked the technical know-how (or permission from the server administrator) to implement more-sophisticated authentication systems could quickly implement HTTP Basic Authentication, often simply by adding a .htaccess file to the relevant directory. .htaccess files would later go on to serve many other purposes, but their original and perhaps best-known purpose – and the one that gives them their name – was access control.
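
Such a .htaccess file could be tiny. Something like this sketch of the classic Apache directives would do (the realm name and path here are made up for illustration):

AuthType Basic
AuthName "Members Only"
AuthUserFile /home/example/.htpasswd
Require valid-user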

Credentials in the URL

A separate specification, not specific to the Web (but one of Tim Berners-Lee’s most important contributions to it), described the general structure of URLs as follows:

<scheme>://<username>:<password>@<host>:<port>/<url-path>#<fragment>

At the time that specification was written, the Web didn’t have a mechanism for passing usernames and passwords: this general case was intended only to apply to protocols that did have these credentials. An example is given in the specification, and clarified with “An optional user name. Some schemes (e.g., ftp) allow the specification of a user name.”

But once web browsers had WWW-Authenticate, virtually all of them added support for including the username and password in the web address too. This allowed for e.g. hyperlinks with credentials embedded in them, which made for very convenient bookmarks, or partial credentials (e.g. just the username) to be included in a link, with the user being prompted for the password on arrival at the destination. So far, so good.

Comic showing conversation between web browser and server. Browser asks for a page, providing an Authorization: header outright; server responds with the page immediately.
Encoding authentication into the URL provided an incredible shortcut at a time when Web round-trip times were much longer owing to higher latencies and no keep-alives.

This is why we can’t have nice things

The technique fell out of favour as soon as it started being used for nefarious purposes. It didn’t take long for scammers to realise that they could create links like this:

https://YourBank.com@HackersSite.com/

Everything we were teaching users about checking for “https://” followed by the domain name of their bank… was undermined by this user interface choice. The poor victim would actually be connecting to e.g. HackersSite.com, but a quick glance at their address bar would leave them convinced that they were talking to YourBank.com!

Theoretically: widespread adoption of EV certificates coupled with sensible user interface choices (that were never made) could have solved this problem, but a far simpler solution was just to not show usernames in the address bar. Web developers were by now far more excited about forms and cookies for authentication anyway, so browsers started curtailing the “credentials in addresses” feature.

Internet Explorer window showing https://YourBank.com@786590867/ in the address bar.
Users trained to look for “https://” followed by the site they wanted would often fall for scams like this one: the real domain name is after the @-sign. (This attacker is also using dword notation to obfuscate their IP address; this dated technique wasn’t often employed alongside this kind of scam, but it’s another historical oddity I enjoy so I’m shoehorning it in.)

(There are other reasons this particular implementation of HTTP Basic Authentication was less-than-ideal, but this reason is the big one that explains why things had to change.)

One by one, browsers made the change. But here’s the interesting bit: the browsers didn’t always make the change in the same way.

How different browsers handle basic authentication in URLs

Let’s examine some popular browsers. To run these tests I threw together a tiny web application that outputs the Authorization: header passed to it, if present, and can optionally send a 401 Unauthorized response along with a WWW-Authenticate: Basic realm="Test Site" header in order to trigger basic authentication. Why both? So that I can test not only how browsers handle URLs containing credentials when an authentication request is received, but how they handle them when one is not. This is relevant because some addresses – often API endpoints – have optional HTTP authentication, and it’s sometimes important for a user agent (albeit typically a library or command-line one) to pass credentials without first being prompted.
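
My implementation isn’t shown here, but such an application needs only a few lines; here’s a hypothetical reconstruction using Python’s standard library:

# A sketch of the kind of test app described above: /optional echoes
# whatever Authorization: header it receives; /mandatory responds with
# a 401 challenge first if no credentials were supplied.
from http.server import BaseHTTPRequestHandler, HTTPServer

class AuthEchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        auth = self.headers.get("Authorization", "")
        if self.path.startswith("/mandatory") and not auth:
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="Test Site"')
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(f"Header: {auth}\n".encode())

HTTPServer(("", 8080), AuthEchoHandler).serve_forever()  # port 80 needs root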

In each case, I tried each of the following tests in a fresh browser instance:

  1. Go to http://<username>:<password>@<domain>/optional (authentication is optional).
  2. Go to http://<username>:<password>@<domain>/mandatory (authentication is mandatory).
  3. Experiment 1, then follow relative hyperlinks (which should correctly retain the credentials) to /mandatory.
  4. Experiment 2, then follow relative hyperlinks to /optional.

I’m only testing over the http scheme, because I’ve no reason to believe that any of the browsers under test treat the https scheme differently.

Chromium desktop family

Chrome at an "Auth Optional" page, showing no header sent.
Chrome 93 and Edge 93 both immediately suppressed the username and password from the address bar, along with the “http://” as we’ve come to expect of them. Like the “http://”, though, the plaintext username and password are still there. You can retrieve them by copy-pasting the entire address.

Opera 78 similarly suppressed the username, password, and scheme, but didn’t retain the username and password in a way that could be copy-pasted out.

Authentication was passed only when landing on a “mandatory” page; never when landing on an “optional” page. Refreshing the page or re-entering the address with its credentials did not change this.

Navigating from the “optional” page to the “mandatory” page using only relative links retained the username and password and submitted them to the server when they became mandatory – even in Opera, which didn’t initially appear to retain the credentials at all.

Navigating from the “mandatory” to the “optional” page using only relative links, or even entering the “optional” page address with credentials after visiting the “mandatory” page, does not result in authentication being passed to the “optional” page. However, it’s interesting to note that once authentication has occurred on a mandatory page, pressing enter at the end of the address bar on the optional page, with credentials in the address bar (whether visible or hidden from the user) does result in the credentials being passed to the optional page! They continue to be passed on each subsequent load of the “optional” page until the browsing session is ended.

Firefox desktop

Firefox window with popup reading "You are about to log in to the site 192.168.0.11 with the username alpha, but the web site does not require authentication. This may be an attempt to trick you."
Firefox 91 does a clever thing very much in-line with its image as a browser that puts decision-making authority into the hands of its user. When going to the “optional” page first it presents a dialog, warning the user that they’re going to a site that does not specifically request a username, but they’re providing one anyway. If the user says no, navigation ceases (the GET request for the page takes place the same either way; this happens before the dialog appears). Strangely: regardless of whether the user selects yes or no, the credentials are not passed on the “optional” page. The credentials (although not the “http://”) appear in the address bar while the user makes their decision.

Similar to Opera, the credentials do not appear in the address bar thereafter, but they’re clearly still being stored: if the refresh button is pressed the dialog appears again. It does not appear if the user selects the address bar and presses enter.

Firefox dialog reading "You are about to log in to the site 192.168.0.11 with the username alpha".
Similarly, going to the “mandatory” page in Firefox results in an informative dialog warning the user that credentials are being passed. I like this approach: not only does it help protect the user from the use of authentication as a tracking technique (an old technique that I’ve not seen used in well over a decade, mind), it also helps the user be sure that they’re logging in using the account they mean to, when following a link for that purpose. Again, clicking cancel stops navigation, although the initial request (with no credentials) and the 401 response have already occurred.

Visiting any page within the scope of the realm of the authentication after visiting the “mandatory” page results in credentials being sent, whether or not they’re included in the address. This is probably the most-true implementation to the expectations of the standard that I’ve found in a modern graphical browser.

Safari desktop

Safari showing a dialog "Log in" / "Your password will be sent unencrypted."
Safari 14 never displays or uses credentials provided via the web address, whether or not authentication is mandatory. Mandatory authentication is always met by a pop-up dialog, even if credentials were provided in the address bar. Boo!

Once passed, credentials are later provided automatically to other addresses within the same realm (i.e. optional pages).

Older browsers

Let’s try some older browsers.

Internet Explorer 8 showing the error message "Windows cannot find http://alpha:beta@10.0.2.2/optional. Check the spelling and try again."
From version 7 onwards – right up to the final version 11 – Internet Explorer fails to even recognise addresses with authentication credentials in them as legitimate web addresses, regardless of whether or not authentication is requested by the server. It’s easy to assume that this is yet another missing feature in the browser we all love to hate, but it’s interesting to note that credentials-in-addresses is permitted for ftp:// URLs…

Internet Explorer 5 showing credentials in the address bar being passed to the server.
…and if you go back a little way, Internet Explorer 6 and below supported credentials in the address bar pretty much as you’d expect based on the standard. The error message seen in IE7 and above is a deliberate design decision, albeit a somewhat knee-jerk reaction to the security issues posed by the feature (compare to the more-careful approach of other browsers).

These older versions of IE even (correctly) retain the credentials through relative hyperlinks, allowing them to be passed when they become mandatory. They’re not passed on optional pages unless a mandatory page within the same realm has already been encountered.

Netscape Communicator 4.7 showing credentials in a URL, passed to a server.
Pre-Mozilla Netscape behaved the same way. Truly this was the de facto standard for a long period on the Web, and the varied approaches we see today are the anomaly. That’s a strange observation to make, considering how much the Web of the 1990s was dominated by incompatible implementations of different Web features (I’ve written about the <blink> and <marquee> tags before, which were perhaps the most-visible division between the Microsoft and Netscape camps, but there were many, many more).

Screenshot showing Netscape 7.2, with a popup saying "You are about to log in to site 192.168.0.11 with the username alpha, but the website does not require authentication. This may be an attempt to trick you." The username and password are visible in the address bar.
Interestingly: by Netscape 7.2 the browser’s behaviour had evolved to be the same as modern Firefox’s, except that it still displayed the credentials in the address bar for all to see.

Screenshot of Opera 5 showing credentials in a web address with the password masked, being passed to the server on an optional page.
Now here’s a real gem: pre-Chromium Opera. It would send credentials to “mandatory” pages and remember them for the duration of the browsing session, which is great. But it would also send credentials when passed in a web address to “optional” pages. However, it wouldn’t remember them on optional pages unless they remained in the address bar: this feels to me like an optimum balance of features for power users. Plus, it’s one of very few browsers that permitted you to change credentials mid-session: just by changing them in the address bar! Most other browsers, even to this day, ignore changes to HTTP Authentication credentials, which was sometimes a source of frustration back in the day.

Finally, classic Opera was the only browser I’ve seen to mask the password in the address bar, turning it into a series of asterisks. This ensures the user knows that a password was used, but does not leak any sensitive information to shoulder-surfers (the “masked” password was always the same length, too, so it didn’t even leak the length of the password). Altogether a spectacular design and a great example of why classic Opera was way ahead of its time.

The Command-Line

Most people using web addresses with credentials embedded within them nowadays are probably working with code, APIs, or the command line, so it’s unsurprising to see that this is where the most “traditional” standards-compliance is found.

I was unsurprised to discover that giving curl a username and password in the URL meant that username and password was sent to the server (using Basic authentication, of course, if no authentication was requested):

$ curl http://alpha:beta@localhost/optional
Header: Basic YWxwaGE6YmV0YQ==
$ curl http://alpha:beta@localhost/mandatory
Header: Basic YWxwaGE6YmV0YQ==

However, wget did catch me out. Hitting the same addresses with wget didn’t result in the credentials being sent except where it was mandatory (i.e. where an HTTP 401 response and a WWW-Authenticate: header were received on the initial attempt). To force wget to send credentials when they haven’t been asked-for requires the use of the --http-user and --http-password switches:

$ wget http://alpha:beta@localhost/optional -qO-
Header:
$ wget http://alpha:beta@localhost/mandatory -qO-
Header: Basic YWxwaGE6YmV0YQ==

lynx does a cute and clever thing. Like most modern browsers, it does not submit credentials unless specifically requested, but if they’re in the address bar when they become mandatory (e.g. because of following relative hyperlinks or hyperlinks containing credentials) it prompts for the username and password, but pre-fills the form with the details from the URL. Nice.

Lynx browser following a link from an optional-authentication to a mandatory-authentication page. The browser prompts for a username but it's pre-filled with the one provided by the URL.

What’s the status of HTTP (Basic) Authentication?

HTTP Basic Authentication and its close cousin Digest Authentication (which overcomes some of the security limitations of running Basic Authentication over an unencrypted connection) are very much alive, but their use in hyperlinks can’t be relied upon: some browsers (e.g. IE, Safari) completely munge such links while others don’t behave as you might expect. Other mechanisms like Bearer see widespread use in APIs, but nowhere else.

The WWW-Authenticate: and Authorization: headers are, in some ways, an example of the best possible way to implement authentication on the Web: as an underlying standard independent of support for forms (and, increasingly, JavaScript), cookies, and complex multi-part conversations. It’s easy to imagine an alternative timeline where these standards continued to be collaboratively developed and maintained and their shortfalls – e.g. not being able to easily log out when using most graphical browsers! – were overcome. A timeline in which one might write a login form like this, knowing that e.g. the “authenticate” attributes would instruct the browser to send credentials using an Authorization: header:

<form method="get" action="/" authenticate="Basic">
<label for="username">Username:</label> <input type="text" id="username" authenticate="username">
<label for="password">Password:</label> <input type="text" id="password" authenticate="password">
<input type="submit" value="Log In">
</form>

In such a world, more-complex authentication strategies (e.g. multi-factor authentication) could involve encoding forms as JSON. And single-sign-on systems would simply involve the browser collecting a token from the authentication provider and passing it on to the third-party service, directly through browser headers, with no need for backwards-and-forwards redirects with stacks of information in GET parameters as is the case today. Client-side certificates – long a powerful but neglected authentication mechanism in their own right – could act as first class citizens directly alongside such a system, providing transparent second-factor authentication wherever it was required. You wouldn’t have to accept a tracking cookie from a site in order to log in (or stay logged in), and if your browser-integrated password safe supported it you could log on and off from any site simply by toggling that account’s “switch”, without even visiting the site: all you’d be changing is whether or not your credentials would be sent when the time came.

The Web has long been on a constant push for the next new shiny thing, and that’s sometimes meant that established standards have been neglected prematurely or have failed to evolve for longer than we’d have liked. Consider how long it took us to get the <video> and <audio> elements because the “new shiny” Flash came to dominate, how the Web Payments API is only just beginning to mature despite over 25 years of ecommerce on the Web, or how we still can’t use Link: headers for all the things we can use <link> elements for despite them being semantically-equivalent!

The new model for Web features seems to be that they first arrive as a popular JavaScript implementation and then eventually evolve into a native browser feature: take HTML form validation, for example, which for the longest time could only be done client-side using scripting languages. I’d love to see somebody re-think HTTP Authentication in this way, but sadly we’ll never get a 100% solution in JavaScript alone (distributed SSO is almost certainly off the table, for example, owing to cross-domain limitations).

Or maybe it’s just a problem that’s waiting for somebody cleverer than I to come and solve it. Want to give it a go?


Get Lost on the Web

Get lost

I got lost on the Web this week, but it was harder than I’d have liked.

The Ypsilanti Water Tower, at the intersection of Washtenaw Avenue and Cross Street, Ypsilanti, Michigan. The tower is listed in the National Register of Historic Places, and is a National Historic Civil Engineering Landmark. An American flag and a Greek flag are flying, and a bust of the Greek general, Demetrios Ypsilantis (also commonly spelled "Demetrius Ypsilanti"), for whom the city is named, is in the foreground. Photo by Dwight Burdette, used under a Creative Commons license.
Now that’s a suggestive erection. Photo by Dwight Burdette.

There was a discussion this week in the Abnib WhatsApp group about whether a particular illustration of a farm was full of phallic imagery (it was). This left me wondering if anybody had ever tried to identify the most-priapic buildings in the world. Of course towers often look at least a little bit like their architects were compensating for something, but some – like the Ypsilanti Water Tower in Michigan pictured above – go further than others.

I quickly found the Wikipedia article for the Most Phallic Building Contest in 2003, so that was my jumping-off point. It’s easy enough to get lost on Wikipedia alone, but sometimes you feel the need for a primary source. I was delighted to discover that the web pages for the Most Phallic Building Contest are still online 18 years after the competition ended!

1969 shot tower at Tower Wharf, Bristol. Photo by Anthony O'Neil, used under a Creative Commons license.
The Cheese Lane Shot Tower in Bristol – politely described as a “Q-tip” shape – was built in 1969 to replace the world’s first shot tower elsewhere in the city. Photo by Anthony O’Neil.

Link rot is a serious problem on the Web, to such an extent that it’s pleasing when it isn’t present. The other year, for example, I revisited a post I wrote in 2004 and was pleased to find that a linked 2003 article by Nicholas ‘Aquarion’ Avenell is still alive at its original address! Contrast Jonathan Ames, the author/columnist/screenwriter who created the Most Phallic Building Contest: his pages stayed up until as late as 2011 before he eventually let his site and blog lapse and fall off the Internet. It takes effort to keep Web content alive, but it’s worth more effort than it’s sometimes given.

Anyway: a shot tower in Bristol – a part of the UK with a long history of leadworking – was among the latecomer entrants to the competition, and seeing this curious building reminded me about something I’d read, once, about the manufacture of lead shot. The idea (invented in Bristol by a plumber called William Watts) is that you pour molten lead through a sieve at the top of a tower, let surface tension pull it into spherical drops as it falls, and eventually catch it in a cold water bath to finish solidifying it. I’d seen an animation of the process, but I’d never seen a video of it, so I went about finding one.

Cross-section animation showing lead shot being poured into a sieve, separating into pellets, and falling into a water bath.
The animation I saw might have been this one, or perhaps one that wasn’t so obviously-made-in-MS-Paint.

British Pathé’s YouTube Channel provided me with this 1950 film, and if you follow only one hyperlink from this article, let it be this one! It’s a well-shot film (pun intended, but there’s a worse pun in the video!), and while I needed to translate all of the references to “hundredweights” and “Fahrenheit” to measurements that I can actually understand, it’s thoroughly informative.

But there’s a problem with that video: it’s been badly cut from whatever reel it was originally found on, and from about 1 minute and 38 seconds in it switches to what is clearly a very different film! A mother is seen shepherding her young daughter off to bed, and a voiceover says:

Bedtime has a habit of coming round regularly every night. But for all good parents responsibility doesn’t end there. It’s just the beginning of an evening vigil, ears attuned to cries and moans and things that go bump in the night. But there’s no reason why those ears shouldn’t be your neighbour’s ears, on occasion.

Black & white framegrab showing a woman following her child, wearing pyjamas, towards a staircase up.
“Off to bed, you little monster. And no watching TikTok when you should be trying to sleep!”

Now my interest’s piqued. What was this short film going to be about, and where could I find it? There’s no obvious link; YouTube doesn’t even make it easy to find the video uploaded “next” by a given channel. I manipulated some search filters on British Pathé’s site until I eventually hit upon the right combination of magic words and found a clip called Radio Baby Sitter. It starts off exactly where the misplaced prior clip cut out, and tells the story of “Mr. and Mrs. David Hurst, Green Lane, Coventry”, who put a microphone by their daughter’s bed and ran a wire through the wall to their neighbours’ radio’s speaker so they can babysit without coming over for the whole evening.

It’s a baby monitor, although not strictly a radio one as the title implies (it uses a signal wire!), nor is it groundbreakingly innovative: the first baby monitor predates it by over a decade, and it actually did use radio waves! Still, it’s a fun watch, complete with its contemporary fashion, technology, and social structures. Here’s the full thing, re-merged for your convenience:

Wait, what was I trying to do when I started, again? What was I even talking about…

It’s harder than it used to be

It used to be easier than this to get lost on the Web, and sometimes I miss that.

Obviously if you go back far enough this is true. Back when search engines were much weaker and Internet content was much less homogeneous and more distributed, we used to engage in this kind of meandering walk all the time: we called it “surfing” the Web. Second-generation Web browsers even had names, pretty often, evocative of this kind of experience: Mosaic, WebExplorer, Navigator, Internet Explorer, IBrowse. As people started to engage in the noble pursuit of creating content for the Web they cross-linked their sources, their friends, their affiliations (remember webrings? here’s a reminder; they’re not quite as dead as you think!), their favourite sites, and so on. You’d follow links to other pages, then follow their links to others still, and so on in that fashion. If you went round the circles enough times you’d start seeing all those invariably-blue hyperlinks turn purple and know you’d found your way home.

Screenshot showing Netscape Communicator running on Windows 98, showing Dan's vanity page circa 1999.
Some parts of the Web are perhaps best forgotten, though?

But even after that era, as search engines started to become a reliable and powerful way to navigate the wealth of content on the growing Web, links still dominated our exploration. Following a link from a resource that was linked to by somebody you know carried the weight of a “web of trust”, and you’d quickly come to learn whose links were consistently valuable and on what subjects. They also provided a sense of community and interconnectivity that paralleled the organic, chaotic networks of acquaintances people form out in the real world.

In recent times, that interpersonal connectivity has, for many, been filled by social networks (let’s ignore their failings in this regard for now). But linking to resources “outside” of the big social media silos is hard. These advertisement-funded services work hard to discourage or monetise activity that takes you off their platform, even at the expense of their users. Instagram limits the number of external links per profile; many social networks push for resharing of summaries of content or embedding content from other sources, discouraging engagement with the wider Web; Facebook and Twitter both run external links through a linkwrapper (which sometimes breaks); most large social networks make linking to the profiles of other users of the same social network much easier than to users anywhere else; and so on.

The net result is that Internet users use fewer different websites today than they did 20 years ago, and spend most of their “Web” time in app versions of websites (which often provide a better experience only because site owners strategically make it so to increase their lock-in and data harvesting potential). Truly exploring the Web now requires extra effort, like exercising an underused muscle. And if you begin and end your Web experience on just one to three services, that just feels kind of… sad, to me. Wasted potential.

A woman reading a map. Photo by Leah Kelly.
I suppose nowadays we don’t get lost as often outside of the Internet, either. Photo by Leah Kelly.

It sounds like I’m being nostalgic for a less-sophisticated time on the Web (that would certainly be in character!). A time before we’d fully-refined the technology that would come to connect us in an instant to the answers we wanted. But that’s not exactly what I’m pining for. Instead, what I miss is something we lost along the way, on that journey: a Web that was more fun-and-weird, more interpersonal, more diverse. More Geocities, less Facebook; there’s a surprising thing to find myself saying.

Somewhere along the way, we ended up with the Web we asked for, but it wasn’t the Web we wanted.


Ireland and the UK Aren’t In The Same Timezone!

This weekend, while investigating a bug in some code that generates iCalendar (ICS) feeds, I learned about a weird quirk in the Republic of Ireland’s timezone. It’s such a strange thing (and has so little impact on everyday life) that I imagine even most Irish people don’t know about it, but it’s important enough that it can easily introduce bugs into the way that computer calendars communicate:

Most of Europe put their clocks forward in Summer, but the Republic of Ireland instead put their clocks backward in Winter.

If that sounds to you like the same thing said two different ways – or the set-up to a joke! – read on:

Map showing timezones of Europe. The UK and Ireland are grouped (along with Iceland) in a zone labelled as being UTC+0.
The timezones of Europe look pretty simple compared to some parts of the world, but the illustration of the British Isles hides an interesting eccentricity.

A Brief History of Time (in Ireland)

Poster titled "Time (Ireland) Act 1916", advising that "On and after Sunday 1st October 1916 Western European Time will be observed throughout Ireland" and asking people to set their clocks and watches back 35 minutes.
Spring forward, fall back… just a little bit back, though. Not too much.

After high-speed (rail) travel made mean solar timekeeping problematic, Great Britain in 1880 standardised on Greenwich Mean Time (UTC+0) as the time throughout the island, and Ireland standardised on Dublin Mean Time (UTC-00:25:21). If you took a ferry from Liverpool to Dublin towards the end of the 19th century you’d have to put your watch back by about 25 minutes. With air travel not yet being a thing, countries didn’t yet feel the need to fixate on nice round offsets in the region of one hour (today, only a handful of regions retain UTC offsets of half or quarter hours).

That’s all fine in peacetime, but by the First World War and especially following the Easter Rising, the British government decided that it was getting too tricky for their telegraph operators (many of whom operated out of Ireland, which provided an important junction for transatlantic traffic) to be on a different time to London.

1885 GPO telegraph instrument from the Porthcurno Telegraph Museum, which Dan almost visited the other week but it was closed.
It’s widely believed that the world’s first “U UP? [STOP]” message never got a response as a direct result of Anglo-Irish timezone confusion.

So the Time (Ireland) Act 1916 was passed, putting Ireland on Greenwich Mean Time. Ireland put her clocks back by 35 minutes and synched-up with the rest of the British Isles. And from then on, everything was simple, and because nothing ever went wrong in Ireland as a result of the way it was governed by Britain, nobody ever had to think about the question of timezones on the island again.

Ah. Hmm.

December 1920 photograph showing St Patrick's Street, Cork, following the burning of the city by British forces.
“Those Irish people want to govern their own country, do they? After we so kindly shared our king with them? Right-ho: let’s set fire to their cities and see how they feel then.”

Following Irish independence, the keeping of time carried on in much the same way for a long while, which will doubtless have been convenient for families spread across the Northern Irish border. But then came the Second World War.

Summers in the 1940s saw Churchill introduce Double Summer Time which he believed would give the UK more daylight, saving energy that might otherwise be used for lighting and increasing production of war materiel.

Ireland considered using the emergency powers they’d put in place to do the same, as a fuel saving measure… but ultimately didn’t. This was possibly because aligning her time with Britain might be seen as undermining her neutrality, but was more likely because the government saw that such a measure wouldn’t actually have much impact on fuel use (it certainly didn’t in Britain). Whatever the reason, though, Ireland and Northern Ireland were again out-of-sync with one another until the war ended.

Newspaper clipping advising that "Double Summer Time comes to an end on Saturday night, August 8-9, when all clocks and watches should be put back one hour, thus reverting to British Summer Time, which will probably be maintained throughout the winter."
I like to imagine that the development of powerful computers by the folks at Bletchley Park was a result of needing to keep track of timezones across the British Isles.

From 1968 to 1971 Britain experimented with “British Standard Time” – putting the clocks forward in Summer once, to UTC+1, and then leaving them there for three years. This worked pretty well except if you were Scottish, in which case you’d have found winter mornings to be even gloomier than you were used to, which was already pretty gloomy. Conveniently: during much of this period Ireland was also on UTC+1, but in her case it was part of a different experiment. Ireland was working on joining the European Economic Community, and aligning herself with “Paris time” year-round was an unnecessary concession but an interesting idea.

But here’s where the quirk appears: the Standard Time Act 1968, which made UTC+1 the “standard” timezone for the Republic of Ireland, was not repealed and is still in effect. Ireland could have started over in 1971 with a new rule that made UTC+0 the standard and added a “Summer Time” alternative during which the clocks are put forward… but instead the Standard Time (Amendment) Act 1971 left UTC+1 as Ireland’s standard timezone and added a “Winter Time” alternative during which the clocks are put back.

Two clocks, both showing the same time. One has a sign reading "LONDON", the other "DUBLIN, I GUESS?"
It all seems so simple until you actually think about it.

(For a deeper look at the legal history of time in the UK and Ireland, see this timeline. Certainly don’t get all your history lessons from me.)

So what?

You might rightly be thinking: so what! Having a standard time of UTC+0 and going forward for the Summer (like the UK) is functionally-equivalent to having a standard time of UTC+1 and going backwards in the Winter, like Ireland, right? It’s certainly true that, at any given moment, a clock in London and a clock in Dublin should show the same time. So why would anybody care?
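
You can see both the equivalence and the quirk hiding beneath it in a few lines of Python; this sketch uses the standard zoneinfo module, which reads the IANA tz database:

# London and Dublin report identical UTC offsets at every instant, but
# the tz database models Dublin the "legal" way around: standard time is
# IST (UTC+1), with a *negative* DST of -1 hour in winter. Depending on
# how your copy of the tz data was compiled, dst() in January may
# therefore be -1:00 for Dublin while it's 0:00 for London.
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

for month in (1, 7):  # January (winter), then July (summer)
    when = datetime(2021, month, 1, 12, 0)
    for zone in ("Europe/London", "Europe/Dublin"):
        t = when.replace(tzinfo=ZoneInfo(zone))
        print(zone, t.utcoffset(), t.dst())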

Perl Data::ICal::TimeZone implementation of Dublin timezone, incorrectly showing summer DST at +1 rather than winter DST of -1.
This code for Europe/Dublin, from the Perl module Data::ICal::TimeZone, is technically-incorrect because it states that the winter time is the standard and daylight savings of +1 hour apply in the summer, rather than the opposite.

But declaring which is “standard” is important when you’re dealing with computers. If, for example, you run a volunteer rota management system that supports a helpline charity that has branches in both the UK and Ireland, then it might really matter that the computer systems involved know what each other mean when they talk about specific times.

The author of an iCalendar file can choose to embed timezone information to explain what, in that file, a particular timezone means. That timezone information might say, for example, “When I say ‘Europe/Dublin’, I mean UTC+1, or UTC+0 in the winter.” Or it might say – like the code above! – “When I say ‘Europe/Dublin’, I mean UTC+0, or UTC+1 in the summer.” Both of these declarations would be technically-valid and could be made to work, although only the first one would be strictly correct in accordance with the law.
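
For illustration, the first (legally-faithful) declaration might be rendered something like this in an iCalendar file. It’s a sketch, with the recurrence rules simplified to the current EU transition dates; note how the winter observance ends up in the “DAYLIGHT” component:

BEGIN:VTIMEZONE
TZID:Europe/Dublin
BEGIN:STANDARD
TZNAME:IST
DTSTART:19810329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
END:STANDARD
BEGIN:DAYLIGHT
TZNAME:GMT
DTSTART:19811025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
END:DAYLIGHT
END:VTIMEZONE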

Stressed programmer hunched over a MacBook. Photo by Anna Shvets from Pexels.
Clients who need solid timezone support represent 50% of a programmer’s production of stress hormones. See also Falsehoods Programmers Believe About Time.

But if you don’t include timezone information in your iCalendar file, you’re relying on the feed subscriber’s computer (e.g. their calendar software) to make a sensible interpretation. And that’s where you run into trouble. Because in cases like Ireland, for which the standard is one thing but is commonly-understood to be something different, there’s a real risk that the way your system interprets and encodes time won’t necessarily be the same as the way somebody else’s does.

If I say I’ll meet you at 12:00 on 1 January, in Ireland, you rightly need to know whether I’m talking about 12:00 in Irish “standard” time (i.e. 11:00, because daylight savings are in effect) or 12:00 in local-time-at-the-time-of-the-meeting (i.e. 12:00). Humans usually mean the latter because we think in terms of local time, but when your international computer system needs to make sure that people are on a shift at the same time, but in different timezones, it needs to be very clear what exactly it means!

And when your daylight savings works “backwards” compared to everybody else’s… that’s sure to make a developer somewhere cry. And, possibly, blog about your weird legislation.


The Ballad of John Crawford

Following the success of our last game of Dialect the previous month and once again in a one-week hiatus of our usual Friday Dungeons & Dragons game, I hosted a second remote game of this strange “soft” RPG with linguistics and improv drama elements.

Thieves’ Cant

Our backdrop to this story was Portsmouth in 1834, where we were part of a group – the Gunwharf Ants – who worked as stevedores and made our living (on top of the abysmal wages for manual handling) through the criminal pursuit of “skimming a little off the top” of the bulk-break cargo we moved between ships and onto and off the canal. These stolen goods would be hidden in the basement of nearby pub The Duke of Wellington until they could be safely fenced, and this often-lucrative enterprise made us the envy of many of the docklands’ other criminal gangs.

I played Katie – “Kegs” to her friends – the proprietor of the Duke (since her husband’s death) and matriarch of the group. I was joined by Nuek (Alec), a Scandinavian friend with a wealth of criminal experience, John “Tuck” Crawford (Matt), adoptee of the gang and our aspiring quartermaster, and “Yellow” Mathias Hammond (Simon), a navy deserter who consistently delivers better than he expects to.

Thieves' Cant tableau at the end of a game of Dialect, with cards strewn around the table.
Our second tableau was somehow more-chaotic than the first, even after I accidentally removed several cards before taking this picture!

While each of us had our stories and some beautiful and hilarious moments, I felt that we all quickly converged on the idea that the principal storyline in our isolation was that of young Tuck. The first act was dominated by his efforts to proof himself to the gang, and – with a little snuff – shake off his reputation as the “kid” of the group and gain acceptance amongst his peers. His chance to prove himself with a caper aboard the Queen Anne went proper merry though after she turned up tin-ful and he found himself kept in a second-place position for years longer. Tuck – and Yellow – got proofed eventually, but the extra time spent living hand-to-mouth might have been what first planted the seed of charity in the young man’s head, and kept most of his numbers out of his pocket and into those of the families he supported in the St. Stevens area.

The second act turned political, as Spiky Dave, leader of the competing gang The Barbados Boys, based over Gosport way, offered a truce between the two rivals in exchange for sharing the manpower – and profits – of a big job against a ship from South Africa… with a case of diamonds aboard. Disagreements over the deal undermined Kegs’ authority over the Ants, but despite their March it went ahead anyway and the job was a success. Except… Spiky Dave kept more than his share of the loot, and agreed to share what was promised only in exchange for the surrender of the Ants and their territory to his gang’s rulership.

We returned to interpersonal drama in the third act as Katie – tired of the gang wars and feeling her age – took perhaps more than her fair share of the barrel (the gang’s shared social care fund) and bought herself clearance to leave aboard a ship to a beachside retirement in Jamaica. She gave up her stake in the future of the gang and shrugged off their challenges in exchange for a quiet life, leaving Nuek as the senior remaining leader of the group… but Tuck the owner of the Duke of Wellington. The gang split into those that integrated with their rivals and those that went their separate ways… and their curious pidgin dissolved with them. Well, except for a few terms which hung on in dockside gang chatter, screeched amongst the gulls of Portsmouth without knowing their significance, for years to come.

Crop from Fine View of 1798 The Gunwharf Portsmouth Dockyard by E G Burrows

Playing Out

Despite being fundamentally the same game and a similar setting to when we played The Outpost the previous month, this game felt very different. Dialect is versatile enough that it can be used to write… adventures, coming-of-age tales, rags-to-riches stories, comedies, horror, romance… and unless the tone is explicitly set out at the start then it’ll (hopefully) settle somewhere mutually-acceptable to all of the players. But with a new game, new setting, and new players, it’s inevitable that a different kind of story will be told.

But more than that, the backdrop itself impacted on the tale we wove. On Mars, we were physically isolated from the rest of humankind and living in an environment in which the necessities of a new lifestyle and society necessitated new language. But the isolation of criminal gangs in Portsmouth docklands in the late Georgian era is a very different kind: it’s a partial isolation, imposed (where it is) by its members and to a lesser extent by the society around them. Which meant that while their language was still a defining aspect of their isolation, it also felt more-artificial; deliberately so, because those who developed it did so specifically in order to communicate surreptitiously… and, we discovered, to encode their group’s identity into their pidgin.

Prison Hulks in Portsmouth Harbour by Ambrose-Louis Garneray

While our first game of Dialect felt like the language led the story, this second game felt more like the language and the story co-evolved but were mostly unrelated. That’s not necessarily a problem, and I think we all had fun, but it wasn’t what we expected. I’m glad this wasn’t our first experience of Dialect, because if it were I think it might have tainted our understanding of what the game can be.

As with The Outpost, we found that some of the concepts we came up with didn’t see much use: on Mars, the concept of fibs was rooted in a history of how our medical records were linked to one another (for e.g. transplant compatibility), but aside from our shared understanding of the background of the word this storyline didn’t really come up. Similarly, in Thieves’ Cant we developed a background about the (vegan!) roots of our gang’s ethics, but it barely got used as more than conversational flavour. In both cases I’ve wondered, after the fact, whether a “flashback” scene framed from one of our prompts might have helped solidify the concept. But I’m also not sure whether or not such a thing would be necessary. We seemed to collectively latch onto a story hook – this time around, centred around Matt’s character John Crawford’s life and our influences on it – and it played out fine.

And hey; nobody died before the epilogue, this time!

I’m looking forward to another game next time we’re on a D&D break, or perhaps some other time.


The Devil’s Quoits

I’ve been doing a course provided through work to try to improve my ability to connect with an audience over video. For one of my assignments in this, my fourth week, I picked a topic out from the “welcome” survey I filled out when I first started the course. The topic: the Devil’s Quoits. This stone circle – not far from my new house – has such a bizarre history of construction, demolition, and reconstruction… as well as a fun folk myth about its creation… that I’d thought it’d make a great follow-up to my previous “local history” piece, Oxford’s Long-Lost Zoo. I’d already hidden a “virtual” geocache at the henge, as I previously did for the zoo: a video seemed like the next logical step.

My brief required that the video be only about a minute long, which presented its own challenge in cutting down the story I’d like to tell to a bare minimum. Then on top of that, it took me at least eight takes until I was confident that I’d have one I was happy with, and there’s still things I’d do differently if I did it again (including a better windbreak on my lapel mic, and timing my takes for when geese weren’t honking their way past overhead!).

In any case: part of the ritual of this particular course encourages you to “make videos… as if people will see them”, and I’ve been taking that seriously! Firstly, I’ve been sharing many of my videos with others either at work or on my blog, like the one about how GPS works or the one about the secret of magic. Secondly, I’ve been doing “extra credit” by recording many of my daily-standup messages as videos, in addition to providing them through our usual Slack bot.

Anyway, the short of it is: you’re among the folks who get to see this one. Also available on YouTube.

<blink> and <marquee>

I was chatting with a fellow web developer recently and made a joke about the HTML <blink> and <marquee> tags, only to discover that he had no idea what I was talking about. They’re a part of web history that’s fallen off the radar and younger developers are unlikely to have ever come across them. But for a little while, back in the 90s, they were a big deal.

Macromedia Dreamweaver 3 code editor window showing a <h2> heading wrapped in <marquee> and <blink> tags, for emphasis.
Even Macromedia Dreamweaver, which embodied the essence of 1990s web design, seemed to treat wrapping <blink> in <marquee> as an antipattern.

Invention of the <blink> element is often credited to Lou Montulli, who wrote pioneering web browser Lynx before joining Netscape in 1994. He insists that he didn’t write any of the code that eventually became the first implementation of <blink>. Instead, he claims: while out at a bar (on the evening he’d first meet his wife!), he pointed out that many of the fancy new stylistic elements the other Netscape engineers were proposing wouldn’t work in Lynx, which is a text-only browser. The fanciest conceivable effect that would work across both browsers would be making the text flash on and off, he joked. Then another engineer – who he doesn’t identify – pulled a late night hack session and added it.

And so it was that when Netscape Navigator 2.0 was released in 1995 it added support for the <blink> tag. Also animated GIFs and the first inklings of JavaScript, which collectively would go on to define the “personal website” experience for years to come. Here’s how you’d use it:

<BLINK>This is my blinking text!</BLINK>

With no attributes, it was clear from the outset that this tag was supposed to be a joke. By the time HTML4 was published as a recommendation two years later, it was documented as being a joke. But the Web of the late 1990s saw it used a lot. If you wanted somebody to notice the “latest updates” section on your personal home page, you’d wrap a <blink> tag around the title (or, if you were a sadist, the entire block).

Cameron's World website, screenshot, showing GIFs and bright palette
If you missed this particular chapter of the Web’s history, you can simulate it at Cameron’s World.

In the same year as Netscape Navigator 2.0 was released, Microsoft released Internet Explorer 2.0. At this point, Internet Explorer was still very-much playing catch-up with the features the Netscape team had implemented, but clearly some senior Microsoft engineer took a look at the <blink> tag, refused to play along with the joke, but had an innovation of their own: the <marquee> tag! It had a whole suite of attributes to control the scroll direction, speed, and whether it looped or bounced backwards and forwards. While <blink> encouraged disgusting and inaccessible design as a joke, <marquee> did it on purpose.

<MARQUEE>Oh my god this still works in most modern browsers!</MARQUEE>

Oh my god this still works in most modern browsers!

If you see the text above moving… you’re looking at a living fossil in browser history.

But here’s the interesting bit: for a while in the late 1990s, it became a somewhat common practice to wrap content that you wanted to emphasise with animation in both a <blink> and a <marquee> tag. That way, the Netscape users would see it flash, the IE users would see it scroll or bounce. Like this:

<MARQUEE><BLINK>This is my really important message!</BLINK></MARQUEE>
Internet Explorer 5 showing a marquee effect.
Wrap a <blink> inside a <marquee> and IE users will see the marquee. Delightful.

The web has always been built on Postel’s Law: a web browser should assume that it won’t understand everything it reads, but it should provide a best-effort rendering for the benefit of its user anyway. Ever wondered why the modern <video> element is a block rather than a self-closing tag? It’s so you can embed within it content that an earlier browser – one that doesn’t understand <video> – can render instead (a browser’s default behaviour when it sees an element it doesn’t understand is to ignore the tags and carry on). So embedding a <blink> in a <marquee> gave you the best of both worlds, right? (well…)
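
By way of illustration, here’s a rough sketch of that fallback behaviour (the filename is made up):

<!-- Browsers that understand <video> play the file; older browsers
     ignore the unfamiliar tags and render the content inside them
     instead. -->
<video src="dancing-hamster.mp4" controls>
  Your browser doesn't support <video>, but you can
  <a href="dancing-hamster.mp4">download the clip</a> instead.
</video>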

Netscape Navigator 5 showing a blink effect.
Wrap a <blink> inside a <marquee> and Netscape users will see the blink. Joy.

Better yet, you were safe in the knowledge that anybody using a browser that didn’t understand either of these tags could still read your content. Used properly, the web is about progressive enhancement. Implement for everybody, enhance for those who support the shiny features. JavaScript and CSS can be applied with the same rules, and doing so pays dividends in maintainability and accessibility (though, sadly, that doesn’t stop people writing sites that needlessly require these technologies).

Opera 5 showing no blinking nor marquee text.
Personally, I was a (paying! – back when people used to pay for web browsers!) Opera user so I mostly saw neither <blink> nor <marquee> elements. I don’t feel like I missed out.

I remember, though, the first time I tried Netscape 7, in 2002. Netscape 7 and its close descendants are, as far as I can tell, the only web browsers to support both <blink> and <marquee>. Even then, it was picky about the order in which they were presented and the elements wrapped-within them. But support was good enough that some people’s personal web pages suddenly began to exhibit the most ugly effect imaginable: the combination of both scrolling and flashing text.

Netscape 7 showing text that both blinks and marquee-scrolls.
If Netscape 7’s UI didn’t already make your eyes bleed (I’ve toned it down here by installing the “classic skin”), its simultaneous rendering of <blink> and <marquee> would.

The <blink> tag is very-definitely dead (hurrah!), but you can bring it back with pure CSS if you must. <marquee>, amazingly, still survives, not only in polyfills but natively, as you might be able to see above. However, if you’re in any doubt as to whether or not you should use it: you shouldn’t. If you’re looking for digital nostalgia, there’s a whole rabbit hole to dive down, but you don’t need to inflict <marquee> on the rest of us.
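
If you really must relive it, a minimal sketch in modern CSS (assuming you’ve given the offending element a class of “blink”) looks something like this:

<style>
  /* Toggle visibility on and off, once a second, forever: a rough
     approximation of what <blink> used to do. */
  .blink {
    animation: blink 1s steps(2, start) infinite;
  }
  @keyframes blink {
    to { visibility: hidden; }
  }
</style>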


A Trip Through New York City in 1911

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

With help from a neural network, Denis takes original cinematography of New York City in 1911 and uploads it as a cleaned, upscaled, high-framerate, colourised YouTube video. It’s pretty remarkable: compare it to the source video to see how much of a difference it makes: side-by-side, the smoothness of the frame rate alone is striking. It’s a shame that nothing can be done about the underexposed bits of the film where contrast detail is lacking: I wonder if additional analysis of the original print itself might be able to extract some extra information from these areas and then improve them using the same kinds of techniques.

In any event, a really interesting window into history!

Note #16758

Today my #distributed #remotework office is provided by @OCFI_OI, which provides me with that most #Oxford of views: simultaneously containing architecture of the 1960s… and the 1160s.

View of 1960s buildings in front of Oxford Castle.


Where’s My Elephant?

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

The “where’s my elephant?” theory takes its name, of course, from The Simpsons episode in which Bart gets an elephant (Season 5, episode 17, to be precise). For those of you who don’t know the episode: Bart wins a radio contest where you have to answer a phone call with the phrase, “KBBL is going to give me something stupid.” That “something stupid” turns out to be either $10,000, or “the gag prize”: a full-grown African elephant. Much to the presenters’ surprise, Bart chooses the elephant — which is a problem for the radio station, since they don’t actually have an elephant to give him. After some attempts at negotiation (the presenters offer Principal Skinner $10,000 to go about with his pants pulled down for the rest of the school year; the presenters offer to use the $10,000 to turn Skinner into “some sort of lobster-like creature”), Bart finds himself kicked out of the radio station, screaming “where’s my elephant?”

…the “where’s my elephant?” theory holds the following:

  1. If you give someone a joke option, they will take it.
  2. The joke option is (usually) a joke option for a reason, and choosing it will cause everyone a lot of problems.
  3. In time, the joke will stop being funny, and people will just sort of lose interest in it.
  4. No one ever learns anything.

For those who were surprised when Trump was elected or when the Brexit referendum passed, the “Where’s My Elephant?” theory of history may provide some solace. With reference to Boaty McBoatface and to the assassination of Qasem Soleimani, Tom Whyman pitches that “joke” options will be selected significantly more-often than you’d expect or than they should.

Our society is like Bart Simpson. But can we be a better Bart Simpson?

If that didn’t cheer you up: here’s another article, which more-seriously looks at the political long-game that Remainers in Britain might consider working towards.

It’s 2020 and you’re in the future

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

West Germany’s 1974 World Cup victory happened closer to the first World Cup in 1930 than to today.

The Wonder Years aired from 1988 to 1993 and depicted the years between 1968 and 1973. When I watched the show, it felt like it was set in a time long ago. If a new Wonder Years premiered today, it would cover the years between 2000 and 2005.

Also, remember when Jurassic Park, The Lion King, and Forrest Gump came out in theaters? Closer to the moon landing than today.

These things come around now and again, but I’m not sure of the universal validity of observing that a memorable event is now closer to another memorable event than it is to the present day. I don’t think that the relevance of events is as linear as that. Instead, perhaps, it looks something like this:

Graph showing that recent events matter a lot, but rapidly tail off for a while before levelling out again as they become long-ago events.
Recent events matter more than ancient events to the popular consciousness, all other things being equal, but relative to one another the ancient ones are less-relevant and there’s a steep drop-off somewhere between the two.

Where the drop-off in relevance occurs is hard to pinpoint and it probably varies a lot by the type of event that’s being remembered: nobody seems to care about what damn terrible thing Trump did last month or the month before when there’s some new terrible thing he did just this morning, for example (I haven’t looked at the news yet this morning, but honestly whenever you read this post he’ll probably have done something awful).

Nonetheless, this post on Wait But Why was a fun distraction, even if it’s been done before. Maybe the last time it happened was so long ago it’s irrelevant now?

XKCD 1393: Timeghost - 'Hello, Ghostbusters?' 'ooOOoooo people born years after that movie came out are having a second chiiiild right now ooOoooOoo'
Of course, there’s a relevant XKCD. And it was published closer to the theatrical releases of Cloudy with a Chance of Meatballs and Paranormal Activity than it was to today. OoooOOoooOOoh.

The Legend of the Homicidal Fire-Proof Salamander

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

In the first century AD, Roman naturalist Pliny the Elder threw a salamander into a fire. He wanted to see if it could indeed not only survive the flames, but extinguish them, as Aristotle had claimed such creatures could. But the salamander didn’t … uh … make it.

Yet that didn’t stop the legend of the fire-proof salamander (a name derived from the Persian meaning “fire within”) from persisting for 1,500 more years, from the Ancient Romans to the Middle Ages on up to the alchemists of the Renaissance. Some even believed it was born in fire, like the legendary Phoenix, only slimier and a bit less dramatic. And that its fur (huh?) could be used to weave fire-resistant garments.

Back when the world felt bigger and more-mysterious it was easier for people to come to the conclusion, based on half-understood stories passed-on many times, that creatures like unicorns, dragons, and whatever the Vegetable Lamb of Tartary was supposed to be, might exist just beyond the horizons. Nature was full of mystery and the simple answer – that salamanders might live in logs and then run to escape when those logs are thrown onto a fire – was far less-appealing than the idea that they might be born from the fire itself! Let’s not forget that well into the Middle Ages it was widely believed that many forms of life appeared not through reproduction but by spontaneous generation: clams forming themselves out of sand, maggots out of meat, and so on… with this underlying philosophy, it’s easy to make the leap that sure, amphibians from fire makes sense too, right?

Perhaps my favourite example of such things is the barnacle goose, which – prior to the realisation that birds migrate, and coupled with the fact that they were never seen to nest in England – led to the widespread belief that they spontaneously developed (at the appropriate point in the season) from shellfish… this may be the root of the word “barnacle” as used to describe the filter-feeders with which we’re familiar. So prevalent was this belief that well into the 15th century (and in some parts of the world the late 18th century) this particular species of goose was treated as being a fish, not a bird, for the purpose of Christian fast-days.

Anyway; that diversion aside, this article’s an interesting look at the history of mythological beliefs about salamanders.

Evolving Computer Words: “Hacker”

This is part of a series of posts on computer terminology whose popular meaning – determined by surveying my friends – has significantly diverged from its original/technical one. Read more evolving words…

Anticipatory note: based on the traffic I already get to my blog and the keywords people search for, I imagine that some people will end up here looking to learn “how to become a hacker”. If that’s your goal, you’re probably already asking the wrong question, but I direct you to Eric S. Raymond’s Guide/FAQ on the subject. Good luck.

Few words have seen such mutation of meaning over their lifetimes as the word “silly”. The earliest references, found in Old English, Proto-Germanic, and Old Norse and presumably having an original root even earlier, meant “happy”. By the end of the 12th century it meant “pious”; by the end of the 13th, “pitiable” or “weak”; only by the late 16th coming to mean “foolish”; its evolution continues in the present day.

Right, stop that! It's too silly.
The Monty Python crew were certainly the experts on the contemporary use of the word.

But there’s little so silly as the media-driven evolution of the word “hacker” into something that’s at least a little offensive to those of us who probably would be described as hackers. Let’s take a look.

Hacker

What people think it means

Computer criminal with access to either knowledge or tools which are (or should be) illegal.

What it originally meant

Expert, creative computer programmer; often politically inclined towards information transparency, egalitarianism, anti-authoritarianism, anarchy, and/or decentralisation of power.

The Past

The earliest recorded uses of the word “hack” had a meaning that is unchanged to this day: to chop or cut, as you might describe hacking down an unruly bramble. There are clear links between this and the contemporary definition, “to plod away at a repetitive task”. However, it’s less certain how the word came to be associated with the meaning it would come to take on in the computer labs of 1960s university campuses (the earliest references seem to come from around April 1955).

There, the word hacker came to describe computer experts who were developing a culture of:

  • sharing computer resources and code (even to the extent, in extreme cases, of breaking into systems to establish more-equal opportunity of access),
  • learning everything possible about humankind’s new digital frontiers (hacking to learn, not learning to hack),
  • judging others only by their contributions and not by their claims or credentials, and
  • discovering and advancing the limits of computers: it’s been said that the difference between a non-hacker and a hacker is that a non-hacker asks of a new gadget “what does it do?”, while a hacker asks “what can I make it do?”
Venn-Euler-style diagram showing crackers as a subset of security hackers, who in turn are a subset of hackers. Script kiddies are a group of their own, off to the side where nobody has to talk to them (this is probably for the best).
What the media generally refers to as “hackers” would, within the hacker community, more-accurately be called crackers: a subset of security hackers, in turn a subset of hackers as a whole. Script kiddies – people who use hacking tools exclusively for mischief without fully understanding what they’re doing – are a separate subset on their own.

It is absolutely possible for hacking, then, to involve no lawbreaking whatsoever. Plenty of hacking involves writing (and sharing) code, reverse-engineering technology and systems you own or to which you have legitimate access, and pushing the boundaries of what’s possible in terms of software, art, and human-computer interaction. Even among hackers with a specific interest in computer security, there’s plenty of scope for the legal pursuit of their interests: penetration testing, security research, defensive security, auditing, vulnerability assessment, developer education… (I didn’t say cyberwarfare because 90% of its application is of questionable legality, but it is of course a big growth area.)

Getty Images search for "Hacker".
Hackers have a serious image problem, and the best way to see it is to search on your favourite stock photo site for “hacker”. If you don’t use a laptop in a darkened room, wearing a hoodie and optionally mask and gloves, you’re not a real hacker. Also, 50% of all text should be green, 40% blue, 10% red.

So what changed? Hackers got famous, and not for the best reasons. A big tipping point came in the early 1980s when hacking group The 414s broke into a number of high-profile computer systems, mostly by using the default password which had never been changed. The six teenagers responsible were arrested by the FBI but few were charged, and those that were, were charged only with minor offences. This was at least in part because there weren’t yet solid laws under which to prosecute them, but also because they were cooperative, apologetic, and for the most part hadn’t caused any real harm. Mostly they’d just been curious about what they could get access to, and were interested in exploring the systems to which they’d logged-in, and seeing how long they could remain there undetected. These remain common motivations for many hackers to this day.

"Hacker" Dan Q
Hoodie: check. Face-concealing mask: check. Green/blue code: check. Is I a l33t hacker yet?

News media though – after being excited by “hacker” ideas introduced by WarGames – rightly realised that a hacker with the same elementary resources as these teens but with malicious intent could cause significant real-world damage. Bruce Schneier argued last year that the danger of this may be higher today than ever before. The press ran news stories that strongly associated the word “hacker” with the illegal activities in which some hackers engage. The release of Neuromancer the following year, coupled with an increasing awareness of and organisation by hacker groups and a number of arrests on both sides of the Atlantic, only fuelled things further. By the end of the decade it was essentially impossible for a layperson to see the word “hacker” in anything other than a negative light. Counter-arguments like The Conscience of a Hacker (Hacker’s Manifesto) didn’t reach remotely the same audiences: and even if they had, the points they made remain hard to sympathise with for those outside of hacker communities.

"Glider" Hacker Emblem
‘Nuff said.

A lack of understanding about what hackers did and what motivated them made them seem mysterious and otherworldly. People came to make the same assumptions about hackers that they do about magicians – that their abilities are the result of being privy to tightly-guarded knowledge rather than years of practice – and this elevated them to a mythical level of threat. By the time that Kevin Mitnick was jailed in the mid-1990s, prosecutors were able to successfully persuade a judge that this “most dangerous hacker in the world” must be kept in solitary confinement and with no access to telephones to ensure that he couldn’t, for example, “start a nuclear war by whistling into a pay phone”. Yes, really.

Four hands on one keyboard, from CSI: Cyber
Whistling into a phone to start a nuclear war? That makes CSI: Cyber seem realistic [watch].

The Future

Every decade’s hackers have debated whether or not the next decade’s have correctly interpreted their idea of “hacker ethics”. For me, Steven Levy’s tenets encompass them best:

  1. Access to computers – and anything which might teach you something about the way the world works – should be unlimited and total.
  2. All information should be free.
  3. Mistrust authority – promote decentralization.
  4. Hackers should be judged by their hacking, not bogus criteria such as degrees, age, race, or position.
  5. You can create art and beauty on a computer.
  6. Computers can change your life for the better.

Given these concepts as representative of hacker ethics, I’m convinced that hacking remains alive and well today. Hackers continue to be responsible for many of the coolest and most-important innovations in computing, and are likely to continue to do so. Unlike many other sciences, where progress over the ages has gradually pushed innovators away from backrooms and garages and into labs to take advantage of increasingly-precise generations of equipment, the tools of computer science are increasingly available to individuals. More than ever before, bedroom-based hackers are able to get started on their journey with nothing more than a basic laptop or desktop computer and a stack of freely-available open-source software and documentation. That progress may be threatened by the growth in popularity of easy-to-use (but highly locked-down) tablets and smartphones, but the barrier to entry is still low enough that most people can pass it, and the new generation of ultra-lightweight computers like the Raspberry Pi are doing their part to inspire the next generation of hackers, too.

That said, and as much as I personally love and identify with the term “hacker”, the hacker community has never been less in-need of this overarching label. The diverse variety of types of technologist nowadays, coupled with the infiltration of pop culture by geek culture, has inevitably diluted the term, and it’s increasingly replaced by a multitude of others, each describing a narrow but understandable part of the hacker mindset. You can describe yourself today as a coder, gamer, maker, biohacker, upcycler, cracker, blogger, reverse-engineer, social engineer, unconferencer, or one of dozens of other terms that more-specifically ties you to your community. You’ll be understood and you’ll be elegantly sidestepping the implications of criminality associated with the word “hacker”.

The original meaning of “hacker” has also been soiled from within its community: its biggest and perhaps most-famous advocate‘s insistence upon linguistic prescriptivism came under fire just this year after he pushed for a dogmatic interpretation of the term “sexual assault” in spite of a victim’s experience. This seems to be absolutely representative of his general attitudes towards sex, consent, women, and appropriate professional relationships. Perhaps distancing ourselves from the old definition of the word “hacker” can go hand-in-hand with distancing ourselves from some of the toxicity in the field of computer science?

(I’m aware that I linked at the top of this blog post to the venerable but also-problematic Eric S. Raymond; if anybody can suggest an equivalent resource by another author I’d love to swap out the link.)

Verdict: The word “hacker” has become so broad in scope that we’ll never be able to rein it back in. It’s tainted by its associations with both criminality, on one side, and unpleasant individuals on the other, and it’s time to accept that the popular contemporary meaning has won. Let’s find new words to define ourselves, instead.


Evolving Computer Words: “GIF”

This is part of a series of posts on computer terminology whose popular meaning – determined by surveying my friends – has significantly diverged from its original/technical one. Read more evolving words…

The language we use is always changing, like how the word “cute” was originally a truncation of the word “acute”, which you’d use to describe somebody who was sharp-witted, as in “don’t get cute with me”. Nowadays, we use it when describing adorable things, like the subject of this GIF:

[Animated GIF] Puppy flumps onto a human.
Cute, but not acute.
But hang on a minute: that’s another word that’s changed meaning: GIF. Want to see how?

GIF

What people think it means

File format (or the files themselves) designed for animations and transparency. Or: any animation without sound.

What it originally meant

File format designed for efficient colour images. Animation was secondary; transparency was an afterthought.

The Past

Back in the 1980s cyberspace was in its infancy. Sir Tim hadn’t yet dreamed up the Web, the Internet wasn’t something that most people could connect to, and bulletin board systems (BBSes) – dial-up services, often local or regional, sometimes connected to one another in one of a variety of ways – dominated the scene. Larger services like CompuServe acted a little like huge BBSes but with dial-up nodes in multiple countries, helping to bridge the international gaps and provide a lower learning curve than the smaller boards (albeit for a hefty monthly fee in addition to the costs of the calls). These services would later go on to double as, and eventually become exclusively, Internet Service Providers, but for the time being they were a force unto themselves.

CompuServe ad circa 1983
My favourite bit of this 1983 magazine ad for CompuServe is the fact that it claims a trademark on the word “email”. They didn’t try very hard to cling on to that claim, unlike their controversial patent on the GIF format…

In 1987, CompuServe were about to start rolling out colour graphics as a new feature, but needed a new graphics format to support that. Their engineer Steve Wilhite had the idea for a bitmap image format backed by LZW compression and called it GIF, for Graphics Interchange Format. Each image could be composed of multiple frames, each having up to 256 distinct colours (hence the common mistaken belief that a GIF can only have 256 colours). The nature of the palette system and compression algorithm made GIF a particularly efficient format for (still) images with solid contiguous blocks of colour, like logos and diagrams, though it generally underperformed against discrete cosine transform-based formats like JPEG/JFIF for images with gradients (like most photos).

GIF with more than 256 colours.
This animated GIF (of course) shows how it’s possible to have more than 256 colours in a GIF by separating it into multiple non-temporal frames.
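
If you’d like to verify that for yourself, here’s a little Python sketch using the third-party Pillow library (point it at any GIF file you have to hand):

# Count the distinct colours used across all frames of a GIF. Each
# frame's palette holds at most 256 entries, but frames needn't share
# a palette, so the total can exceed 256.
from PIL import Image, ImageSequence
import sys

gif = Image.open(sys.argv[1])
colours = set()
for frame in ImageSequence.Iterator(gif):
    colours.update(frame.convert("RGB").getdata())
print(f"{len(colours)} distinct colours across {gif.n_frames} frames")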

GIF would go on to become most famous for two things, neither of which it was capable of upon its initial release: binary transparency (having “see through” bits, which made it an excellent choice for use on Web pages with background images or non-static background colours; these would become popular in the mid-1990s) and animation. Animation involves adding a series of frames which overlay one another in sequence: extensions to the format in 1989 allowed the creator to specify the duration of each frame, making the feature useful (prior to this, they would be displayed as fast as they could be downloaded and interpreted!). In 1995, Netscape added a custom extension to GIF to allow them to loop (either a specified number of times or indefinitely) and this proved so popular that virtually all other software followed suit, but it’s worth noting that “looping” GIFs have never been part of the official standard!

Hex editor view of a GIF file's metadata section, showing Netscape headers.
Open almost any animated GIF file in a hex editor and you’ll see the word NETSCAPE2.0; evidence of Netscape’s role in making animated GIFs what they are today.
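
If you’d rather not squint at a hex editor, here’s a quick-and-dirty Python sketch – a byte search, not a real GIF parser – that looks for that Netscape extension and reports the loop count:

# The Netscape looping extension, where present, contains the bytes
# "NETSCAPE2.0" followed by a sub-block: 0x03, 0x01, then a 16-bit
# little-endian loop count (zero meaning "loop forever").
import struct, sys

data = open(sys.argv[1], "rb").read()
marker = data.find(b"NETSCAPE2.0")
if marker == -1:
    print("No Netscape extension: this GIF won't loop.")
else:
    loops = struct.unpack_from("<H", data, marker + 13)[0]
    print("Loops forever" if loops == 0 else f"Loops {loops} times")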

Compatibility was an issue. For a period during the mid-nineties it was quite possible that among the visitors to your website there would be a mixture of:

  1. people who wouldn’t see your GIFs at all, owing to browser, bandwidth, preference, or accessibility limitations,
  2. people who would only see the first frame of your animated GIFs, because their browser didn’t support animation,
  3. people who would see your animation play once, because their browser didn’t support looping, and
  4. people who would see your GIFs as you intended, fully looping

This made it hard to depend upon GIFs without carefully considering their use. But people still did, and they just stuck a Netscape Now button on to warn people, as if that made up for it. All of this has happened before, etc.

In any case: as better, newer standards like PNG came to dominate the Web’s need for lossless static (optionally transparent) image transmission, the only thing GIFs remained good for was animation. Standards like APNG/MNG failed to get off the ground, and so GIFs remained the dominant animated-image standard. As Internet connections became faster and faster in the 2000s, they experienced a resurgence in popularity. The Web didn’t yet have the <video> element and so embedding videos on pages required a mixture of at least two of <object>, <embed>, Flash, and black magic… but animated GIFs just worked and soon appeared everywhere.

Magic.
How animation online really works.

The Future

Nowadays, when people talk about GIFs, they often don’t actually mean GIFs! If you see a GIF on Giphy or WhatsApp, you’re probably actually seeing an MPEG-4 video file with no audio track! Now that Web video is widely-supported, service providers know that they can save on bandwidth by delivering you actual videos even when you expect a GIF. More than ever before, GIF has become a byword for short, often-looping Internet animations without sound… even though that’s got little to do with the underlying file format that the name implies.

What's a web page? Something ducks walk on?
What’s a web page? What’s anything?

Verdict: We still can’t agree on whether to pronounce it with a soft-G (“jif”), as Wilhite intended, or with a hard-G, as any sane person would, but it seems that GIFs are here to stay in name even if not in form. And that’s okay. I guess.


Evolving Computer Words: “Broadband”

This is part of a series of posts on computer terminology whose popular meaning – determined by surveying my friends – has significantly diverged from its original/technical one. Read more evolving words…

Until the 17th century, to “fathom” something was to embrace it. Nowadays, it’s more likely to refer to your understanding of something in depth. The migration came via the similarly-named imperial unit of measurement, which was originally defined as the span of a man’s outstretched arms, so you can understand how we got from one to the other. But you know what I can’t fathom? Broadband.

Woman hugging a dalmatian. Photo by Daria Shevtsova from Pexels.
I can’t fathom dalmatians. But this woman can.

Broadband Internet access has become almost ubiquitous over the last decade and a half, but ask people to define “broadband” and they have a very specific idea about what it means. It’s not the technical definition, and this re-invention of the word can cause problems.

Broadband

What people think it means

High-speed, always-on Internet access.

What it originally meant

Communications channel capable of multiple different traffic types simultaneously.

The Past

Throughout the 19th century, optical (semaphore) telegraph networks gave way to the new-fangled electrical telegraph, which not only worked regardless of the weather but resulted in significantly faster transmission. “Faster” here means two distinct things: latency – how long it takes a message to reach its destination, and bandwidth – how much information can be transmitted at once. If you’re having difficulty understanding the difference, consider this: a man on a horse might be faster than a telegraph if the size of the message is big enough because a backpack full of scrolls has greater bandwidth than a Morse code pedal, but the latency of an electrical wire beats land transport every time. Or as Andrew S. Tanenbaum famously put it: Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.

Telephone-to-heliograph conversion circa 1912.
There were transitional periods. This man, photographed in 1912, is relaying a telephone message into a heliograph. I’m not sure what message he’s transmitting, but I’m guessing it ends with a hashtag.

Telegraph companies were keen to be able to increase their bandwidth – that is, to get more messages on the wire – and this was achieved by multiplexing. The simplest approach, time-division multiplexing, involves messages (or parts of messages) “taking turns”, and doesn’t actually increase bandwidth at all – although it does improve the perception of speed by giving recipients the start of their messages early on. A variety of other multiplexing techniques were (and continue to be) explored, but the one that’s most-interesting to us right now was called acoustic telegraphy: today, we’d call it frequency-division multiplexing.

What if, asked folks-you’ll-have-heard-of like Thomas Edison and Alexander Graham Bell, we were to send telegraph messages down the line at different frequencies? Some beeps and bips would be high tones, and some would be low tones, and a machine at the receiving end could separate them out again (so long as you chose your frequencies carefully, to avoid harmonic distortion). As might be clear from the names I dropped earlier, this approach – sending sound down a telegraph wire – ultimately led to the invention of the telephone. Hurrah, I’m sure they all immediately called one another to say, our efforts to create a higher-bandwidth medium for telegrams have accidentally resulted in a lower-bandwidth (but more-convenient!) way for people to communicate. Job’s a good ‘un.
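
For the curious, here’s a toy Python sketch of frequency-division multiplexing in the spirit of acoustic telegraphy (the frequencies and thresholds are arbitrary choices of mine): two senders share one “wire” by keying tones at different pitches, and the receiver tells them apart by measuring the energy near each pitch.

# Two on/off-keyed "telegraph" signals share a single channel.
import numpy as np

rate = 8000                            # samples per second
t = np.arange(rate) / rate             # one second of signal
a_on = (t % 0.5) < 0.25                # sender A's keying pattern
b_on = (t % 0.2) < 0.1                 # sender B's keying pattern
wire = a_on * np.sin(2 * np.pi * 600 * t) + b_on * np.sin(2 * np.pi * 1500 * t)

# The receiver recovers each pattern by probing its sender's frequency
# over short windows (a one-bin discrete Fourier transform).
def energy_at(freq, chunk):
    probe = np.exp(-2j * np.pi * freq * np.arange(len(chunk)) / rate)
    return abs(np.dot(chunk, probe))

for start in range(0, rate, 400):      # 50ms windows
    chunk = wire[start:start + 400]
    print("A:", energy_at(600, chunk) > 50, " B:", energy_at(1500, chunk) > 50)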

Electro-acoustic telegraph "tuning fork".
If this part of Edison’s 1878 patent looks like a tuning fork, that’s not a coincidence. These early multiplexers made distinct humming sounds as they operated, owing to the movement of the synchronised forks within.

Most electronic communications systems that have ever existed have been narrowband: they’ve been capable of only a single kind of transmission at a time. Even if you’re multiplexing a dozen different frequencies to carry a dozen different telegraph messages at once, you’re still only transmitting telegraph messages. For the most part, that’s fine: we’re pretty clever and we can find workarounds when we need them. For example, when we started wanting to be able to send data to one another (because computers are cool now) over telephone wires (which are conveniently everywhere), we did so by teaching our computers to make sounds and understand one another’s sounds. If you’re old enough to have heard a fax machine call a landline or, better yet, used a dial-up modem, you know what I’m talking about.

As the Internet became more and more critical to business and home life, and the limitations (of bandwidth and convenience) of dial-up access became increasingly hard to ignore, a better solution was needed. Bringing broadband to Internet access was necessary, but the technologies involved weren’t revolutionary: they were just the result of the application of a little imagination.

Dawson can't use the Internet because someone's on the phone.
I’ve felt your pain, Dawson. I’ve felt your pain.

We’d seen this kind of imagination before. Consider teletext, for example (for those of you too young to remember teletext, it was a standard for browsing pages of text and simple graphics using a 70s-90s analogue television), which is – strictly speaking – a broadband technology. Teletext works by embedding pages of digital data, encoded in an analogue stream, in the otherwise-“wasted” space in-between frames of broadcast video. When you told your television to show you a particular page, either by entering its three-digit number or by following one of four colour-coded hyperlinks, your television would wait until the page you were looking for came around again in the broadcast stream, decode it, and show it to you.
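
A toy model of that page-fetching behaviour, in Python (page numbers and contents invented for illustration):

# Teletext pages cycle round in the broadcast stream; the set simply
# waits until the requested page comes past again.
import itertools

broadcast = itertools.cycle(range(100, 900))   # pages, forever

def fetch(page_number):
    for page in broadcast:
        if page == page_number:
            return f"Contents of page {page_number}"

print(fetch(888))   # subtitles traditionally lived on page 888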

Teletext was, fundamentally, broadband. In addition to carrying television pictures and audio, the same radio wave was being used to transmit text: not pictures of text, but encoded characters. Analogue subtitles (which used basically the same technology): also broadband. Broadband doesn’t have to mean “Internet access”, and indeed for much of its history, it hasn’t.

Ceefax news article from 29 October 1988, about a cancelled Soviet shuttle launch.
My family started getting our news via broadband in about 1985. Not broadband Internet, but broadband nonetheless.

Here in the UK, ISDN (from 1988!) and later ADSL would be the first widespread technologies to provide broadband data connections over the copper wires simultaneously used to carry telephone calls. ADSL does this in basically the same way as Edison and Bell’s acoustic telegraphy: a portion of the available frequencies (usually the lowest 4kHz) is reserved for telephone calls, followed by a no-man’s-land band, followed by two frequency bands of different sizes (hence the asymmetry: the A in ADSL) for up- and downstream data. This, at last, allowed true “broadband Internet”.
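
Sketched out, the band plan looks something like this (the figures are approximate and vary between ADSL standards):

# An approximate ADSL-over-POTS frequency allocation, in Python:
bands = [
    ("voice (POTS)",    0,       4_000),
    ("guard band",      4_000,   25_000),
    ("upstream data",   25_000,  138_000),
    ("downstream data", 138_000, 1_104_000),
]
for name, low, high in bands:
    print(f"{name:>15}: {low / 1000:6.0f} - {high / 1000:6.0f} kHz")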

But was it fast? Well, relative to dial-up, certainly… but the essential nature of broadband technologies is that they share the bandwidth with other services. A connection that doesn’t have to share will always have more bandwidth, all other things being equal! Leased lines, despite technically being a narrowband technology, necessarily outperform broadband connections having the same total bandwidth because they don’t have to share it with other services. And don’t forget that not all speed is created equal: satellite Internet access is a narrowband technology with excellent bandwidth… but sometimes-problematic latency issues!

ADSL microfilter
Did you have one of these tucked behind your noughties router? This box filtered out the data from the telephone frequencies, helping to ensure that you can neither hear the pops and clicks of your ADSL connection nor interfere with it by shouting.

Equating the word “broadband” with speed is based on a consumer-centric misunderstanding about what broadband is, because it’s necessarily true that if your home “broadband” weren’t configured to be able to support old-fashioned telephone calls, it’d be (a) (slightly) faster, and (b) not-broadband.

The Future

But does the word that people use to refer to their high-speed Internet connection matter? More than you’d think: various countries around the world have begun to make legal definitions of the word “broadband” based not on the technical meaning but on the populist one, and it’s becoming a source of friction. In the USA, the FCC variously defines broadband as having a minimum download speed of 10Mbps or 25Mbps, among other characteristics (they seem to use the former when protecting consumer rights and the latter when reporting on penetration, and you can read into that what you will). In the UK, Ofcom‘s regulations differentiate between “decent” (yes, that’s really the word they use) and “superfast” broadband at 10Mbps and 24Mbps download speeds, respectively, while the Scottish and Welsh governments, as well as the EU, say it must be 30Mbps to be “superfast broadband”.

Faster, Harder, Scooter music video.
At full-tilt, going from 10Mbps to 24Mbps means taking only 4 seconds, rather than 11 seconds, to download the music video to Faster! Harder! Scooter!
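
(The maths behind that caption, as a sketch – the file size is my guess, reverse-engineered from the timings:)

# A hypothetical ~13.75MB music video is 110 megabits:
size_megabits = 110
for mbps in (10, 24):
    print(f"At {mbps}Mbps: about {size_megabits / mbps:.1f} seconds")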

I’m all in favour of regulation that protects consumers and makes it easier for them to compare products. It’s a little messy that definitions vary so widely on what different speeds mean, but that’s not the biggest problem. I don’t even mind that these agencies have all given themselves very little breathing room for the future: where do you go after “superfast”? Ultrafast (actually, that’s exactly where we go)? Megafast? Ludicrous speed?

What I mind is the redefining of a useful term – one that tells you whether or not a connection is shared with other services – so that it’s tied to a completely independent characteristic of that connection. It’d have been simple for the FCC, for example, to have defined e.g. “full-speed broadband” as providing a particular bandwidth.

Verdict: It’s not a big deal; I should just chill out. I’m probably going to have to throw in the towel anyway on this one and join the masses in calling all high-speed Internet connections “broadband”, regardless of how they’re set up, while not using the word for slower or non-Internet connections.
