Debugging WorldWideWeb

Earlier this week, I mentioned the exciting hackathon that produced a moderately-faithful reimagining of the world’s first Web browser. I was sufficiently excited about it that I not only blogged here but I also posted about it to MetaFilter. Of course, the very first thing that everybody there did was try to load MetaFilter in it, which… didn’t work.

MetaFilter failing to load on the reimagined WorldWideWeb.
500? Really?

People were quick to point this out and assume that it was something to do with the modernity of MetaFilter:

honestly, the disheartening thing is that many metafilter pages don’t seem to work. Oh, the modern web.

Some even went so far as to speculate that the reason related to MetaFilter’s use of CSS and JS:

CSS and JS. They do things. Important things.

This is, of course, complete baloney, and it’s easy to prove to oneself. Firstly, simply using the View Source tool in your browser on a MetaFilter page reveals source code that’s quite comprehensible, even human-readable, without going anywhere near any CSS or JavaScript.

MetaFilter in Lynx: perfectly usable browsing experience
As late as the early 2000s I’d occasionally use Lynx for serious browsing, but any time I’ve used it since, it’s been by necessity.

Secondly, it’s pretty simple to try browsing MetaFilter without CSS or JavaScript enabled! I tried in two ways: first, by using Lynx, a text-based browser that’s never supported either of those technologies; and second, by using Firefox with both of them disabled (honestly, I slightly miss when the Web used to look like this):

MetaFilter in Firefox (with CSS and JS disabled)
It only took me three clicks to disable stylesheets and JavaScript in my copy of Firefox… but I’ll be the first to admit that I don’t keep my browser configured like “normal people” probably do.

And thirdly: the error code being returned by the simulated WorldWideWeb browser is an HTTP 500. Even if you don’t know your HTTP codes (I mean, what kind of weirdo would take the time to memorise them all anyway <ahem>), it’s worth learning this: the first digit of an HTTP response code tells you what happened:

  • 1xx means “everything’s fine, keep going”;
  • 2xx means “everything’s fine and we’re done”;
  • 3xx means “try over there”;
  • 4xx means “you did something wrong” (the infamous 404, for example, means you asked for a page that doesn’t exist);
  • 5xx means “the server did something wrong”.
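
If you fancy teaching a computer to do the same triage, it really is just a matter of looking at the first digit; here’s a purely-illustrative JavaScript sketch (the function name is mine, not part of any standard):

  // Hypothetical helper: classify an HTTP status code by its first digit.
  function describeStatus(code) {
    const family = Math.floor(code / 100); // 500 -> 5, 404 -> 4, and so on
    return {
      1: "everything's fine, keep going",
      2: "everything's fine and we're done",
      3: "try over there",
      4: "you did something wrong",
      5: "the server did something wrong",
    }[family] || "that's not a status code I recognise";
  }

  describeStatus(500); // "the server did something wrong"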

Simple! The fact that the error code begins with a 5 strongly implies that the problem isn’t in the (client-side) reimplementation of WorldWideWeb: if this had been a CSS/JS problem, I’d expect to see a blank page, scrambled content, “filler” content, or incomplete content.

So I found myself wondering what the real problem was. This is, of course, where my geek flag becomes most-visible: what we’re talking about, let’s not forget, is a fringe problem in an incomplete simulation of an ancient computer program that nobody uses. Odds are incredibly good that nobody on Earth cares about this except, right now, for me.

Dan's proposed "Geek Flag"
I searched for a “Geek Flag” and didn’t like anything I saw, so I came up with this one based on… well, if you recognise what it’s based on, good for you, you’re certainly allowed to fly it. If not… well, you can too: there’s no geek-gatekeeping here.

Luckily, I spotted Jeremy’s note that the source code for the WorldWideWeb simulator was now available, so I downloaded a copy to take a look. Here’s what’s happening:

  1. The (simulated) copy of WorldWideWeb is asked to open a document by reference, e.g. “https://www.metafilter.com/”.
  2. To work around same-origin policy restrictions, the request is sent to an API which acts as a proxy server.
  3. The API makes a request using the Node package “request” with this line of code: request(url, (error, response, body) => { ... }). When the first parameter to request is a (string) URL, the module uses its default settings for all of the other options, which means that it doesn’t set the User-Agent header (an optional part of a Web request where the computer making the request identifies the software that’s asking).
  4. MetaFilter, for some reason, blocks requests whose User-Agent isn’t set. This is weird! And nonstandard: while web browsers should – in RFC2119 terms – set their User-Agent: header, web servers shouldn’t require that they do so. MetaFilter returns a 403 and a message to say “Forbidden”: usually a message you only see if you’re trying to access a resource that requires session authentication and you haven’t logged in yet.
  5. The API is programmed to handle response codes 200 (okay!) and 404 (not found), but if it gets anything else back it’s supposed to throw a 400 (bad request). Except there’s a bug: throwing that 400 requires that an error message has been set by the request module, and if one hasn’t… it instead throws a 500 with the message “Internal Server Fangle” and no clue as to what actually went wrong. So MetaFilter’s 403 gets translated by the proxy into a 400, which it fails to render (because a 403 doesn’t produce an error message in the request module), and so it gets translated again into the 500 that you eventually see. What a knock-on effect!
Illustration showing conversation between simulated WorldWideWeb and MetaFilter via an API that ultimately sends requests without a User-Agent, gets a 403 in response, and can't handle the 403 and so returns a confusing 500.
If you’re having difficulty visualising the process, this diagram might help you to continue your struggle with that visualisation.
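
To make that last step a bit more concrete, here’s a boiled-down sketch of the shape of the proxy’s error-handling (the helper name and structure are mine, for illustration only; the real source differs in its details):

  // Simplified sketch of the proxy's response handling; "sendBackToBrowser" is a
  // stand-in for however the real API returns its result to the simulated browser.
  const request = require('request');

  function proxyFetch(url, sendBackToBrowser) {
    request(url, (error, response, body) => {
      if (response && response.statusCode === 200) {
        sendBackToBrowser(200, body);               // okay!
      } else if (response && response.statusCode === 404) {
        sendBackToBrowser(404, 'Not Found');        // the one failure it expects
      } else if (error && error.message) {
        sendBackToBrowser(400, error.message);      // anything else is meant to become a 400...
      } else {
        // ...but MetaFilter's 403 isn't an "error" as far as request is concerned, so there's
        // no message to attach, and we fall all the way through to an unhelpful 500:
        sendBackToBrowser(500, 'Internal Server Fangle');
      }
    });
  }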

The fix is simple: change the line:

request(url, (error, response, body) => { ... })

to:

request({ url: url, headers: { 'User-Agent': 'WorldWideWeb' } }, (error, response, body) => { ... })

This then sets a User-Agent header and makes servers that require one, such as MetaFilter, respond appropriately. I don’t know whether WorldWideWeb originally set a User-Agent header (CERN’s source file archive seems to be missing the relevant C sources so I can’t check) but I suspect that it did, so this change actually improves the fidelity of the emulation as a bonus. A better fix would also add support for and appropriate handling of other HTTP response codes, but that’s a story for another day, I guess.

I know the hackathon’s over, but I wonder if they’re taking pull requests…


WorldWideWeb, 30 years on

This month, a collection of some of my favourite geeks got invited to CERN in Geneva to participate in a week-long hackathon with the aim of reimplementing WorldWideWeb – the first web browser, circa 1990-1994 – as a web application. I’m super jealous, but I’m also really pleased with what they managed to produce.

DanQ.me as displayed by the reimagined WorldWideWeb browser circa 1990
With the exception of a few character entity quirks, this site remains perfectly usable in the simulated WorldWideWeb browser. Clearly I wasn’t the only person to try this vanity-check…

This represents a huge leap forward from their last similar project, which aimed to recreate the line mode browser: the first web browser that didn’t require a NeXT computer to run it, and so a leap forward in mainstream appeal. In some ways, you might expect reimplementing WorldWideWeb to be easier, because its functionality is more-similar to that of a modern browser, but there were doubtless some challenges too: this early browser predated the concept of the DOM, and so there are distinct processing differences that must be considered to get a truly authentic experience.

Geeks hacking on WorldWideWeb reborn
It’s just like any other hackathon, if you ignore the enormous particle collider underneath it.

Among their outputs, the team also produced a cool timeline of the Web, which – thanks to some careful authorship – is as legible in WorldWideWeb as it is in a modern browser (if, admittedly, a little less pretty).

WorldWideWeb screenshot by Sir Tim Berners-Lee
When Sir Tim took this screenshot, he could never have predicted the way the Web would change, technically, over the next 25-30 years. But I’m almost more-interested in how it’s stayed the same.

In an age of increasing Single Page Applications and API-driven sites and “apps”, it’s nice to be reminded that if you develop right for the Web, your content will be visible (sort-of; I’m aware that there are some liberties taken here in memory and processing limitations, protocols and negotiation) on machines 30 years old, and that gives me hope that adherence to the same solid standards gives us a chance of writing pages today that will look just as good 30 years from now. Compare that to a proprietary technology like Flash, whose heyday 15 years ago is overshadowed by its imminent death (not to mention Java applets or ActiveX <shudders>), iOS apps which stopped working when the operating system went 64-bit, and websites which only work in specific browsers (traditionally Internet Explorer, though as I’ve complained before we’re getting more and more Chrome-only sites).

The Web is a success story in open standards, natural and by-design progressive enhancement, and the future-proof archivability of human-readable code. Long live the Web.

Update 24 February 2019: After I submitted news of the browser to MetaFilter, I (and others) spotted a bug. So I came up with a fix…


You probably don’t need a single-page application

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

The meteoric rise of front-end frameworks like React, Angular, Vue.js, Elm, etc. has made single-page applications ubiquitous on the web. For many developers, these have become part of their ‘default’ toolset. When they start a new project, they grab the tools they know already: a REST API on the backend, and a React/Angular/Vue/Elm frontend.

Is there something wrong with these tools? Absolutely not. In fact, I love working with them. However, I would only choose this architecture when an actual requirement is pushing me in that direction. If there are no specific reasons to build a single-page application, I will go with a traditional server-rendered architecture every day of the week. It is simpler and allows you to move faster.

There’s been an increasing trend towards delivering web applications as SPAs backed by an API. I can see the attraction: disposing of the browser’s navigation cycle lets you develop that coveted “app-like” interaction experience, pushing only data around lets you implement multiple clients backed by the same single middleware, and it results in a development workflow that fits tightly with many of the hippest frameworks (go jamstack, backendless, Node-backed, or whatever). I love REST and all, but I feel that it works best when it’s used to deliver multiformat results (whether by content negotiation or whatever): web pages for the humans, JSON or whatever for the computers.
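
To illustrate what I mean by multiformat delivery, here’s a rough Node/Express sketch (the route and data are invented for the example): the same endpoint serves a web page to a browser and JSON to a script, depending on what the client asks for.

  // Rough illustration of content negotiation in Express; the route and data are made up.
  const express = require('express');
  const app = express();

  app.get('/articles/:id', (req, res) => {
    const article = { id: req.params.id, title: 'An example article' }; // stand-in for a real lookup
    res.format({
      'text/html': () => res.send(`<h1>${article.title}</h1>`),         // a web page for the humans
      'application/json': () => res.json(article),                      // JSON for the computers
    });
  });

  app.listen(3000);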

For an increasing number of developers, SPAs are a golden hammer. Let’s fix that.

Post-it Note Affirmations and the Amazon Dash

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

The amazon dash is a pinnacle of modern web design. It’s one of the most intrusive, complex, and resource-dependent devices we’ve introduced into our homes, yet it appears as a simple oval with a single button for a single use. The use is absurdly narrow: the button will have a picture of Tide detergent, and when you press the button, Tide detergent is sent to your door.

Barely a week goes by without me discovering some horrifically over-engineered “solution” on the Internet. Amazon’s Dash buttons are terrible: disposable (plastic) single-purpose computers that could so easily have been made into something “more” – more-versatile, more-open, more-configurable, more-flexible. Indeed: people have been doing exactly that kind of thing! But the vanilla Dash button does little more than sell you convenience (and not much convenience, if we’re honest) in exchange for more and more of your feeling of digital freedom. Yet another example of what replaced the Web we lost…

By hiding the technical processes, and simplifying the onboarding and engagement of their services, Amazon can continually reinforce your depression for a profit— and you can get name-brand laundry detergent faster.

Also, can I just take a moment to point out how awesome Zach’s website is? Not only is it a perfect example of how fun and weird the Internet can be, with its mixture of fascinating and curious content, it’s also available via dat:// for those of you who’ve got some love for the datbaseiverse.

Shouldn’t We All Have Seamless Micropayments By Now?

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Shouldn’t We All Have Seamless Micropayments By Now? (WIRED)

The web’s founders fully expected some form of digital payment to be integral to its functioning. But nearly three decades later, we’re still waiting.

Back in the 1990s, when Tim Berners-Lee and his team were creating the infrastructure of the World Wide Web, they made a list of the error codes that would pop up when something went wrong. You’ve surely encountered many of them: “404 Not Found,” which pops up if you click on a dead link; “401 Unauthorized” when you hit a page that needs a password; and so on.

Here’s one you probably haven’t seen—and its absence from your life speaks to why the promise of the early web seems increasingly out of reach: “402 Payment Required.”

That’s right: The web’s founders fully expected some form of digital payment to be integral to its functioning, just as integral as links, web pages, and passwords. After all, without a way to quickly and smoothly exchange money, how would a new economy be able to flourish online? Of course there ought to be a way to integrate digital cash into browsing and other activities. Of course.

Yet after almost three decades, that 402 error code is still “reserved for future use.” So I still have to ask: Where are my digital micropayments? Where are those frictionless, integrated ways of exchanging money online—cryptographically protected to allow commerce but not surveillance?

In response to this article being discussed on MetaFilter, I wrote:

The Web Payments Working Group published a specification for a standardised mechanism for the collection of card payment details online, a couple of years ago. It’s not quite the same thing because it’s done at the page/application level rather than at the HTTP(S) level, but it goes a long way towards solving a lot of the problems with our existing approach to payment processing online.

It’s already seeing adoption in browsers, but merchants and payment processors are unlikely to start rolling it out until later because (a) they want critical mass and (b) they’re wary of change. But within a few years, you’ll probably see it for the first time, and you might not even notice.

The idea is that instead of asking you to fill out an (arbitrary) form, a web page will ask your browser to get payment details from you in a standardised format. That might mean entering your card details if that’s how you prefer to work (but even if you choose to do this, the form you fill in will look the same every time) but it would instead allow you to use a payment tool built in to your browser, operating system, or password safe to do it for you. I know that browsers and password safes will offer to try to do this today, but standardising the format means that they’ll always be able to achieve it.

Once this technology’s in place, there’s nothing to stop HTTP 402’s implementation being completed: all the infrastructure will exist.

The thing about the future is that when it arrives, you don’t even notice. It’s never jetpacks and flying cars: it’s a series of iterative changes, each one predictable after the completion of the last but the entire ensemble seeming innovative and surprising when taken as a whole.
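
As an aside to that comment: if you’re curious what the standardised “ask the browser” step looks like to a developer, here’s a minimal sketch using the Payment Request API. The payment method and amounts are examples only, and real code would feature-detect and fall back to a conventional form.

  // Minimal sketch of asking the browser for payment details via the Payment Request API.
  // A real page would check for window.PaymentRequest first and fall back to a form.
  const paymentRequest = new PaymentRequest(
    [{ supportedMethods: 'basic-card' }],                                     // how the site accepts payment
    { total: { label: 'Total', amount: { currency: 'GBP', value: '10.00' } } }
  );

  paymentRequest.show()                                                       // the browser's own, consistent UI
    .then(response => {
      // pass response.details along to your payment processor, then:
      return response.complete('success');
    })
    .catch(() => {
      // the user dismissed it (or the API isn't supported): show a normal form instead
    });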

Let’s bring Fan Sites and webrings back!

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Let’s bring Fan Sites and webrings back! – bryanlrobinson.com (bryanlrobinson.com)

In the days before the web was mainstream, it was a place of creation. First for education, then for every random idea that any creator had!

As the web transitioned from a network of educational institutions to the consumer force it is today, the early adopters were technologists… AKA geeks!

Promo image of various Fan Sites

A hallmark of geek culture is fandom – a deep knowledge of whatever topic interests them. This could be about a book, TV show, movie or band. With this passion comes a desire to share it with the world. Before the internet, there was no clear path. After the web started gaining traction, it was the biggest and easiest megaphone you could want.

It wasn’t always easy to be found, though. There was no search algorithm. Google was not yet synonymous with search. To be found, you needed to be listed on a site that aggregated other sites about your topic.

There was always a certain joy to a well-kept webring, back in the day. I’d love to see a return to this kind of “Indieweb dream”, but I don’t think that just wishing for it nor even telling people to go out and do it goes far enough, alone. Hopefully Bryan’s post will help nudge a few people in the right direction, though.

Browser diversity starts with us

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Jeffrey Zeldman (zeldman.com)

Even if you love Chrome, adore Gmail, and live in Google Docs or Analytics, no single company, let alone a user-tracking advertising giant, should control the internet.

Diversity is as good for the web as it is for society. And it starts with us.

Yet more fallout from the Microsoft announcement that Edge will switch to Chromium, which I discussed earlier. This one’s pretty inspirational, and gives a good reminder about what our responsibilities are to the Web, as its developers.

Goodbye, EdgeHTML

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Microsoft is officially giving up on an independent shared platform for the internet. By adopting Chromium, Microsoft hands over control of even more of online life to Google.

This may sound melodramatic, but it’s not. The “browser engines” — Chromium from Google and Gecko Quantum from Mozilla — are “inside baseball” pieces of software that actually determine a great deal of what each of us can do online. They determine core capabilities such as which content we as consumers can see, how secure we are when we watch content, and how much control we have over what websites and services can do to us. Microsoft’s decision gives Google more ability to single-handedly decide what possibilities are available to each one of us.

From a business point of view Microsoft’s decision may well make sense. Google is so close to almost complete control of the infrastructure of our online lives that it may not be profitable to continue to fight this. The interests of Microsoft’s shareholders may well be served by giving up on the freedom and choice that the internet once offered us. Google is a fierce competitor with highly talented employees and a monopolistic hold on unique assets. Google’s dominance across search, advertising, smartphones, and data capture creates a vastly tilted playing field that works against the rest of us.

From a social, civic and individual empowerment perspective ceding control of fundamental online infrastructure to a single company is terrible. This is why Mozilla exists. We compete with Google not because it’s a good business opportunity. We compete with Google because the health of the internet and online life depend on competition and choice. They depend on consumers being able to decide we want something better and to take action.

Will Microsoft’s decision make it harder for Firefox to prosper? It could. Making Google more powerful is risky on many fronts. And a big part of the answer depends on what the web developers and businesses who create services and websites do. If one product like Chromium has enough market share, then it becomes easier for web developers and businesses to decide not to worry if their services and sites work with anything other than Chromium. That’s what happened when Microsoft had a monopoly on browsers in the early 2000s before Firefox was released. And it could happen again.

If you care about what’s happening with online life today, take another look at Firefox. It’s radically better than it was 18 months ago — Firefox once again holds its own when it comes to speed and performance. Try Firefox as your default browser for a week and then decide. Making Firefox stronger won’t solve all the problems of online life — browsers are only one part of the equation. But if you find Firefox is a good product for you, then your use makes Firefox stronger. Your use helps web developers and businesses think beyond Chrome. And this helps Firefox and Mozilla make overall life on the internet better — more choice, more security options, more competition.

Scathing but well-deserved dig at Microsoft by Mozilla, following on from the Edge-switch-to-Chromium I’ve been going on about. Chris is right: more people should try Firefox (it’s been my general-purpose browser on desktop and mobile ever since Opera threw in the towel and joined the Chromium hivemind in 2013, and on-and-off plenty before then) – not just because it’s a great browser (and it is!) but also now because it’s important for the diversity and health of the Web.

(Reprinted in full under a creative commons license.)

While we Blink, we loose [sic] the Web

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

We used to have much more diversity in terms of browser engines years ago than we do today. This is easy to understand as the Web in 2018 is far more complex than it was in the early noughties. It is very costly to develop and maintain a Web engine and few companies have the necessary talent and cash to do it. Microsoft is one of those companies but the fact that it might be throwing in the towel on its engine signals a bad development for all of us.

Further evaluation of the dangers of the disappearing diversity on the Web, following in the theme of my thoughts the other day about Microsoft’s adoption of Chromium instead of EdgeHTML in its browser.

Andre raises a real point: how will we fight for a private and decentralised Web when it becomes “the Google Web”?

Why You Should Never, Ever Use Quora

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Yesterday, Quora announced that 100 million user accounts were compromised, including private activity like downvotes and direct messages, by a “malicious third party.”

Data breaches are a frustrating part of the lifecycle of every online service — as they grow in popularity, they become a bigger and bigger target. Nearly every major online service has had a security breach: Facebook, Google, Twitter, Yahoo, Tumblr, Uber, Evernote, eBay, Adobe, Target, and Sony all extensively leaked user data in the last few years.

Security breaches like these are a strong argument for using a password manager, but not a compelling reason to avoid a service you love, unless you plan to quit the internet entirely.

But this does seem like a good time to remind you of all the other reasons why you should never, ever use Quora.

Short summary of why you shouldn’t use Quora (even ignoring the recent security scare), for those who can’t be bothered clicking-through:

  • They claim to want to share knowledge, but they hoard and restrict access to knowledge
  • They’re actively hostile to the free exchange of data, both technically and politically
  • They directly oppose the archiving and backup of the knowledge they hoard
  • They won’t last (even remotely) forever

Just don’t use Quora.

Bodleian Advent Calendar

Hot on the tail of Pong, I wanted to share another mini-project I’ve developed for the Bodleian: this year’s digital advent calendar:

Bodleian 2018 digital advent calendar
If you look closely, you’ll find I’ve shown you a sneak-peek at some of what’s behind tomorrow’s door. Shh. Don’t tell our social media officer.

As each door is opened, a different part of a (distinctly-Bodleian/Oxford) winter scene unfolds, complete with an array of fascinating characters connected to the history, tradition, mythology and literature of the area. It’s pretty cool, and you should give it a go.

If you want to make one of your own – for next year, presumably, unless you’ve an inclination to count-down in this fashion to something else that you’re celebrating 25 days hence – I’ve shared a version of the code that you can adapt for yourself.

Sample advent calendar
The open-source version doesn’t include the beautiful picture that the Bodleian’s does, so you’ll have to supply your own.

Features that make this implementation a good starting point if you want to make your own digital advent calendar include:

  • Secure: your server’s clock dictates which doors are eligible to be opened, and only content legitimately visible on a given date can be obtained (no path-traversal, URL-guessing, or traffic inspection holes) – there’s a sketch of the general idea just after this list.
  • Responsive: calendar adapts all the way down to tiny mobiles and all the way up to 4K fullscreen along with optimised images for key resolutions.
  • Friendly: accepts clicks and touches, uses cookies to remember the current state so you don’t have to re-open doors you opened yesterday (unless you forgot to open one yesterday), “just works”.
  • Debuggable: a password-protected debug mode makes it easy for you to test, even on a production server, without exposing the secret messages behind each door.
  • Expandable: lots of scope for the future, e.g. a progressive web app version that you can keep “on you” and which notifies you when a new door’s ready to be opened – that was one of the things I’d hoped to add in time for this year, but I didn’t quite get around to it.
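
That server-side gating boils down to something like this Node-flavoured sketch (illustrative rather than the actual implementation; loadSecretMessageFor stands in for however you store each door’s content):

  // The server's clock decides which doors may be opened, so no amount of
  // URL-guessing or client-side poking can reveal tomorrow's content early.
  function doorContent(doorNumber, now = new Date()) {
    const openFrom = new Date(Date.UTC(2018, 11, doorNumber)); // door N opens on N December (month 11 = December)
    if (doorNumber < 1 || doorNumber > 25 || now < openFrom) {
      return null;                                             // not yet! (or not a real door)
    }
    return loadSecretMessageFor(doorNumber);                   // placeholder: fetch the door's content however you like
  }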

I’ve no idea if it’s any use to anybody, but it’s available on GitHub if you want it.


Warp and Weft?

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

Warp and Weft by Paul Robert Lloyd (paulrobertlloyd.com)

Earlier this month I had the good fortune to attend Material, a conference that explores the concept of the web as a material and all the intrinsic characteristics that entails. The variety of talks provided new perspectives on what it means to build for – and with – the web, and prompted me to …

What it means for something to be of the web has been discussed many times before. While the technical test can be reasonably objective – is it addressable, accessible and available – culturally it remains harder to judge. But I don’t know about you, I’ve found that certain websites feel more ‘webby’ than others.

Despite being nonspecific on the nature of the feeling he describes, Paul hits the nail on the head. Your favourite (non-Medium) blog or guru site almost certainly has that feel of being “of the web”. Your favourite API-less single-page app (with the growing “please use in Chrome” banner) almost certainly does not.

When to use CSS vs. JavaScript

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

CSS before JS

My general rule of thumb is…

If something I want to do with JavaScript can be done with CSS instead, use CSS.

CSS parses and renders faster.

For things like animations, it more easily hooks into the browser’s refresh rate cycle to provide silky smooth animations (this can be done in JS, too, but CSS just makes it so damn easy).

And it fails gracefully.

A JavaScript error can bring all of the JS on a page to a screeching halt. Mistype a CSS property or miss a semicolon? The browser just skips the property and moves on. Use an unsupported feature? Same thing.

This exactly! If you want progressive enhancement (and you should), performance, and the cleanest separation of behaviour and presentation, the pages you deliver to your users (regardless of what technology you use on your server) should consist of:

  • HTML, written in such a way that they’re complete and comprehensible alone – from an information science perspective, your pages shouldn’t “need” any more than this (although it’s okay if they’re pretty ugly without any more)
  • CSS, adding design, theme, look-and-feel to your web page
  • Javascript, using progressive enhancement to add functionality in-the-browser (e.g. validation on the client-side in addition to the server-side validation, for speed and ease of user experience) and, where absolutely necessary, to add functionality not possible any other way (e.g. if you’re looking to tap into the geolocation API, you’re going to need Javascript… but it’s still desirable to provide as much of the experience as possible without it – see the sketch just after this list)
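
Here’s that sketch: a hedged little example of the third layer in action, using the geolocation case (the element ID and URLs are invented for illustration). The plain link works everywhere; JavaScript only enhances it where the browser can actually do better.

  // Progressive enhancement: start from a working <a id="find-nearby" href="/branches"> link,
  // and only upgrade it if the geolocation API is available.
  const nearbyLink = document.getElementById('find-nearby');
  if (nearbyLink && 'geolocation' in navigator) {
    nearbyLink.addEventListener('click', event => {
      event.preventDefault();
      navigator.geolocation.getCurrentPosition(
        position => {
          const { latitude, longitude } = position.coords;
          window.location = `/branches?lat=${latitude}&lng=${longitude}`;
        },
        () => { window.location = nearbyLink.href; }           // permission denied? fall back to the plain page
      );
    });
  }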

Developers failing to follow this principle is making the Web more fragile and harder to archive. It’s not hard to do things “right”: we just need to make sure that developers learn what “right” is and why it’s important.

Incidentally, I just made some enhancements to the header of this site, including some CSS animations on the logo and menu (none of them necessary, but all useful) and some Javascript to help ensure that users of touch-capable devices have an easier time. Note that neither Javascript nor CSS is required to use this site; they just add value… just the way the Web ought to be (where possible).

Push without notifications

This article is a repost promoting content originally published elsewhere. See more things Dan's reposted.

On the first day of Indie Web Camp Berlin, I led a session on going offline with service workers. This covered all the usual use-cases: pre-caching; custom offline pages; saving pages for offline reading.

But on the second day, Sebastiaan spent a fair bit of time investigating a more complex use of service workers with the Push API.

While I’m very unwilling to grant permission to be interrupted by intrusive notifications, I’d be more than willing to grant permission to allow a website to silently cache timely content in the background. It would be a more calm technology.

Then when I’m on a plane, or in the subway, or in any other situation without a network connection, I could still visit these websites and get content that’s fresh to me. It’s kind of like background sync in reverse.

Yes, yes, yes. The Push API’s got incredible potential for precaching, or even re-caching existing content. How about if you could always instantly open my web site, whether you were online or offline, and know that you’d always be able to read the front page and the most-recent articles? You should be able to opt-in to “hot” push notifications if that’s what you really want, but there should be no requirement to do so.
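
To illustrate (and this is as much wishful thinking as working code, because today’s browsers generally insist that a push must show a visible notification): a “quiet” push handler in a service worker might simply re-cache the freshest content and keep quiet about it. The URL list here is made up.

  // Sketch of a hypothetical "quiet" push: refresh the cache, show nothing.
  self.addEventListener('push', event => {
    event.waitUntil(
      caches.open('fresh-content').then(cache =>
        cache.addAll(['/', '/latest-articles'])   // re-cache the front page and recent posts
      )
      // ...and not a self.registration.showNotification() in sight.
    );
  });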

By the time you’re using the Push API for things like this, why not go a step further? How about PWA feed readers or email clients that use web-pushes to keep your Inbox full? What about social network clients that always load instantly with the latest content? Or even analytics packages to push your latest stats to your device? Or turn-based online games that push the latest game state, ready for you to make your next move (which can be cached offline and pushed back when online)?

There are so many potential uses for “quiet” pushing, and now I’m itching for an opportunity to have a play with them.