A Note to Self on Churn
25 October 2021 | 7:00 pm

We shall not cease from [churn], and the end of all our [churning] will be to arrive where we started and know the place for the first time. — Me, quoting T.S. Eliot

Humans get good at what they practice. Doesn’t matter what it is. If you practice something over and over, consciously or not, you’ll get really good at it.

Working on the web, I feel like I’ve become really good at churn: anticipating, evaluating, and navigating constant change. How we build on the web is constantly changing — seemingly every week there’s a new framework, technology, or methodology that claims to eke out an improvement over whatever exists now — and there’s a feeling of “FOMO” at best, “keep up or die” at worst.

So I learn, and I churn. I don’t like the churn, but I’ve also become really good at it. And I like to do what I’m good at. As the Joker said, “If you’re good at something, never do it for free.”

I’m good at staying abreast of new technology. I’m good at Googling a problem and finding a solution. I’m good at hunting through community forums or channels to find the solution or workaround to an API change. I’m good at refactoring from an old framework to a new one. I’m good at solving technical problems where I just want to make something work, like making the thing flash across the screen or making the code compile without errors.

But I’m not always good at pausing and asking why I want something to work, and then how I plan to make it work in response to that why.

I feel the pain of my current tool and reach for a new one that relieves those pains, only to realize afterwards that the strengths of my prior tool are now weaknesses of my current one. So I churn, switching from tool X to tool Y to tool Z, often to learn I could’ve lived with the tradeoffs of any of them and been just fine.

But fixing an issue is not solving a problem (see my story about a new toilet).

These are all thoughts I’ve been reflecting on after documenting my notes from Rich Hickey’s talk. At one point, he states:

[We have a problem and somebody says] “I think we need a NoSQL database”. There’s something missing here. We haven’t actually said “why?” We haven’t stated: what are the characteristics of this problem that lead us to this solution? This is where all the interesting work is in software development.

I have seen so much software made where nobody ever wrote down the problem and all the sudden boom, we have a new system. Nobody ever stated what problem was being solved.

Perhaps churn can be the experiential stepping stones to understanding and wisdom? There’s a balance in there somewhere. For now, I’m going to try to be better at articulating a problem before jumping to solve (or at least get around) it. As Rich says, “the seed of solving a problem is stating it.”



Thoughts on Avoiding an Excessive DOM Size
19 October 2021 | 7:00 pm

I recently read Web Platform News #40 and saw this:

The Lighthouse tool for auditing web pages suggests avoiding an excessive DOM size…Lighthouse recommends keeping a page’s DOM size below 1,500 DOM elements.

I had never seen this before. I knew a large DOM could create performance problems, but I’d only encountered them when trying to script against a large DOM. In my own (anecdotal) experience, loading an HTML document with lots of DOM nodes was never, in itself, a performance hit.

I wanted more color around this recommendation, so I read through the page Lighthouse links to:

Lighthouse flags pages with DOM trees that:

Have more than 1,500 nodes total.
Have a depth greater than 32 nodes.
Have a parent node with more than 60 child nodes.

Interesting.

I wanted to see this in action for myself, so I pulled up a classic website that surely has lots of DOM nodes: Wikipedia. Specifically, I looked at the World Wide Web entry and found Lighthouse taking exception with the size of the DOM:

Screenshot of the World Wide Web entry on Wikipedia in Chrome with the Lighthouse dev tools open showing a warning about the size of the DOM.

At the time of this writing, the recommended limit for DOM elements is 1,500. This Wikipedia page came in at 4,606.

(How exactly does DOM size get calculated? I’m not sure. I ran document.querySelectorAll("*").length in the console and got 4,641, which is pretty close to what Lighthouse reported, but this method isn’t very scientific or reproducible across different web pages. Looking at the source code for Lighthouse, you could probably work out how DOM size is calculated.)
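For the curious, here’s a rough sketch of how you might approximate those three numbers yourself from the console. To be clear, this is my own back-of-the-napkin version, not how Lighthouse actually measures things:

```js
// Rough, unofficial approximation of the three DOM stats Lighthouse flags.
// Paste into the browser console on any page.
const elements = document.querySelectorAll("*");

let maxDepth = 0;
let widest = document.documentElement;

for (const el of elements) {
  // Depth: count how many parent elements sit between this element and <html>.
  let depth = 0;
  for (let node = el; node.parentElement; node = node.parentElement) depth++;
  if (depth > maxDepth) maxDepth = depth;

  // Width: track the element with the most direct children.
  if (el.childElementCount > widest.childElementCount) widest = el;
}

console.log({
  totalElements: elements.length,             // Lighthouse warns above 1,500
  maxDepth,                                   // Lighthouse warns above 32
  widestChildCount: widest.childElementCount, // Lighthouse warns above 60
  widest,
});
```

The total should land near the 4,641 I got from querySelectorAll above; the depth and child counts are just my guess at what Lighthouse is describing.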

What’s intriguing about this warning on DOM size is that it doesn’t appear (at least at the moment) to have any bearing on the performance score, as this Wikipedia page came in at 100 (for a “Desktop” performance audit).

Screenshot of the World Wide Web entry on Wikipedia in Chrome with the Lighthouse dev tools open showing a performance score of 100.

Out of curiosity, I wanted to try and find a Wikipedia page whose DOM size would be even larger, so I tried the entry for Human. It weighed in at 12,075 DOM elements.

What also caught my eye in the DOM size warning was the callout to the DOM element with the most children (likely intended as a hint about which element you should consider refactoring). In this case, the DOM element with the most children was the references!

Screenshot of the Wikipedia entry for ‘Human’ with the Lighthouse dev tools open showing a warning about the number of DOM elements.

Wouldn’t want to cite too many sources now would we?

I’m being flippant here, but only in part.

A guideline that a web page’s DOM should avoid being too large? I can buy that. The codification of that guideline into an industry-standard tool? That gives me pause.

The fuzziness of human language allows us to recommend a “best practice” like “don’t let your DOM size get too big” while still providing room for nuance which may alter or even void the recommendation entirely.

However, codifying a guideline or best practice into an automated tool which can be measured requires lines be drawn. A metric, however arbitrary, must be chosen in order to make a measurement and provide a judgement.

In this particular case, it’s choosing a maximum number of DOM elements — 1,500 — to represent a threshold at which point your DOM has become too big. Why 1,500? I don’t know. But the binary nature of that number boggles my brain a bit. 1,499 DOM elements? Green check, you’re ok. 1,501 DOM elements? Red alert, you’re doing something wrong!

A closer look at the official rationale behind avoiding a large DOM outlines three reasons why a large DOM can contribute to slower performance, two of which hinge on JavaScript running. No JavaScript? Then there’s only one (stated) reason to limit your DOM size: “network efficiency and load performance”:

A large DOM tree often includes many nodes that aren't visible when the user first loads the page, which unnecessarily increases data costs for your users and slows down load time.

Ok, I can agree with that in principle. Makes sense—the bigger anything is on a computer, the more compute resources will be required to handle it.

But in practice, it seems to me there’s more nuance to the question of DOM size than whether you have 1,500 elements or fewer.

Nice as it may be as a guideline, I’m not sure I buy an arbitrary limit on DOM nodes codified into a performance tool everyone uses. It might seem harmless, but we shape our performance tools, and thereafter they shape us and how we think about what the web is, how it should work, and what it ultimately should be. Limiting DOM size is a specific point of view about the nature of a web page and is therefore limiting, perhaps exclusionary, in its vision of what a web page can be.

As a simple illustration of what I’m trying to get at, consider the book Moby Dick. The fact that you can access that book digitally is quite astounding, a feat many would’ve marveled at thirty years ago. The entire book available at a URL as HTML. No need for an app that optimizes for performance by only allowing access to one chapter at a time while requiring a “save for offline reading” feature to download the entire book. Just fast, performant, searchable text delivered as HTML with no JavaScript — and no need to keep the DOM size small.

Screenshot of the Lighthouse dev tool flagging the large DOM size for ‘Moby Dick’ on Project Gutenberg.

The beauty of the web is that it’s bigger than any rules we try to draw around it. As Rich Harris stated in his talk on transitional apps, “[the web is] a medium that by its very nature resists definitional boundaries.”

Perhaps there’s more room for nuance and range in our performance tools and metrics.

Update 2021-10-19

@anthony_ricaud hit me up on Twitter noting that the “Human” page on Wikipedia pales in comparison to the DOM size and depth of the HTML spec. It’s a single page of HTML with 278,148 elements!



Notes: Hammock Driven Development by Rich Hickey
14 October 2021 | 7:00 pm

A classic. I’ve listened to it a few times, but never taken notes. I wanted to write down what stood out this time around.

Note to self: everything he covers about “the waking mind” is great, i.e. the idea of giving your mind problems to solve in the background.

You can watch the video or read the transcript.

On waterfall and design:

Most of the biggest problems in software are problems of misconception.

We don't have a good idea of what we're doing before we do it and then GO! GO! GO!

Testing and type systems are of limited use here.

There was lots of stuff that was terrible about waterfall. But it doesn’t mean the step before “go build it” isn’t a good step. Design is still important.

Also: type systems won’t let you know if you have a good idea.

On features vs. solving problems:

We should be solving problems, not building features. A feature is just an attribute of something. It’s the shiny chrome knob. It’s not the purpose of the car. There's no guarantee if you put together a feature list, even if it comes from the customer, that it’s gonna solve their own problem. Or that it will solve any problem. Or that the features, when you put them together, don’t introduce a bunch of other problems.

Avoiding problems != solving problems:

We have a tendency, because we're all smart and we love being smart and figuring out how to make things go, that [we think] figuring out how to make something go is good no matter what it took to do it. If we can figure out how to get around a problem, [we consider it a success]…Avoiding problems is not solving problems.

On tradeoffs:

Be discerning when you look at possibilities and what others do. There are tradeoffs in everything:

Usually when [people talk] about tradeoffs in their software, they’re talking about the parts of their software that suck. “Oh well, I had to make these tradeoffs...”

It’s really easy to get excited about the good parts of what you do. But you should be looking for tradeoffs. The chances of there being no tradeoffs in what you do are slim.

On growing:

I think that’s [so exciting]: no matter what I’ve ever thought of, I know I’m gonna think of something better…means that I'm still going. You will [always] think of better ideas.

If I can advocate anything: do not be afraid of being wrong.



