CSS ought to be bad, JS ought to be good

Wednesday, 2 July 2025

I want to try to make an argument against CSS and for JavaScript. The tricky part is that I wouldn’t accept the claim myself at face value, so I will have to qualify my conclusions at the end, and be careful to distinguish between the actual and the ideal.


Inconsistency

The first point against CSS is that it helps smooth over the lack of expressive power in HTML. In retrospect it is easy to recognize that HTML is insufficient for the needs and intentions of real-world usage patterns. This desire path is clearly expressed in the conventions that web designers follow, designing web pages with headers, footers, drop-down menus, listings, etc. Nevertheless, all these constructs cannot be expressed in HTML alone; or at the very least, when a web developer uses <header> and <footer> tags, web browsers skim over them and do not integrate them into the visual presentation of a website.

And why should they? By the time HTML5 came around, everyone knew that CSS is what is used to “declare” the look and feel of a website. So the best one can do is perhaps configure the user style sheet of one’s browser to treat these “semantic” headers and footers consistently; but even then, I don’t know of any mainstream browser that makes this an easy and reliable feature to use, let alone one that would encourage the user to do so.
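To make this concrete, a user style sheet along the following lines (a sketch of my own, not something any browser ships or endorses) is roughly what I have in mind. Note that, because of how the cascade orders user and author styles, such rules would generally need !important to win against a site’s own declarations:

    /* Hypothetical user style sheet: give the "semantic" sectioning
       elements one uniform look, regardless of the site's own CSS. */
    header, footer {
        display: block !important;
        padding: 0.5em 1em !important;
        background: #eeeeee !important;
        border: 1px solid #cccccc !important;
        font-family: sans-serif !important;
    }

    /* Render navigation lists as a plain, horizontal menu bar. */
    nav ul {
        display: flex !important;
        gap: 1em !important;
        list-style: none !important;
        padding: 0 !important;
    }

Every selector here addresses a kind of element rather than any particular site, which is the sort of uniform treatment I am gesturing at.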

The role that CSS played here was to make it possible for each website to have an individual style. The ability to choose a specific font, colors, width, and height for parts of the page is of course nothing a corporate designer could pass on. The consequence is that website designs are inconsistent, as they have to be re-created from scratch again and again, and divergence is inevitable. (I have frequently made the remark elsewhere that I clearly remember, while teaching myself HTML, being disappointed by the relative disuse of the <head> tag by web browsers. Most of its contents are aimed at crawlers, instead of declaring the data that a web browser would use to synthesize the “frame” of a website.)

Analogously, imagine if HTML didn’t implement hyperlinks, but there were various means of simulating links. In this “TML” it would be easy to see that the UI/UX of something as fundamental as a link would be impossible to implement consistently, which would reduce links’ usefulness and hence their usage.

As a general principle, it should be easy to do the right thing. And as HTML doesn’t make it easy to produce a consistent structure for a website, websites don’t end up being consistent.

Qualifications

Semantic Typesetting

One of the first things one notices when trying out an alternative rendering engine, such as that of NetSurf or Ladybird, is the little things it does wrong. This is reminiscent of the concept of the uncanny valley: the fact that most of it looks OK, but not everything, ends up being irritating in a different way than when a text-based browser that ignores CSS entirely simply fails to display the contents of a website in a way that makes any sense.

While I can only assert this from the perspective of an observer, I would claim that a big part of what makes implementing an independent browser so difficult, and hence what increases the reliance on organizations like Google, is that the output has to be pixel-perfect.

That CSS is capable of serving this need is evidenced by the proliferation of “CSS resets”, i.e. stylesheets which, applied in any mainstream browser, should ensure that the fine aspects of typesetting end up (almost) identical.
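To give an impression of what such a reset amounts to, here is a sketch in the spirit of the widely circulated ones (not a quotation of any particular project); the point is to zero out the defaults that differ between browsers so that a site’s own rules start from an identical baseline:

    /* Sketch of a typical "CSS reset": strip the browser defaults that
       differ between engines, so author styles start from zero. */
    html, body, h1, h2, h3, h4, h5, h6, p, ul, ol, li,
    blockquote, figure, fieldset, legend {
        margin: 0;
        padding: 0;
    }

    * {
        box-sizing: border-box;  /* make width/height calculations uniform */
    }

    img, video {
        max-width: 100%;  /* avoid engine-specific overflow behaviour */
        height: auto;
    }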

The fact that we can use CSS to make these kinds of pixel-perfect declarations about the height, width, and font size of specific, individual elements makes it impossible for CSS to be sensibly specified “denotationally”.
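A made-up fragment of the kind I mean, pinning one particular element down to the pixel, might look like this:

    /* Made-up example: one specific element, specified to the pixel. */
    #promo-banner .cta-button {
        width: 342px;
        height: 48px;
        font-size: 13.5px;
        line-height: 46px;      /* tuned to centre the label vertically */
        letter-spacing: -0.2px;
    }

A rule like this only means anything relative to the one concrete layout it was tuned against.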

Furthermore, I would argue that it shouldn’t be possible for a website to dictate that it becomes unreadable. Even if we imagine a restricted style sheet system that can only address semantic markup uniformly, it shouldn’t be possible to shrink the size of text until it becomes illegible, to adjust the colours of a website to have an unacceptable contrast, or to hide parts of the website from the viewer (ad banners, cookie prompts or paywalls).

This point also relates to consistency, but doesn’t rely on having accepted that property as desirable. Reducing the emergent complexity of CSS, be it by castration as argued here or by abolition as argued previously, would make web browsers less of an implicit hodge-podge of power.

Qualifications

Computational Potential of Documents

I strongly believe that as a society, we have by no means managed to harness the entire potential of computers. And I don’t even mean contemporary trends in AI. The potential automation of bureaucratic procedures or of negotiating timetables has at best achieved a state of replicating analogue systems or simply accelerating them.

One exception that has managed to find widespread acceptance is the WWW. It constitutes a qualitative improvement upon previous systems, as the discovery of referenced texts became substantially easier, and it even raised the expectation of making human knowledge accessible and addressable by default.

A perpendicular extension of simple, plain text is something one might refer to as computational text¹. I take this to be incarnated in many forms, from non-interactive TeX documents that can perform computation at compile-time to notebook software like Jupyter. In general, this means that the text you read can depend on computations whose results are presented in some document form (modifying the generated text or generating images). An example a mate of mine likes to give is that of recipes that, within reasonable limits, could be adjusted to different measurements, depending on the number of people one is preparing a meal for. This might or might not include the ability to take user input in-document or by some other means.

The combination of the two is something that the WWW manages to provide, without this having been an intention. Online maps are one strikingly useful application that integrates user interaction with hypertext links.

Contrast this with the claim, popular at least in some circles, that JavaScript ought to be avoided. This is due to the belief that HTML shouldn’t involve any dynamic content (on the client side). There are different gradations of this position, from rejecting any and all scripting, to progressive enhancement, to expecting scripts to be free and “stable” so as to make patching them possible. I will be focusing on the first position, the one that rejects JavaScript outright, often suggesting CSS as an alternative.

My first objection is that this assumes some kind of intentional and final design of HTML that may only be amended in the same spirit. So the addition of <header> and other HTML5 tags is acceptable, while qualitative changes like those JavaScript can provide are shunned. I counter this by emphasizing that HTML has many accidental aspects and developments. Take for instance the thread on the W3C mailing list that ended up in the addition of images to HTML. Arguably, HTML would have gained some support for graphics in one form or another eventually, but this illustrates that even something so fundamental by our current standards was up for debate.

A different example, and one I sadly cannot substantiate with a source, comes from private conversations with people involved in the standardization processes, and involves the claim that the WWW should have ended up much more “Wiki-like” than it did, mainly due to difficulties arising from the conservative and non-cooperative development of Microsoft’s Internet Explorer. I would like to find out more about this at some point, and if you know anything about it (or if this is a misrepresentation), please message me! Assuming that something along these lines is true, it suggests to me that the development of the WWW was prematurely halted, and instead of developing further it stagnated at some compromise that appears in retrospect to be guided by the virtue of simplicity.

For this reason I reject the claim that one ought to reject JavaScript, as its intention is to change the WWW into something it was not intended to be used as. Instead I would like to see computerized text documents make use of the entire potential of computers, which includes interactivity and the ability to adjust the manner in which they are presented. The WWW shouldn’t take text browsers and printed documents as its least common denominator.

Qualifications


The picture I have painted here is counterfactual and mostly inconsequential. I don’t envision anything I have said contributing to coming developments of the WWW as we know it. Instead, I hope that this is an interesting story and an exercise in distinguishing the inherent from the accidental aspects of our technical world. I hope you found my argument worthwhile. If you have any further qualifications or refutations to add to my presentation, feel free to contact me and I can either add them inline or link to them on some other site.


  1. Perhaps there is an established name for this, but I don’t know of it.