Or: Some Counterfactual Thoughts on Practical Computing
Sunday, 15 December 2019
I hold free software dear to my heart. In my eyes, it’s what separates tools from services. Free software helps me gain that bit more of independence in a rather complex digital world.
But at the same time, I’m painfully aware that (formal) software freedom cannot suffice on its own. For example: if the hardware, or even all the software below the user interface, undermines the control I would like to have over the device sitting in front of me, in my hands, or in my computer room, then whatever castrated freedom is left seems more like a joke than a reasonable compromise.
Most people are conscious of this, for example when it comes to smartphones. Their hard-to-produce nature and their various constraints – from size, weight and materials to energy efficiency, connectivity, user input, etc. – necessitate a greater initial effort. The “simplicity” of a regular keyboard, with its fixed number of discrete signals, is nothing compared to the complexity of a touchscreen.
For this reason, the producers of such devices are capable, perhaps even motivated, to act counter to the user’s interests – whether by constraining access to the installed software, let alone the hardware, or by the decision to bundle malware. And that’s not to speak of the structural disadvantage the user has towards telecommunication providers.
Yet we1 either accept our situation – trying to ignore these problems – or reject smartphones entirely, as they lack the practical requirements to respect user freedom.
Recently I have been wondering: what if these same problems and considerations should be raised when it comes to personal computers2? What if the modern personal computer is fundamentally incompatible with software/user freedom? With the complexity of a modern CPU, its various attempts to overcome memory-performance issues, its historical baggage, its security issues and their solutions, it has become ever more unreasonable to speak of practical freedoms the user might have. What if we had to recognise the modern PC as incompatible with our desire to be the subjects, not the objects, of the man-machine relationship?
I would like to propose a thought experiment. Imagine if, instead of the 80s and 90s being marked by the rise of personal computing – the competition for the cheapest home computers, the most fool-proof user interfaces – this entire genre had flopped. The reason is not our concern. It could be anything from Bill Gates showing up too late for his IBM meeting, to someone at Intel losing the design of the 8086 microprocessor, to Steve Jobs deciding to stay on his apple farm3.
In its place, a system of non-personal computers would have arisen. This might have been done by a mass grassroots movement, or by a coordinated, international government plan. While who would be behind it (and why) isn’t irrelevant, it would have looked something like this:
Many might say: why, this is just an internet cafe. In fact, when I attempted to put this idea into words a few months ago, this was precisely the reaction. Therefore, I’d like to take some time to clarify the differences:
Certainly many more points could be made, but since the situation I am sketching is entirely hypothetical, I don’t think it makes much sense to keep furthering my point.
To avoid the accusation of Who would fund such a loony idea? on the one side, and How could something like this have arisen, considering the increasing movement towards privatisation? on the other, I would like to state directly that I personally am very sceptical that anything like this could ever have arisen. There is no doubt in my mind that even if one were to admit the benefits I sketch below, it would have been nearly impossible to make a convincing argument for introducing something like this. Just as unlikely would be the foresight needed to properly implement a possibly global system of this kind in a clean and forward-thinking manner.
Furthermore, even if we take this to be ever so improbable, but still just possible, I think it is outright impossible for something like this to arise now, after all that has happened and evolved in the last 30–40 years.
Having described the thought experiment, and made clear that I am in fact talking about a thought experiment, I would like to ask what the principal differences between this world and ours would be – not which is better or worse. For this, I’ll go through each of the points listed above, describing this system and explaining why I think it is relevant.
On a local or communal level, all residents would have access to computer systems, and persistent accounts. […].
In case you didn’t notice, our current situation is something along these lines: most people (in the West at least, and in parts of the Far East) have one or more fully functional personal computers. They do part of their computing locally, and lately more and more by communicating with huge, remote systems. This computer can be anything from a desktop or laptop to a tablet or a smartphone.
And I don’t want to make computing sound like an academic activity. It is anything from writing a letter (that is to be digitally typeset), to reading emails, to designing a poster4.
The world of our thought experiment would differ in that the global computer infrastructure would be a lot more distributed. We could imagine that instead of most people using commercial email services, located somewhere, they would have email addresses bound to their local computer place. Instead of millions of @hotmail.com, @gmail.com, etc. addresses, domains would have reflected geography beyond the national level (maybe something like @kreuzberg.berlin.de, or even further).
As new networks and protocols evolved, we could also imagine these local computer places offering some kind of HTML/HTTP hosting. Although I’m sceptical, it’s nice to imagine that federated social networks would have had more of an advantage, but I’ll say more about that below.
I hope the main point of this description is clear: a personal computer is incomplete; it is not capable of being a full member of the global computer network. For the real needs and wishes of most users, we require more permanent structures – servers of various kinds – to assist. But since the need arose while the technology was still developing, the institutions capable of offering services to satisfy these needs were few. Facebook, Microsoft, Google, etc. took advantage of the situation, and thereby cemented themselves firmly as integral parts of our international computer network.
What I am suggesting is that if the practical resources had existed on a local and accessible level before we realised the power of networking, the chance would have been much greater that we would not have developed a de facto dependence on those lucky enough to have played a major role in the early days – a dependence that grants them the power to shape our future.
Instead of just tables stuffed with PCs, one should rather imagine this in terms of a terminal-server system: lightweight clients sharing access to a larger, central, but local computer.
I have just commented on how, in the end, PCs are “incomplete”. Chromebooks and similar devices seem to be the admission of this. For whatever reason, even if we were to own and control our local computing, we are dependent on remote services. PCs are rotting in the shadow of their potential.
Our thought experiment, in all its foresight, would have avoided this issue by inverting the relationship – or rather, by staying with what was the norm until just a few decades ago. Terminal systems5 could mean anything from real “dumb clients” to just simple computers, each of which would do most of its real computing on a nearby server-like machine.
This point affects hardware. We have a lot of PCs lying around. Many old ones get thrown out, sometimes just because parts are broken, or because they aren’t up to date anymore. Everyone has to play a little system administrator, or find someone willing to. And as I’ve already mentioned in my opening comments, our hardware really isn’t that user-friendly, and is sometimes even hostile.
The thought experiment presents a different situation, possibly with some advantages: since the category of “consumer hardware” would play a far lesser role, we can imagine that the requirements of backwards compatibility, and the general pressure to maximally reduce costs for the sake of competition6, would have also been less significant. (Or so we will assume.)
We would have a situation with cheaper, less critical, possibly easier-to-repair hardware for common usage on the one hand, and more powerful, hopefully easier-to-maintain-and-upgrade hardware as the computational core on the other. This would not only require fewer resources, but could also ensure that our hardware is more dependable.
A hardware base that we could trust would be the prerequisite for a real discussion about user and software freedom.
These local computer spaces would be maintained by a staff of people both serving as administrators and educators.
As I have just said, we require a lot of small-time, more-involuntary-than-not system administrators. In a family household, it often just happens to be the member who purchased the hardware, or sometimes the task is loaded onto a child with an affinity for computers (*cough*).
Either way, it’s infeasible for every one of these administrators to understand what is really going on. They are probably just more versed in the interfaces than most others. “My web browser won’t open” (because an instance is already open but minimised) is an issue some people solve more easily than others.
The result is that computers end up having to be simpler and more limited than they could be. Automatic updates are necessary because you cannot explain the need for regular updating to the plebs.
Assume now that we don’t have to worry about making system administration accessible to everyone. Assume our lowest common denominator isn’t just anyone with the money to buy a PC. Instead, it could be assumed that (to some degree) capable, (to whatever degree) educated people would be in charge.
As mentioned before, security is often an explanation (or excuse) for restricting users. Hasn’t everyone heard of scams where people were tricked into infecting their own computers with malware? If we were to assume that there is always at least one responsible third party, what would happen to this argument?
My second point was that of education. As already mentioned when talking about internet cafes, the system administrator is usually an adversary – the one who restricts, mercilessly. I would like to imagine that in the world of this thought experiment, this wouldn’t be the case. Instead, the administrators’ interest would be to advance the knowledge of the users, to help them emerge from their self-incurred digital immaturity. It makes their job easier, after all.
This would also play into overcoming the fear of computers. Dreaded is the black screen with white text! Who would be so malicious as to expose their users to such a horror? The computer has to be treated like a beast, shackled down before anyone may interact with it. On the other side there are those who want to look fear straight in the eyes, and indulge themselves in the impracticality of a raw computer system.
I am hesitant to say either is right. There is nothing to be gained from dogmatic insistence on these kinds of minimalism. But at the same time, I absolutely think it is wrong to believe that one should get to use a computer “for free” – in the sense that it is a slave that should conform to all of its master’s wishes, without the user having to reconsider their questions at all.
It makes sense to understand the device, to understand its strengths and shortcomings, its powers and traps. Under the guidance of a sensible administration and education, I think this understanding could be more widespread.
People should not use these computers in isolation, in little cubicles or behind curtains. If there is no pressing reason against it, using computers is a public, social act.
Anyone who has been in a computer laboratory, such as those found at universities, might recognise some of the images I am describing here. Personally, I am split on how similar this would be, since there are many different kinds of computer laboratories, all influenced by the kind of work done there and the people visiting them.
In the end, computer laboratories are still more of an exception than a rule, and as more and more students at universities have their own laptops, their role also changes.
Computing ends up being understood as a kind of individual activity. When people use computers, they separate themselves from their real society and escape into its virtual equivalent. There are those who even form addictions; social media and video games are examples that are often cited.
Now, if we think about what could be different in this thought experiment – if using a computer required a conscious act of leaving one’s home and entering a common space – what would this change?
I remember reading somewhere7 that in old computer laboratories, a library-like culture of silence would have been very strange. Instead, it was far more usual to hear people talking, sharing tips, collaborating, joining in on conversations, and so on. Now I don’t know whether this is true, nor whether I would like it. But something about this image keeps me wondering what it was like.
Some might have difficulties imagining everything I have been writing about up until this point. That’s all nice, until some punk starts breaking stuff, they might want to insist. I would like to step back and raise the counter-question: what if, instead of just being a room with computers and wires, a computer space were a real social hub? Possibly even a kind of secular church?
This raises even more questions. If a computer place were a real social centre, it would be hypocritical to let it devolve into a boys’ club or a bar. This is probably the most utopian point I have raised so far, and I do not deny it. There is something very suspicious about thinking that computers could serve as a common point of reference for society; I fully admit it.
Yet even while I am sceptical of my own words, I would want to believe that computers and digital networks could serve a genuinely positive role in society – not leading to isolation and animosity, but instead extending and assisting us in our humanity.
Weird.
I know there are many more questions one could ask. What would the commercial role of computers have been? How would software be developed8? Would all of this be irrelevant because offices would still require workstations of their own, or would they too partake in the system? What would our conception of digital privacy be? What would happen when administrators moved away? Could a computer place be abandoned? Or even bought up? Would smartphones exist, and how would they relate to this system? Would YouTubers exist? How would the difference in structure affect the content of our computational activities – and our understanding?
Ultimately, most of these questions tie into the details of this story. I don’t know how meaningful it is to discuss them, given the counterfactual assumptions I have made, especially regarding something as historically important as “Personal Computers and the Internet”.
In the end, I don’t think this text has any real purpose. At best, you think I’m a crazy idealist who dreams of worse worlds, incapable of cherishing the luxury we have. At worst, you might believe you see the poverty of just this luxury.
For certain definitions of “we”.↩︎
Just in case: when I say Personal Computers, or PCs, I am not just talking about desktops/laptops running Microsoft Windows, but about any device that one personally owns and uses for personal computations, regardless of producer or operating system.↩︎
I am totally conscious that these are oversimplifications of history. But again, that’s not what I am interested in.↩︎
If you are not satisfied with calling all these things, especially networking-related activities, “computing”, imagine all networking-capable devices as part of one giant Turing machine simulation. It doesn’t really change anything.↩︎
I am not talking about Command Line Interfaces, like what some people call the UI found in terminal emulators. Another term would be “Thin Clients”.↩︎
Just look into the architecture and history of a modern CPU – especially the x86 family – to understand what I mean.↩︎
This is not a stylistic device. I actually can not remember where I read (or perhaps even heard) this. If you recognise what I am talking about, and know where it’s from, please tell me!↩︎
Personally, I think free software would play a much more central role, not only due to the structure and culture, but also to maintain independence.↩︎