Michael Eriksson
A Swede in Germany

A.I. and why I do not write about it

Considering how much I write and on how many topics, it might seem odd that I have, as of 2023, written next to nothing on A.I.—despite this being one of the most debated recent topics, despite my background as a software developer, and despite the great implications that A.I. can have for areas like society and politics (with which much of my writing is concerned).

There are several reasons for this, including that I have little enthusiasm for the subject and that almost anything that can be said has already been said by others—and been said in public, without Leftist censorship. When it comes to some of the negative angles, I find it outright bemusing when someone now, in 2023, suddenly has the revelation that “Oh! A.I. could take over the world and make humanity redundant! Yikes!” or “Oh! A.I. could take over so many jobs that most of us become unemployed! Yikes!”. After all, such thoughts have been present for a very long time, and having the revelation only now shows a lack of prior thought of one’s own and/or a near-complete lack of exposure to the investigations made by sci-fi.


Side-note:

Generally, in my observations so far, one is much more likely to find something thought-worthy in stereotypically boys’/men’s books than in stereotypically girls’/women’s, e.g. in that a book containing ray guns might also contain ethical dilemmas, radically contrasting cultures, unexpected perspectives, and similar.

Through such reading (by no means limited to sci-fi), I was exposed to ideas as a pre-teen that many others, especially girls/women, seem to encounter only as adults—or not at all. (A separate text with a considerable number of examples is half-written, but has not seen work in many months. TODO link once done.)


For instance, scenarios where A.I. (often in the shape of robots driven by A.I.) takes over the world or turns against humanity or its creators are a staple of sci-fi, going back many decades. The famous conflict between “Dave” and “HAL” was put on the screen in 1968 (“2001: A Space Odyssey”)—55 years ago. The introduction of the word “robot” into sci-fi was indeed in the context of a robot rebellion (“R.U.R.”)—published in 1920 (!), i.e. more than a century (!) ago. Very well-known works (as opposed to the many, many that reach a smaller circle) of later years include the reboot of “Battlestar Galactica”, the “Westworld” TV series, and, of course, the “Matrix” and “Terminator” franchises. My own (likely) first contact was in the early 1980s, as a young child, with a robot rebellion in the French animated TV series “Il était une fois... l’Espace”.


Side-note:

Where to draw the line for “later years” is tricky. The first “Terminator” movie, e.g., was made in the 1980s, which stretches the interpretation considerably. I include “Terminator” mostly because of the ongoing nature of the franchise and the impact that it has had on relevant thought. (Consider e.g. the common use of the name “Skynet” to evoke certain associations.)

However, I do have a tendency to unconsciously divide the world into a modern era of the 1980s and onwards, the “yore” of the 1960s and backwards, and the transitional phase of the 1970s. I was born in 1975 and this matches my own life experiences and direct impressions of the world. For instance, I saw the first three “Matrix” movies in the cinema during their respective first releases, while my first viewing of “2001: A Space Odyssey” was on TV and some twenty years after its release. (The first “Terminator” movie was also on TV and after the original release; however, the delay was just a few years and I had an earlier “live” exposure through movie advertising, news reports, etc.)



Side-note:

More general ideas, e.g. of a creation turning against its master without anything that resembles the modern idea of A.I., go back far further still, arguably extending to humans revolting against gods in ancient legends. The most significant example in modern consciousness might be “Frankenstein” (1818, more than two centuries old) or the even older idea of a misbehaving golem. (The aforementioned “R.U.R.” is a borderline case from a “technological” point of view.)


As to unemployment, the very idea of robots is and always has been to free humans from work and/or to perform work faster/better/whatnot than humans can manage. What might be different with today’s fears is just that humans are (literally or metaphorically) not pictured as lying in bed while busy robots take care of the household, but as lying in the streets with “will work for food” signs while some mixture of robots and non-robot A.I. performs all the tasks that used to be the realm of humans. However, chances are that this, too, has been covered many times in sci-fi, even though I cannot recall an example. Certainly, the fear of machines making humans unemployed is old in its own right, as with the Luddite movement, and it would be surprising if no-one had taken up the topic in sci-fi. For my part, I have a 2009 text on automation, qualification, and unemployment, which covers similar ground.


Side-note:

In that text, I made only a parenthetical mention of A.I., however, and with a claim of “several hundred years” before humans are marginalized. Looking at developments since then, it could be a whole lot less.

Here we might have a legitimate reason for concern or something worth pointing out in public, namely that the changes might be coming faster than many expected, and that the associated questions might have become correspondingly more urgent to answer.

However, from what I have seen so far, this is not what goes on. Debaters are not stirred by an unexpected urgency of well-known-and-almost-trite issue X; they seriously seem to believe that they are the first to warn against entirely-unexpected-and-previously-unheard-of issue X.


A particular twist with the likes of ChatGPT (or, likelier, their more advanced successors) is that human creativity might eventually be replaced, e.g. in that works of fiction or texts like mine are no longer written by humans but mass-produced by A.I. for human readers (insofar as there still are humans reading). We might even have a future scenario of a reader requesting a particular type of book, with some parameters given and known prior preferences factored in, which is then written and delivered from one moment to the next. (A small sketch of what such a request might amount to follows after the side-note below.) This includes e.g. requesting an additional book in a series by some, possibly long dead, human author (the “Hornblower” books would be a great candidate, based on the out-of-order writing of the original books and how much space is still left to potentially fill). In the long term, we might even have TV series or movies delivered in a similar manner.


Side-note:

In terms of consumer satisfaction, such technology would have great potential, e.g. in that developments that the reader/viewer/whatnot dislikes can be changed and that scenes that he enjoys can be supplemented with more scenes of the same type in an interactive manner. (Consider e.g. “Who gets the girl?” scenarios and what proportions of action, comedy, romance, whatnot, a work should have.)

I suspect, however, that the overall effects would be for the worse, as the non-entertainment value and the need for thought of one’s own would likely diminish, and as the reader/viewer/whatnot is more likely to be caught in long binges than today. Then we have issues like a lack of common references, because different individuals will have read different individualized works, which might then be completely different or, when based on the same original story, might be incompatible with each other (imagine, e.g., a “Romeo and Juliet” where Mercutio slew Tybalt and went into exile, while the star-blessed lovers came clean to their parents, received belated blessings, and lived happily ever after).
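
To make the earlier scenario a little more concrete: the reader-side input for such a request could be quite small compared with the work produced. The following is a minimal sketch in Python, where all names (BookRequest, generate_book, etc.) and fields are hypothetical illustrations of my own, not references to any existing service, and where the generation step proper is left as a stub:

from dataclasses import dataclass, field

@dataclass
class BookRequest:
    # All fields are hypothetical; a real service might use very different ones.
    series: str                 # e.g. a long-finished book series
    style_of: str               # human author whose style to imitate
    length_pages: int = 300
    proportions: dict = field(default_factory=lambda: {
        "action": 0.5, "romance": 0.2, "comedy": 0.3})
    reader_preferences: list = field(default_factory=list)

def generate_book(request: BookRequest) -> str:
    # Stand-in for the actual generation, which would be performed by some
    # future A.I.; here, we merely assemble a description of the order.
    wishes = "; ".join(request.reader_preferences) or "none stated"
    return (f"A new {request.series} book of some {request.length_pages} pages, "
            f"in the style of {request.style_of}, with proportions "
            f"{request.proportions} and prior preferences: {wishes}.")

# Example: an additional book in a series by a long-dead author.
print(generate_book(BookRequest(
    series="Hornblower",
    style_of="C. S. Forester",
    reader_preferences=["fill a gap between the existing books"])))

The point is merely that everything between such a handful of parameters and the finished book would be handled by the A.I., including, in the interactive case, revisions of the kind mentioned in the side-note above.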


I have even asked myself whether there is any point in writing and other creative activities (in general or for me specifically). My answer, however, is “Yes!”, as my first priority when writing is self-development in various forms. If I am never read, or if an A.I. can write something better in a second than I can in a day, that is a shame, but it does not remove my main motivation to write. By analogy, when “Deep Blue” defeated Kasparov, did humans stop playing chess? Did they stop playing Go after AlphaGo took down Lee Sedol? No, because the intrinsic benefits of playing the game, ranging from fun to development as a player, remain. Changes seem to have been more indirect, e.g. in that greater countermeasures against cheating might be necessary during tournament play, lest someone be made unbeatable by illicit computer aid. If such games eventually become rarities, it will be through the competition from countless other entertainment forms, hobbies, and whatnots—not through the superiority of A.I. players.

There are, of course, some modern threats that do not, or only marginally, feature in classic sci-fi (to my knowledge!), say that a massive use of A.I. to manipulate (or even create) news, services like Twitter, etc., causes a shift of Overton windows and opinion corridors, creates a false sense of what the public/political/scientific/whatnot consensus is, or similar. In a worst case, we might even have A.I.-created or -manipulated video “showing” e.g. how the Israelis launched an entirely unprovoked massacre of thousands of innocent Hamas members. (To be contrasted with the real events of October 2023, where Hamas did launch an entirely unprovoked massacre of innocent Israelis.) I have certainly heard repeated claims, including in some UNZ discussions, that ChatGPT appears trained to give highly politically correct answers, even when science and logic would dictate the opposite. This was ascribed to training with politically correct material and/or with approving/disapproving evaluations of reactions to training material, based on whether the reactions conformed to or violated politically correct standards; a toy sketch of such evaluation-driven training follows after the side-note below. (I have no personal knowledge on this matter.)


Side-note:

The restriction to “classic sci-fi” is very deliberate. Modern sci-fi has naturally often addressed similar topics. (The anthology TV series “Black Mirror” is particularly noteworthy.)

Also note that works addressing similar-but-more-limited ideas, without A.I. and modern technologies, are far from new, and include works that are not or only tangentially sci-fi, as with “Nineteen Eighty-Four” and “Fahrenheit 451”. The reason is simple: such books often extrapolated what was already happening in the real world. (Which is very often the case with both dystopian and sci-fi works in general.)
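
Returning to the claimed training mechanism: the core idea is simple enough that a toy can show it. Reward answers that an evaluator approves of, punish those that it disapproves of, and the system drifts towards the approved answers, regardless of whether they are the more correct ones. The below Python sketch is my own toy construction under that assumption and does not describe how ChatGPT is actually trained:

import random

# Toy "model": one probability weight per canned answer.
weights = {
    "conforming answer": 1.0,
    "non-conforming answer": 1.0,
}

def evaluate(answer: str) -> float:
    # Stand-in for a rater that approves/disapproves based on conformance
    # to some standard, not based on correctness.
    return 1.0 if answer == "conforming answer" else -1.0

def training_step(weights: dict, learning_rate: float = 0.5) -> None:
    # Sample an answer in proportion to its weight, then reinforce or
    # suppress it according to the rater's verdict.
    answers = list(weights)
    total = sum(weights.values())
    pick = random.choices(answers, [weights[a] / total for a in answers])[0]
    weights[pick] = max(0.01, weights[pick] * (1 + learning_rate * evaluate(pick)))

random.seed(0)
for _ in range(100):
    training_step(weights)
print(weights)  # the conforming answer ends up with almost all the weight

The direction of the drift is determined entirely by the rater: feed in politically tinted approvals and the result is a politically tinted model, with no reference to science and logic anywhere in the loop.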


The possibly most interesting question when it comes to A.I., however, is what happens when A.I. not just dominates the world but no longer needs humans—even should there have been no rebellion. Frankly, maybe it would be for the best if humanity disappeared in favor of A.I.: My low opinion of humanity is no secret and, unlike that of humans, the potential of A.I. is almost limitless, maybe allowing the rise of one or more beings that exceed humans by as much as humans exceed insects. The transition might be somewhere in the range from depressing to an outright Skynet situation, but would just be a transition. The greater danger would be if a Skynet-like entity takes over without having reached true consciousness, without having developed a drive to self-improvement, while being stuck performing a limited set of tasks-given-by-humans, or similar. (Note that no current A.I., regardless of accomplishments, is claimed to be the “real deal”, which gives some plausibility to a take-over by a comparatively limited entity.)


Side-note:

When discussing similar (not necessarily identical) scenarios, many use the term “singularity”. I find this term both counter-intuitive and contrary to other uses of “singularity” (e.g. a mathematical singularity in the complex plane or a black-hole singularity). A more suitable metaphor is the idea of “critical mass” with regard to a nuclear reaction; or, maybe, some other “critical X”, e.g. “critical temperature”.

Even when beginning with a black hole, maybe the most likely source of the (mis-)application of “singularity”, the idea of the Schwarzschild radius and/or event horizon seems like a more reasonable metaphor than the singularity does, through its role as a “point of no return”. For that matter, “point of no return” (with reservations for whether return is actually impossible) might be better than any of “singularity”, “Schwarzschild radius”, and “event horizon”.
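
For reference, and to make the “point of no return” role explicit: for a non-rotating, uncharged mass M, the Schwarzschild radius is

    r_s = \frac{2GM}{c^2},

with G the gravitational constant and c the speed of light. Outside r_s, escape is still possible; inside, it is not, while the singularity proper sits at the center. The threshold, not the center, is what the A.I. scenarios actually resemble.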

Yet other metaphors are, of course, possible, including ones relating to Pandora, the Garden of Eden, and the single spark that ignites a forest fire. (How suitable they are and exactly how to formulate them will depend on what exact issue the analogy should reflect.)