
The spectrum fallacy

Main text

Many fallacies begin with some truth, e.g. that this-or-that is “on a spectrum”, “on a continuum”, or, more generally, simply that borders are hard to draw.


Side-note:

I will stick with “spectrum” and “spectrum fallacy”, even when another word might be more appropriate, simply because the fallacy discussed below usually appears in the context of the word “spectrum”.

However, as I increasingly discovered during writing, “continuum” is the better word in many cases. In others, “scale” might be more suitable (also see a later discussion of scales). I apologize if “spectrum” looks a little out of place here and there.

(This also raises the question why the minters of e.g. “autism spectrum” did not go with “autism continuum”/“X continuum” or, when some reasonable scale can be found, “X scale”.)


So far, there is nothing wrong, as almost everything is on a spectrum—if with some reservations discussed below. In a next step, however, many jump to absurd conclusions, e.g. that something “does not exist”, “is just a social construct”, or needs special treatment not given to similar issues not (or not yet) deemed to be “on a spectrum”. This especially among the politically active and the politically correct.

The problem is that, again, almost everything is on a spectrum, which implies that being on a spectrum does not warrant special treatment and that conclusions like “does not exist” or “is just a social construct” would have to be extended in an absurd manner.

Consider e.g. a variation of the classic sorites paradox: Take a sand heap, remove a grain of sand, and then another, and another, and another, until what is left is no longer a sand heap (if in doubt, because there is not a single grain left). As it is virtually impossible to say when the sand heap ceased to be a sand heap (or an ultimately arbitrary border must be used), we might then conclude that sand heaps exist on a spectrum (continuum, whatnot). To conclude that sand heaps do not exist would be utterly absurd. To say that a sand heap is “just a social construct” could be viewed as true for some definition of “social construct”—but one so broad that the term would be rendered useless. (Even leaving other criticisms of both the term and the underlying concept aside.) Certainly, if sand heaps should be given special treatment over, say, snow heaps, it should not be a matter of “spectrum” but of different actual characteristics, e.g. that a snow heap is far more likely to melt. Etc.
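
For those who prefer code, the arbitrariness of the border can be shown in a minimal sketch (the threshold of 1000 grains is an invented value, as, indeed, any value would be):

    # Sorites in code: remove one grain at a time and ask, at each step,
    # whether a "heap" remains. Whatever cut-off we pick is arbitrary:
    HEAP_THRESHOLD = 1_000  # invented; 999 grains are no less heap-like

    def is_heap(grains: int) -> bool:
        return grains >= HEAP_THRESHOLD

    grains = 10_000
    while grains > 0:
        grains -= 1
        if not is_heap(grains):
            print(f"Ceased to be a heap at {grains} grains?!")
            break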


Side-note:

There is, of course, also a snow-heap spectrum analogous to the sand-heap spectrum. However, this does not really matter and the same would apply even absent a snow-heap spectrum. What might legitimately matter, here and elsewhere, is that the degree to which something has a certain characteristic will often be a legitimate reason for a more nuanced and/or varied treatment on an individual basis. This is fundamentally different from the type of special treatment involved with the fallacy. For instance, cf. below, to say that a “high-functioning autist” should be left to his own devices while someone on a different portion of the “autism spectrum” might need supervision is perfectly acceptable—but to claim that “Autism is a spectrum! Autism [itself] must be given special treatment!” is not. Likewise, concluding that autism is not and must not be viewed as a “disorder”, or similar, based on a spectrum aspect would be nonsensical. (However, importantly, I do not rule out that the same conclusion could be reached for a different and more legitimate reason. Certainly, from the point of view of many HFAs/Aspies/whatnot, it is the rest of the world that seems to suffer from some form of disorder.)


Likewise, consider something as trivial as “warm” and “cold”. By a similar reasoning, we might now conclude that e.g. warm and cold water or, generally, warmth and coldness do not exist. Looking in more detail, there are a number of further complications that might, m.m., be relevant in many other cases without invalidating the corresponding concepts—even the issue of a spectrum aside. Consider e.g. that what is warm and cold is often relative, context dependent, and/or subjective; the question whether a partition into “warm” and “cold” is sufficient or whether further categories (e.g. “hot” and “tepid”) might be needed; and the confusion between something hot in the sense of having a high temperature and in the sense of “feeling hot” (and, maybe, some other idea; the same applies, m.m., for “cold”).


Side-note:

Human perception of e.g. hotness is in large part based on the transmission of heat. A piece of steel conducts heat more readily than a piece of wood, e.g., and a warm piece of steel is then perceived as warmer than a piece of wood at the same temperature. This while a cold piece of steel is perceived as colder than a piece of wood at the same temperature. (Where “warm” and “cold” are seen relative to some human-centric and varying-with-the-individual reference temperature, likely somewhere between 20 and 30 on the centigrade scale.)


Ditto “strong” vs. “weak”, “continent” vs. “island”, “language” vs. “dialect”, and countless other examples. Indeed, what is not on a spectrum of some sort is in the minority—and the more so when “spectrum” (as here) is taken as a short-hand for a greater family of complications: Whether someone is a Jew, e.g., is less obviously a matter of a spectrum in the strict sense, but the question might have a religious, cultural, or racial answer and not everyone will agree on the correct answer (some do not consider a convert Jewish, e.g.). Likewise, does “Swede” refer to someone of “Swedish descent” or someone who is a Swedish citizen? (For both Jews and Swedes, I was tempted to speak of “ethnicity”, but that word, too, might be ambiguous, if, arguably, in a manner of less relevance to this text.)

However, there are also other reasons why spectrum-this-and-spectrum-that can be a fallacy or, short of that, a misleading over-simplification. Consider e.g. issues like something simultaneously being on more than one spectrum or things being forced onto a spectrum that really do not belong on a spectrum. For the former, consider classifications of e.g. earthquakes that can include factors like energy released, damage done, costs incurred, and human lives lost. These factors are correlated, but they do not go in lockstep and several spectra might be needed to give a fair view.

For the latter, consider an attempt to squeeze these factors into one single spectrum—or consider taste: When I was a child, I was taught that salt and sweet were opposites, that sour and sweet were opposites, and that bitter and sweet were opposites. (Umami was not yet popularized in the West.) A child could, and did, see that something was amiss. Unfortunately, the adults did not, and only later did I learn the explanation: that those who did not understand taste tried to force sweetness (and the respective counterpart) into spectra where it did not belong. That there are different tastes does not mean that they belong on a spectrum. (Yes, different foodstuffs can be sweet to different degrees and be on a “sweetness spectrum”—but the low end of that spectrum is absence of sweetness. Saltiness is not the low end—it belongs on a spectrum entirely of its own. Ditto bitterness and sourness. See the sketch after this passage.)

With both Jews and Swedes, a part of the problem is that one label is imposed on different aspects of an issue—and it could be argued that each of these aspects is on a spectrum of its own. Indeed, with a bit of imagination, multiple spectra can be found virtually anywhere where a single spectrum is proposed. A sand heap, e.g., might not be on just a number-of-grains spectrum but on a coarseness-of-grain and/or a purity-of-heap spectrum, and likely a few others on closer inspection. (Consider questions like when a heap of coarse sand becomes a pebble heap, or when a heap of a sand–wheat mixture can count as a sand heap, as a wheat heap, or must be designated as a mixture of sand and wheat.)
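
Returning to the taste point, it can be illustrated with a small sketch in code (the foods and numbers are invented for illustration): if sweetness and sourness shared one spectrum, a single number per food would suffice; independent axes are needed so that e.g. lemonade can score high on both.

    # Each taste on its own 0-1 axis; the low end of each axis is the
    # absence of that taste, not some other taste. A single sweet-vs-sour
    # number could not represent lemonade, which is high on both axes.
    foods = {
        #           sweet  salty  sour  bitter   (invented example values)
        "honey":    (0.9,  0.0,   0.1,  0.0),
        "lemonade": (0.7,  0.0,   0.8,  0.0),
        "tonic":    (0.4,  0.1,   0.2,  0.7),
    }
    for name, (sweet, salty, sour, bitter) in foods.items():
        print(f"{name:8s} sweet={sweet} salty={salty} sour={sour} bitter={bitter}")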

There are absolutes or near-absolutes, e.g. in that someone either is dead or is not dead. However, even with death, some room for ambiguities exists, as when a flat-lining patient is resuscitated. What if (very much against my expectation) cryonics works? Are the “icy dead people” truly dead? In other cases, a mere change of focus can turn the absolute into something on a spectrum, e.g. in that we replace “blind” with something focusing on the ability to see clearly, of which blindness is only one extreme. (Also note the concept of purblindness and the redundant neologism of “legally blind”. The latter has the additional implication and complication that even someone not blind can be considered blind for legal purposes.)

Even looking back at autism, it might have been a little naive to neglect the idea of a spectrum—and chances are that the idea was not neglected. The main difference between today’s view and a more original view might simply be that the current spectrum extends from the highly autistic to the fully “neurotypical”, while an implicit older spectrum might have ended with milder cases of autism. (Glossing over the fact that the neurotypical are not a homogeneous group either.) This would certainly be forgivable, in as far as e.g. Down’s syndrome was an either–or: the severity of symptoms might vary among those who have it, but either someone has the underlying chromosome error or he does not. Nevertheless, it is then the endpoint that has shifted, while the previously implicit idea has been made more explicit—not that an entirely new idea has arrived. Moreover, if the fully neurotypical are included on the spectrum (maybe, even if they are not), we might see a difference in symptoms (for want of a better word) and we might need several spectra to track even just the “degree of autism”, e.g. one for introversion, another for the ability to read faces, and yet others for other aspects. (Cf. the above earthquake discussion.) Then we have issues like whether there is one explanation or several for a certain set of symptoms or other observations. For instance, if two persons are introverted, how do we know that they are so for a similar reason and that it makes sense to group them in a manner that goes beyond symptom description? (By analogy, two different patients might score similarly on spectra for coughing, sneezing, and soreness of throat, but it does not follow that they both would score similarly on a “degree of influenza” spectrum—the one might have influenza and the other the common cold.) What if it turns out that the “old” autism actually was caused by some specific underlying issue and that the many today viewed as “on the spectrum” show somewhat similar characteristics for a very different reason? (In a further analogy, it is possible to be exceptionally tall due to a pituitary condition, or some other “medical issue”, and equally through simply being a genetic outlier—and the implications of the different reasons might be quite different when we look beyond tallness.)


Side-note:

More generally, I suspect that the rise of the many new spectra is rooted less in the belief that something is a spectrum and more in the all-encompassingness of that spectrum. By analogy, instead of this-and-that being seen in a manner similar to hair colors (“some are red-heads, some are blond, some are whatnot; but not all red-heads are equally red, [etc.]”), the view might now be more similar to that of height (“some are tall, some are short, some are in between; but it is all just one spectrum of height”).


In the end, it might even be disputed whether it makes sense to speak of an “autism spectrum”. (If all-encompassing, as opposed to an earlier spectrum for just autists; and as opposed to spectra for individual characteristics, like introversion.) Maybe it would be better to be content with the observation that we all tend to have the same many traits, just to a (sometimes greatly) differing degree. However, it certainly still makes sense to speak of “autism”, just as it still makes sense to speak of “sand heaps”, even if the border can be hard to draw. (Ditto, cf. below, “schizophrenia” and whatnot.) In particular, while the border might be elusive or debatable, there is still plenty of room for clear cases further away from the border. To take another easy-to-understand illustration, consider a “must be this tall to ride” sign set at five feet. Some kids will be just under and some just over, there might well be judgment calls involved, there might be debates whether being tall enough in shoes suffices or whether a barefoot measurement is needed, and someone seemingly tall enough might fail when a bushy head of hair is pressed down. However, there will be even more who are clearly above or clearly below; the newborn child will be indisputably too short; a 7-foot NBA player indisputably tall enough. (Indeed, he might turn out to be too tall for some rides.)

But let us say that we accept the “autism spectrum”: Once this idea was established, there was no reason to be surprised if e.g. schizophrenia or ADHD turned out to be “on a spectrum”. Nevertheless, some have pushed the idea as something unexpected—and look at us enlightened humans of the 21st century and how we shine over those barbarians, those White supremacists, and those Dead White Men of the 20th century, who actually thought that schizophrenia existed! In the end, this says more about the intellectual limitations of the pushers than about anything else—and, yes, I do suspect a strong ideological connection, as will be made plausible by the following examples.


Addendum:

Even this late discovery of a spectrum might be a straw-man entirely, which would make the whole “We are so enlightened! Yay us!” aspect even weaker. For instance, Eysenck’s 1976 (!) “A Textbook of Human Psychology” repeatedly speaks of similar classification issues in the chapters on “Abnormal and Applied Psychology”. What is new, then, might be less the idea of a spectrum and more the label of “spectrum”—or, e.g., that some PC group makes noise about having discovered something already long known.

(This point might justify a partial re-write and sharpening of my criticism, but I lack the time and stick with just an addendum.)



Side-note:

I am, however, uncertain of the exact nature of this connection, and it might well vary from proponent to proponent. (And I certainly do not say that everyone who sees a new spectrum as unexpected has a hidden agenda—many might simply be a little naive or weak at generalization.) Common variations, I suspect, can be found in the extended “We must never, ever say anything mean!” family and the wish to suppress any and all classifications/statements/whatnot that someone could conceivably take to imply that person A would be better or worse than person B by some standard. Note absurdities like the euphemism “differently abled” (saying “blind”, e.g., would imply that the blind are inferior to the seeing), accusations of “fat shaming” (any criticism of anything fat related would imply that thin humans are superior to fat ones), or the stubborn insistence that races do not exist (admitting the existence of races would imply that some race is superior to the others; also see below). This type of thinking would, again, tell us more about the intellectual limitations of the pushers than about anything else.



Side-note:

More generally, a common problem with the Left is that trite and trivial observations are taken as highly important, unexpected, or whatnot. A likely explanation is that too many on the Left are poor thinkers, are presented as college freshmen with some idea that others arrived at independently and when considerably younger, are unduly impressed, and then try to impress others with this not-very-impressive revelation. For instance, as I discuss in some other text, I was no more than five when I confronted my parents about why men always wore pants while women varied between pants and skirts—and five, not eighteen, is an appropriate age to ponder such obvious issues. (This does not rule out that more complex and subtle questions in a similar family can be pondered by adults, but the questions actually pondered within e.g. “gender studies” are often juvenile or, when not, based on distortions and one-sided perspectives that try to fit a pre-determined conclusion to the data, instead of adapting the conclusion to the data.)


To take another example: A both hackneyed and trite claim of recent days is that “gender is a spectrum” or “sexuality is a spectrum”—thrown out as if a great boon of wisdom to the ignorant of the world. (This often in a manner that makes me suspect a wish to deny legitimacy to “traditional” men/women, “traditional” male/female roles, heterosexuality, whatnot—even when they reflect a genuine personal preference/choice/whatnot.)

However, except for the label of “spectrum”, there is nothing new to this. A strong awareness in society that there are e.g. women with more “traditionally male” interests and men with more “traditionally female” interests has been present at least through my entire lifetime. For instance, my father was big on decorative sewing for several years of my childhood and has always enjoyed cooking, neither of which applied to my mother. For instance, my maternal grandmother watched TV sports at a rate that matched the stereotypical cartoon husband (my father, in contrast, always sighed and rolled his eyes when I watched sports; my grandfather died too early for me to have insight into his take on sports). In all cases, this was seen as unremarkable. Looking at e.g. film and TV, there was already a wide range of “non-stereotypical” characters represented, as with a homosexual main character in “Soap” as far back as the 1970s (if one that caused controversy at the time) and the capable, active, and fighting Princess Leia in “Star Wars” (likely, 1977–1983, and a radically different woman from, say, Dale Arden). Beginning at some point of the 1980s, the British “’Allo ’Allo!” featured violent women in trench coats, a gay German lieutenant, repeated instances of cross-dressing, the sometimes effeminate René (male main protagonist) and his often somewhat masculine wife, and likely some other cases that now escape me.


Side-note:

The list of examples can likely be made fairly long, but I am hindered both by my memory, if we go back as far as the 1980s or earlier, and by the fact that there are many potential sources of examples that I have never watched (for instance, “Charlie’s Angels” and “The Bionic Woman”, both of which presumably involved women in action-oriented roles). Then there are some disputable cases, notably “Tootsie”, which did feature a man pretending to be a woman, but, to my recollection, for strictly pragmatic reasons. (Ditto “Mrs. Doubtfire”, maybe ten years later.)

A notable example in my own childhood viewings was the French “Once upon a time”, in which little difference was made based on sex, but it is likely little known in the Anglosphere.

A complication with any list of examples is that it almost necessarily takes a sum of examples to indicate a spectrum—and the more so when, as here, we really need several spectra (cf. earthquakes and whatnot above). A single example (be it in real life or in fiction) might still be compatible with a polar division, but when we look at the sum of all portrayals and the sum of all real-life encounters, one or more spectra become visible.


Move into the 1990s, let alone the 2000s, and it becomes increasingly hard to find a TV series that does not feature someone “untraditional”—even long before trans-mania broke out.

Outside of TV, long-haired men were common beginning in the 1960s, while the likes of David Bowie soon followed.

Going back further in time, there still seems to have been a considerable awareness of these issues. (Notwithstanding that those straying “too far” from the norm often were poorly viewed or treated.) For instance, even a society that disapproved of women hunting must still have had an awareness that there were women who liked to hunt. (If in doubt and at the very least, through knowledge of mythology, as mentioned below. However, personal observations were likely not very rare either.)


Side-note:

A related problem is that, within limits, what is considered a male or female activity, occupation, interest, whatnot has occasionally changed. For instance, weaving was once a common male occupation, while the children’s tales of old abound with tailors—not seamstresses. For instance, the vicious circle of evermore female-centric literature and an ever-greater female dominance among book-buyers might create the impression that reading is a female activity. For instance, at least during my childhood, the obsession of some girls with horses made everything to do with horses seem girly, but horses have likely been a traditional male area, often strongly so, in most cultures and at most times. (An interesting thought is that the rise of the car might have moved male interest away from horses.)

Ditto (again, within limits) clothing styles and whatnots. There have e.g. been eras when perfectly straight and manly men wore tights in public. Calling a highlander a “skirt-wearing sissy” is a mistake that few have made twice. Etc.


In the big picture, then, there is nothing truly noteworthy about a remark like “gender is a spectrum”. In as far as it is true, it is trite; and the truthfulness in detail can be disputed with an eye to complications like those with earthquakes. (We might need separate spectra to catch differences and variations like women being more interested in children and men in technology, women more in gossip and men more in politics, women more in looking pretty and men more in being good at sports, etc.) The main reason for such claims is likely to push an ideological angle. Note, in particular, the danger of jumping from “is a spectrum” to “and most of us are in the middle” or “and humans are uniformly distributed throughout”: in reality, there are two clear poles of “male” and “female” characteristics, men are more or much more likely to be reasonably close to the male one, and women to the female one.

Protestations about “binary” sexes or sexuality, or “I am non-binary!!!”, are usually particularly ignorant of the opinions of others and/or deliberate straw-men (“straw-persons”?). I strongly suspect that what “I am non-binary!!!” actually amounts to is approximately “I am a perfectly regular human being, who incorrectly believes that he/she/it has stumbled upon something new and has a redundant need to apply a label to something that my parents took for granted at my age.”, excepting those even more misguided cases where it is intended e.g. as virtue signalling or a show of solidarity. For instance, words like “man” and “woman” have never implied a large set of fixed characteristics that apply to each and every individual man resp. woman. (Yet another case of Leftists not understanding individual variation.) There have been poles around which the two sexes have been centered, true, but nothing like a single unvarying point. Note as a representative example how men are tall and women short, but how some individual women are taller than most men and vice versa. Similarly, note how many women have been accepted in male roles even in the past, e.g. as queens of England. Yes, these have been a small minority, but they clearly show that a pure binary division was not used or, at the outside, not dominant. Outside the immediate reproductive area, at least in the “West”, there simply have never been any absolutes—and even within the reproductive area more hypothetical scenarios were well known, as with mythological hermaphrodites, life arising without sex, how Zeus, in some sense, gave birth to Athena and Dionysus, how Loki turned himself into a mare and managed to get pregnant, etc. (And outside the reproductive area, mythology also provides us with e.g. the Amazons and the Valkyries—never mind Athena and Artemis, goddesses of war resp. hunt.)


Side-note:

An obstacle to good examples is that those who claim to be non-binary, in my impressions so far, make no clear statement about what they understand by “non-binary”. Often, it just seems to amount to something like “I see myself as human—not [fe]male!”, with the implication that most others see themselves as [fe]male first and humans second. This implication, however, does not match my own observations of others. It certainly does not match me. I am, of course, well aware that I am male and if asked what sex I am, I would certainly answer “male”—but this is not something that plays a part in my normal day. I am also Swedish, tall, and, at the time of writing, 49—and this is not something that plays a part in my normal day either. By that standard, the prevalence of “being non-binary” would be so great and go back so long in time that the idea becomes pointless. The difference between the self-professed non-binary and the rest of the world is then more that the latter feel no need to put a loaded label on what they see as normalcy.

Another common explanation seems to be something like “I do not think that we should be bound by ‘gender roles’—and I identify as non-binary to make a statement [show that I am not bound, whatnot]!”. Here, it is true that history has often shown a partially imposed division, but this does not apply to current Western society, and has not for decades, at the least. (Contrary to occasional Feminist and/or TV straw-men.) I do not think that we should be bound by “gender roles” either—just that men and women will make various choices with a strongly varying likelihood. (Note the difference between a true “Equalist” attitude of letting women choose whether they should go for a career or be housewives, or whatever else might apply, and the common Feminist attitude that women should have a career—period.)

(A discussion of more alternatives would involve even more speculation, e.g. as to how many might just want to fit in by following a certain trend, and seems unwise. I note, however, that a take of “I cannot make up my mind whether I am a man or a woman” would be a poor fit for a label like “non-binary”. Even the idea that it is necessary to make up one’s mind or to find some third category is misguided and naive.)


For a final example: Race is a particularly annoying (and far from new) problem area. There are issues that can (and should, when relevant) be discussed, e.g. when and whether race is a useful concept at all and whether the word “race” is appropriate. However, there is a strong ideological drive to force a narrative that “Races do not exist!!!!!!!” and “Even speaking of races is racist!!!!!!!”—while the legitimate questions are left by the roadside. Claims about a spectrum are one way of pushing this narrative—but a faulty one: To deny the existence of races (as opposed to e.g. the usefulness of the race concept) based on such arguments is tantamount to denying the existence of sand heaps.


Side-note:

As I discussed in some Wordpress text (TODO link after import), much of the U.S. controversy around the existence of races goes back to a linguistic accident—that humans are divided into races while dogs, cats, horses, whatnots are divided into breeds. This while my native Sweden uses “ras” for both humans and domestic animals (ditto, m.m., many other countries). In Sweden, arguments against the existence of human races would apply equally to e.g. the existence of dog breeds, because the equivalent divisions carry the same name; in the U.S., this is not the case and a flaw of logic is hidden by the name difference. Of course, once a sufficient mass of race-idiocy had built up in the U.S., the sheer ideological pressure began to overpower the rest of the world.

(Looking in more detail, biological classifications are also “on a spectrum”, and what grouping should carry what rank is often open to debate. Comparing e.g. human races and dog breeds, the match is reasonable, but most (maybe, all) dog breeds are separated by fewer generations than the “traditional” human races, while humans show a lesser variation in phenotype between races than dogs do between breeds. This seeming paradox is a result of human control of dog breeding for specific purposes, resulting in “unnatural selection”, while humans have mostly been subject to regular natural selection.)


Excursion on the predictability of spectra

A particular annoyance is that many seem to be surprised that this-or-that would be “on a spectrum”, especially when it comes to personality characteristics, mental disorders or “disorders”, and similar. In many or most cases, some form of spectrum would be expected already from the analogy with e.g. “warm” vs. “cold” or the general observation that many or most human characteristics seem to follow something roughly resembling a bell curve. (At least, if we make some allowances, e.g. that a characteristic like height might require different curves for men and women and for different age groups.) Failing that, as soon as genetics enters the field, a spectrum becomes likely: Most characteristics are not governed by single genes (alleles, whatnot). Instead, there are several-to-many that exert influence on such characteristics. For example, let a certain characteristic be governed by ten different on/off toggles. (This for simplicity, as non-binary relationships are harder to discuss, but would not necessarily increase the illustrative value.) If these have a 50/50 distribution and are independent of each other, only 1/1024 (given a large enough sample) will have all off and another 1/1024 all on. Everyone else will be somewhere in between, with a mode of five on/five off.
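
The arithmetic of the ten-toggle illustration can be verified with a few lines of code (a sketch of the simplified model only, not of actual genetics):

    # Ten independent 50/50 on/off toggles: the number of toggles that
    # are "on" follows a binomial(10, 0.5) distribution.
    from math import comb

    N = 10
    for k in range(N + 1):
        print(f"{k:2d} on: {comb(N, k):3d}/1024 = {comb(N, k) / 2**N:.5f}")
    # The extremes (0 or 10 on) each occur with probability 1/1024, while
    # the mode of five on/five off occurs with 252/1024 (about 0.246).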


Side-note:

The above is a considerable over-simplification. In addition to what should be clear from earlier portions of the text, consider e.g. that there is often a gene–environment interaction and that some characteristics might have a threshold (say, a “ketchup effect” that kicks in only above the xth toggle) or that some toggle might have a far greater effect than another. The general idea should be clear, however.


Even if we look at environmental influences, similar remarks could apply, as various external influences, themselves, exist on a spectrum. For instance, parents are not perfectly competent or incompetent, loving or unloving, whatnot. Even if they were, there is usually more than one external factor that influences the development of any given characteristic, and we might be back to our ten toggles again. (As a consequence, the widespread belief in the severely outdated “tabula rasa”/“nurture only” view of the human mind does not provide a strong explanation for the annoyance.)

Excursion on precedents and fuzzy logic

While the idea of spectra has been increasingly popular in some circles since, likely, some point in the 2010s, similar ideas have existed elsewhere for a long time. The aforementioned sorites paradox goes back at least to the ancient Greeks, for example.

A particularly interesting example is “fuzzy logic”, which replaces traditional true/false logic systems with systems based on having a certain characteristic to a certain degree, e.g. in that something is not classified as is-a-sand-heap (period) or is-not-a-sand-heap (period)—but might have a “sand heap-iness” of 0.73 on a 0–1 scale (“spectrum”).
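
As a minimal sketch of the idea (the membership function and its cut-offs are invented for illustration; real fuzzy systems use whatever function fits the application):

    # A fuzzy membership function: instead of a binary is-a-sand-heap,
    # map the number of grains to a degree of heap-ness in [0, 1].
    def heapiness(grains: int) -> float:
        LOW, HIGH = 100, 10_000  # invented cut-offs for illustration
        if grains <= LOW:
            return 0.0
        if grains >= HIGH:
            return 1.0
        return (grains - LOW) / (HIGH - LOW)  # linear ramp in between

    for n in (50, 500, 7_327, 20_000):
        print(f"{n:6d} grains -> heap-ness {heapiness(n):.2f}")  # 7327 -> 0.73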


Side-note:

As occurs to me during writing, the existence of various scales, including the “on a scale from 1 to 10” type for subjective assessments, might make the idea of spectra even less remarkable—and the obsession with them all the more remarkable. True, such a scale is often discrete, in that e.g. someone might answer 3 or 4, but not π, “on a scale from 1 to 10”, but such scales are still a very different thing from a strict binary choice, and the jump from e.g. 10 potential ordered states to a continuum is small in a context like psychometrics.

Even with mental issues and complex traits, such scales might have a long history. I seem to recall, e.g., how Eric Berne related a discussion with a patient who described himself in terms of being just a 73-percent jerk today (or some such; the number used, in particular, might have been something very different). Whatever book this was might have been from the 1970s, give or take. This while subjective scales have commonly been used to gather data on less complex traits, and far further back than the 1970s, e.g. in the form that a survey taker might be asked whether a certain claim applies to him to a very high, high, low, or very low degree.


This came with a fundamental realization that few characteristics are truly binary—and it came a long time ago. I encountered fuzzy logic for the first time in a popular-science magazine in the 1980s, likely in my early teens, and the field was decades old even back then. That an allegedly intelligent and educated adult of 2024 would be ignorant of even the idea of fuzzy logic, and thus have been “protected” from the immediate analogy with spectra, seems unlikely. Vice versa, those who are aware of fuzzy logic almost must view the idea of spectra as trivial.

Excursion on autism examples and choice of words

I use autism as an example repeatedly, because it is the currently most famous and, maybe, first significant explicit use of “spectrum” in the senses involved here. (As opposed to e.g. a spectrum of color, which has a much longer history.) This to the point that I have seen uses of “spectrum” to imply specifically the “autism spectrum”, even in discussions where a context of autism was not clear—and have used it in the same sense, myself, on some few occasions. A particular oddity of formulation (again, one that I have used, myself) is to speak of someone “on the spectrum”, with the implication that this someone deviates from the “neurotypical end” of the spectrum in a noticeable manner. As the spectrum seems to be intended, however, even the neurotypical are on the spectrum, just close to one of the ends. (With the secondary reservation that the word “end” might be misleading for some spectra or for one of the directions of a given spectrum, on a “there is always a bigger fish” basis.)

A secondary concern is that there are strong signs that I am myself an Aspie or otherwise “high functioning”, which makes such examples more natural.

I utterly reject the nonsense that is “people first” language, be it with regard to autists or any other group: It is illogical, linguistically nonsensical, and demeaning and patronizing towards both those referenced and those listening/reading—and I view it as both offensive and ignorant.

Excursion on older moves away from spectra and an aversion to the old

An interesting move in the other direction is that spectra or approximate spectra that used to be popular might today be derided. Consider the division into stone/bronze/iron ages, which refers to rough areas on a spectrum. (This with or without further subdivisions. Note that further subdivisions show the spectrum aspect more clearly.)

Here, unlike with the newly popular spectra, there seems to be an understanding that a single spectrum is insufficient. In a next step, this raises the question of if and when these newly popular spectra will be derided for being too limited. The true issue might be less one of spectra and more one of “new things good; old things bad”, by which an old lack of a spectrum and an old use of a spectrum are equally to be derided as “unenlightened”. In a twist, this “new things good; old things bad” attitude appears to have a long history and is, itself, an “old thing”.


Side-note:

A potentially overlapping, but off topic, issue is that the application of categories like stone/bronze/iron age can make (or be construed to make) some groups, in some sense, look bad. (Cf. an above side-note on topics like “We must never, ever say anything mean!”.)

A particular embarrassment might be when someone naive realises how developed even a neolithic Eurasian society could be, with e.g. large buildings and small cities—and then realises that this-or-that lauded (sub-Saharan) African society, maybe even some lauded “New World” societies, would have to be classified as neolithic or close-to-neolithic (if this scale was applied; and even discounting the issue of materials). This while some groups were indisputably still paleolithic at the time of first contact with Europeans.

Here there might be a clue to a different explanation: Putting autists on a spectrum with neurotypicals sends a message of “We are all equal!”, while a stone/bronze/iron spectrum might do the opposite.


In a bigger picture, a move away from e.g. the stone/bronze/iron ages might well be a good thing, as the division is not only limited and limiting, but, in my understanding, was more intended to classify and date artifacts than to compare civilizations and societies. The problem is the later “We are so enlightened! Yay us!” attitude, which is usually based on a flawed understanding of earlier ideas and/or the level of insight around them—and often on an unduly optimistic resp. pessimistic view of younger and older generations.

It might well be that this-or-that deficiency has only become clear over time, as knowledge and understanding have grown, but there seems to be a strong modern tendency (a) to underestimate even the original flexibility of an idea (and the level of insight of its original proponent), (b) to underestimate the subsequent development of the idea, and (c) to entirely fail to realise that the “enlightenment” and “revelations” of the last decade-or-so almost always reflect what science said long before that—if the “revelations” are at all correct. I am sometimes reminded of Columbus, as an analogy for the modern “enlightened” Leftist, and of his portrayal as a lone genius who fought for the idea that the Earth is round, while, in reality, this roundness was established knowledge and Columbus fought for the incorrect idea that the Earth was much smaller than it actually is. Columbus and the modern Leftist, then, match Hazlitt’s view of Keynes: what is original is not true and what is true is not original.