While I often complain about mistakes of thought, estimation, or similar made by others, I make no claim of perfection of my own. On the contrary, I often make mistakes myself. (The main difference from many others might simply be that I am aware of my own fallibility and behave accordingly, including that I am more open to revising my opinions, more likely to check my facts, more likely to double-check some line of thought, and similar.)
Below and over time, I will gather some particularly interesting examples of my own mistakes.
Many of the discussed mistakes arose in my childhood. This is, in part, because I was more prone to error back then; in part, because the mistakes of a child can be easier to explain to others, as they presuppose less specific knowledge than many adult mistakes. (A programming error, e.g., might not be understandable to someone with little knowledge of programming without disproportionate explanation. Nevertheless, many other pages on this website include mentions of my own adult mistakes.) To boot, it might well be that one's own mistakes are easier to recognize the further back they lie. However, even the childhood mistakes are interesting (or, maybe, interesting-to-me), because (a) they can give some insight into what types of mistakes might be made when, and (b) many people seem to make very similar mistakes even as adults.
A note on a lengthy excursion around business/economics/whatnot: Revisiting this page months after the original publication, and after the addition of some new entries, I find myself embarrassingly uncertain what criteria I used to put some examples in that excursion and others in the main text. Take the division with a grain of salt and feel free to enjoy the fact that a text about mistakes contains mistakes of its own. (I will probably save excursions for more tangential material in the future.)
As a child, I participated in a quiz where one of the multiple-choice questions asked how many baskets of bread and fish had been left after Jesus fed the hungry masses. I reasoned that his power was very great and, therefore, the largest number must be correct. (This, obviously, was long before I became an atheist.)
This did not give me the correct answer, however.
Even accepting the premises of Christianity, there were at least two problems with my reasoning:
Firstly, no matter how many baskets had been left over, the quiz makers could always have picked a larger number. Based on the twelve that were left, they could easily have picked thirteen; had it been thirteen, they could have picked fourteen; etc.
The version with twelve was the more common in my religious exposures. However, the Bible contains (at least) two such events, the other leaving seven baskets. I cannot say with certainty which of the two was used for the quiz, but the same principle holds for both events/numbers.
A more insightful quiz taker might even have speculated that the quiz makers would have made the right answer the highest with a sub-average probability, to counter exactly the possibility that quiz takers preferred the highest number. (Ditto, the lowest.) Then again, a more insightful quiz maker might have compensated for the possibility that the quiz taker would be aware of that possibility. Etc. Meta-arguments of this type can be dangerous.
Secondly, more baskets might potentially point to more power, but they would also potentially point to less precision, insight, or similar. By analogy, would we view a caterer as better or worse, if he provided far too much food? If the feeding had left more baskets than it did, this would arguably have been less impressive. If we accept the Christian framework, I might even suggest that twelve baskets were left because the near-future saw some need for exactly twelve baskets, be it for a future meal, to convince someone about some point, to be symbolic, or whatnot. (Also note that the number twelve might have had some particular significance in the context. What this would be, I do not know, but I note that there were twelve apostles, that the number twelve occurs fairly often in human conventions, and, likely overlapping, that twelve is the smallest number divisible by two, three, and four.)
I once heard the claim that Hamburg and Liverpool were on the same latitude (specifically, 53 degrees north).
My immediate reaction was “That cannot be true!”. I opened an atlas—and found that the claim was indeed ... true. (At least, with some rounding and within the tolerance of a brief eye-test relative to the drawn latitudes of 50 and 60 degrees north.)
Looking closer, I found my mistake to be the accumulation of three errors:
Liverpool was further to the south within England than on my mental map. (I had it placed towards the northern reaches of the Irish Sea, while it is actually towards the southern reaches.)
Northern Germany was further to the north relative to northern France than on my mental map (and southern England is, of course, slightly north of that). More specifically, Belgium and the Netherlands extended further northwards than I thought, which indirectly affected my view of Germany relative to France (and, thereby, England).
The latitude lines had a much stronger curvature (relative to the implicit left–right lines on the flat page) than I had anticipated. For instance, the line for 50 degrees north approximately touches Cornwall, cuts off a small part of France, and digs down to slightly south of Frankfurt am Main (!) in Germany.
Looking at the underlying causes, this is likely mostly a mixture of problems caused by projecting something spherical onto something flat (which leads to counter-intuitive results, especially when moving from more local maps to larger global ones) and how many maps focus on a comparatively small part of (in this case) Europe.
The former is complicated by how often maps lack lines for latitude (and longitude; note that these lines do not necessarily bring much value on many maps, depending on e.g. the purpose of the map, and that the issue is an indirect effect on intuition). The latter includes maps of only one specific country or only some adjacent parts of two or three neighboring countries.
In the overlap, the orientation and centering of a map become important. The map that I currently view, e.g., is centered on the longitude of 20 degrees east (which, in a now natural choice, is oriented to have north “up” and south “down”). As a result, the latitude of 50 degrees north is parallel to the “left–right” of the page at Krakow in Poland (where, approximately, the two intersect). If I look at the 0 degrees longitude (Greenwich), it has a noticeable tilt relative to the “up–down” of the page. If, however, the page were re-oriented to have the 0 degrees longitude match “up–down”, the perspective would shift considerably, and the lines through Krakow would now have a corresponding tilt. (We would also see Krakow higher on the page than Cornwall, while Cornwall currently is higher than Krakow.) For natural reasons, smaller maps, e.g. Germany-only, Poland-only, whatnot-only, tend to have the longitude that corresponds to “up” somewhere in the middle of the country at hand, and those who view such maps in isolation might get very wrong ideas about how a larger piece of the world might look when projected onto a flat surface. (On a globe, in contrast, all longitudes go “up” and “down” without any longitude being “preferred”, while all latitudes are “left” and “right”.)
The exact behavior of latitudes and longitudes relative to up/down/left/right and similar depends on the exact projection used. I have not investigated how the various maps that I have seen over the years handle this, but other maps can have “straight lines” at the cost of distortion elsewhere, notably distances. If my exposure to different types of maps has been different in different situations, this could further explain a “geographical intuition” gone wrong, including an underestimate of the curvature on one map because of experiences with another.
(Various projections have different strong and weak points, and maps used for different purposes or with different preferences can make radically different choices.)
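The claim that two points on the same latitude can land at different heights on the page can be made concrete with a small computation. The sketch below uses a simple equidistant conic projection, a standard textbook projection chosen purely for illustration; the atlas discussed above may well use a different one, and the exact parameters (standard parallel 50 degrees north, central meridian 20 degrees east) are my own assumptions, not taken from any particular map.

```python
from math import sin, cos, radians

def equidistant_conic(lat_deg, lon_deg, lat0_deg=50.0, lon0_deg=20.0):
    """Project (lat, lon) to page coordinates (x, y), in units of
    Earth radii, using an equidistant conic projection with one
    standard parallel lat0 and central meridian lon0.

    On this projection, latitude circles become concentric arcs
    around the cone's apex, so points on the same latitude curve
    "upwards" on the page as they move away from the central
    meridian -- the effect described in the text."""
    lat, lon = radians(lat_deg), radians(lon_deg)
    lat0, lon0 = radians(lat0_deg), radians(lon0_deg)
    n = sin(lat0)                 # cone constant
    G = cos(lat0) / n + lat0
    rho = G - lat                 # distance from the apex to the point
    rho0 = G - lat0               # distance from the apex to the map origin
    theta = n * (lon - lon0)
    return rho * sin(theta), rho0 - rho * cos(theta)

# Krakow and a point in Cornwall, both at roughly 50 degrees north:
_, y_krakow = equidistant_conic(50.0, 20.0)    # on the central meridian
_, y_cornwall = equidistant_conic(50.0, -5.0)  # 25 degrees further west

# Same latitude, but Cornwall lands visibly higher on the page,
# by roughly 300 km in ground units:
print(y_cornwall > y_krakow)
```

Re-centering the projection on the 0 degrees longitude (changing `lon0_deg`) would reverse the situation, with Krakow then landing higher than Cornwall, just as described above.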
Above, I note how the makers of a multiple-choice quiz could always pick, as one of the false options, a number of baskets even larger than the true number, no matter how large that true number was. This leads me to a very interesting family of errors involving flawed reference frames and, especially, reference frames that are too much based on oneself and/or contain an element of (something analogous to) begging the question.
One sub-family is when I measure the difficulty of a task against how hard I find it and, in a next step, estimate my own ability more highly the harder I found the task. (Provided that I ultimately succeeded.) A trivial example is picking something up: If it feels light, I think nothing of it; if it is so heavy that I have to exert myself, I consider it heavy and come away with the feeling that “I must be strong, because I could pick up something heavy!”. However, the reason that I thought it heavy might have been that I was weak. Likewise, that I thought some other object light could have been because I was strong, in which case I might have been wrong in thinking nothing of it.
A similar idea is illustrated in an episode of “Friends”, where Chandler and Ross arm wrestle. They go at it for a very long time. Someone remarks “They must both be really strong!” and someone else notes “Or really weak.” (in both cases in an approximate paraphrase). Indeed, such a contest, whether long or short, can only tell us something about the relative strength of the two opponents, not of their absolute strength. (And even this only if we assume that other factors are sufficiently equal, e.g. technical mastery.)
With tasks as basic as lifting something, I am usually sufficiently aware of the issue that I can avoid problems; however, this does not necessarily apply to more complicated issues—not least because I rarely stop to think of the matter. (Note that I only very rarely have an explicit thought of “Yay me!”. Instead, it is usually a matter of feeling content in a vaguer and more unconscious manner. The following is a more “explicit” issue and more of a problem.)
Worse, another sub-family involves having an informal reference frame of what is hard, easy, whatnot, based on own ability, and then finding that this reference frame works poorly with others.
For instance, I have had countless situations where I have seen something (e.g. a conclusion based on some premises, the solution to a problem, the existence of a particular risk) as obvious, because it was obvious to me, but where the counterpart has failed to understand the matter unless being led by the hand—for which I rarely have the patience. (To boot, there are potential complications like a failure-to-understand going undiscovered.) In the office, especially in a weaker team, I have often come away feeling like a Cassandra.
For instance, in my dealings with various civil servants and customer-service workers, it often seems to me that the counterpart has problems with comprehending a perfectly ordinary text or understanding even the most basic reasoning. Moreover, these groups do, on average, have far lower IQs, education levels, whatnot, than even the “weaker teams” from the previous paragraph. Likewise, if I look back at my school years, the proportion of students that had problems in the recurring tests of reading comprehension was considerable—and chances are that this proportion overlapped strongly with future civil servants and whatnots. (Ditto, more generally, those who cause occasional headlines about how many students read well behind their “grade level”, otherwise struggle in school, and similar.)
I have spent most of my “team time” with fellow software developers. While the level of software developers varies widely, and while there are some genuinely dumb specimens among them, it is not unusual for a perceived-as-dumb-by-the-rest member of a team to be well above the population average, to fare much better by the standards of other groups in the same company, to have grown up as the “clever one” among his siblings or close childhood friends, or similar. Make the same guy a civil servant or customer-service worker and he might compare favorably to his new colleagues.
An interesting specific example in the overlap is a reading of Hemingway’s “Hills Like White Elephants” in an adult (!) reading group. (Specifically, in the context of an employer-organized English course.) We might have been half a dozen, including a native English speaker, who served as the teacher. Not only was I the only one who realized that the topic of the in-story discussion was an abortion, but the others, teacher included, outright denied the possibility. (Along the lines of “I don’t know what it is about, but it is not abortion”.)
Even the teacher aside, the level of prior exposure to English was sufficiently high and the text, in terms of e.g. difficult words, was sufficiently easy that I do not see “English as a second language” as a plausible explanation—it seemed more a matter of an inability to think properly about what was read. Unlike with, say, a plain business letter that is not understood, there are some textual difficulties on another level, as the text deliberately brings an idea across in an indirect manner; however, these difficulties are by no means so large that all-but-one of a group of highly educated persons should fail at deciphering it. To boot, they did not just fail to arrive at the right interpretation on their own—they outright denied its correctness when it was proposed by me.
(I also suspect that deciphering the topic is a prerequisite for even attempting to understand other aspects of the story, like character motivations and what decisions might ultimately be made, which would make it likely that the author considered the deciphering sufficiently easy. However, I have not re-encountered the story since then, around 2000, and my memories are too vague to say for certain.)
For my own part, a conclusion is that I must pay greater attention to what others might or might not understand (etc.), but this is very hard for me to keep in mind (a potential topic for this page in its own right).
In a bigger picture, a greater amount of ability testing might be needed, and this especially for often neglected positions, e.g. customer service (often hired to be as cheap as possible) and civil servants (often hired among those who could not find more challenging/rewarding/whatnot career paths, often promoted based on years of employment instead of competence, often virtually unfirable, etc.). Note how those in such neglected positions can do their employers quite a bit of harm, e.g. through the need to escalate solvable problems and through damaging a public reputation; and can do immense harm to the customers/citizens/whatnot, through enormous wastes of time, delays in performance, incorrect handling of complaints and applications, and so on.
One aspect of the issue is what causes different individuals to have different standards. Often, it is a matter of the ability to think and reason, to solve problems, to draw conclusions, whatnot, on a reasonably generic level. However, other factors can play in (notably, domain-relevant knowledge and experiences) and it is not uncommon for the same person to, e.g., see something as obvious after a few years that he did not see as obvious when a beginner. Forgetting that different causes can exist is potentially dangerous, especially, when it manifests in authority arguments (“I have ten years of experience! I am right!”) that fail to consider what difference a greater or lesser ability to actually think and reason (etc.) can make.
There is even some scope for competing ideas of what is obvious, when different persons have different priorities, focus on different aspects of an issue, or similar. (However, in my experiences, such competing ideas are much more likely to go back to one or more of the parties simply being wrong.)
On some few occasions, I have failed to “officially” reach a (correct) conclusion that I had reached “unofficially”, because I was afraid of taking a leap. The two most notable examples (both from my school years):
At a very early stage in learning German, I was confronted with the word “Kommunismus” without translation and textual context. I was virtually certain that this corresponded to the Swedish word “kommunism” (and, of course, the English “communism”), but I was stuck in doubts about the unfamiliar “-us” ending—and chickened out to the point that I redundantly asked the teacher for assistance.
From a more adult perspective, of course, I am well aware both that “false friends” exist and that coincidences happen. Here, however, little true room for misunderstanding was present. If in doubt, school materials tend to point out such issues when they do occur.
The example also illustrates how differences between related languages tend to have some regularity (and how borrowed words can show a similar regularity, even absent that relatedness). Comparing German to Swedish, nouns are capitalized in a blanket manner in the former (which I knew) and German tends to use “-ismus” where Swedish uses “-ism” (which I did not know). Comparing English to Swedish, there are a great many corresponding words that are written with “c” in English but “k” in Swedish. Some originally Greek words are particularly interesting, e.g. “centaur” (English), “kentaur” (Swedish), “Zentaur” (German), where we also have a difference in pronunciation (“s”, “k”, and “ts”, respectively).
When first doing vector calculations in physics, I was faced with various force vectors, where it was “obvious” that we were supposed to use the same trigonometric relations that apply to physical distances. I was too reluctant to take the leap and, again, chickened out.
However, here I had more justification, because the simple analogy between force and distance is not a mathematical proof, and a more stringent treatment of the matter in school would have been welcome. Here an aspect of “meta-reasoning” is even more important than in the previous item: if the “right” answer had not been as simple as applying trigonometry, the teacher would have been bound to discuss the matter in advance.
Had I asked the teacher for some type of proof of the connection, I would have been outright correct in my stance. Unfortunately, I did not.
(In the other direction, such “meta-reasoning” can be dangerous in that the student could be led to give an answer based on insight into the way that a teacher thinks, a textbook is written, or similar, instead of insight into the actual matter at hand.)
However, on more occasions, I have avoided making an error through not taking an incorrect leap. There are even more cases where I did not take a particular leap and, unlike above, have no way of telling whether I was right or wrong in the absence of the leap. (Including cases of what someone did or did not mean when we talked—an area made more complicated by a leap not necessarily resulting in an action, e.g. because the action could be rude or come with some risk, and by the action being necessary to learn the truth.)
An interesting off-topic question is when and whether taking a leap can be a good or a bad idea more generally, e.g. when the absence of the leap is a matter of habit or convention, rather than e.g. lack of courage. For instance, I moved to using a hair clipper/trimmer/whatnot to cut my own hair long ago. For years, I stuck with cutting my hair shortish, as opposed to as-short-as-the-machine-allows, because this was closer to the results of a “traditional” haircut. Then I bought a new clipper and the, maybe, second time that I used it, I forgot to take the differences to the predecessor into account—and accidentally went as-short-as-the-machine-allows with the first swipe. I saw myself forced to continue the haircut in that manner, lest I look ridiculous, and was sufficiently happy with the results that I have deliberately gone with as-short-as-the-machine-allows for all later haircuts. (Both machines had a slider to regulate how much hair was left. With the first machine, the slider regulated the positions of the blades and it went far enough that I could get that shortish hair without what appears to be called a “guard”—those things that look halfway between a comb and a rake. With the second, the slider regulated the position of the guard, had no effect when no guard was present, and, then, always cut as-short-as-the-machine-allows without a guard. That second time around, I went guard-less by force of habit.)
It is now 2024-12-21 and I am doing a bit of reading about Christmas. Within a few minutes, I was reminded of a child’s mistake and discovered a previously unnoticed adult mistake, both relating to German and/or German-speaking countries:
When I was a kid, I encountered (an unillustrated version of) the tale of “The Nutcracker and the Mouse King” or one of its derivatives. The nutcracker in this German story is one of the approximately human-shaped nutcrackers so popular in Germany, but much rarer in e.g. Sweden. Indeed, at the time, the only nutcrackers that I knew consisted of two metal levers attached by a hinge. This made the story very surreal.
More interesting is the adult example: I just learned that “Käthe Wohlfahrt” was not a charity named “Käthe” but a commercial business named after a woman by the name of “Käthe Wohlfahrt” (who died as recently as 2018). This business specializes in Christmas decorations and whatnots, and its name is very often encountered at e.g. “Christmas markets” (“Weihnachtsmärkte”).
When I first became aware of the name (likely, in 1997, my first year in Germany), my German was weak and I sometimes jumped to incorrect conclusions—as I did here. (Note the interesting reversal of the above issues around the word “Kommunismus”.) In particular, we have the German word “Wohltätigkeit” (“charity”) and both the Swedish “välfärd” and the English “welfare”, with their strong associations with a “welfare state” and related phenomena. As, further, Christmas is widely considered a time for charity and is a time when charities are unusually active, I assumed that “Wohlfahrt” was another name for or related to charity and that the business was a matter of sales for charity—buy one of those German-style nutcrackers (or some other product) from the right booth at the Christmas market and see a portion of the proceeds go to the needy.
This mistake was quite understandable in 1997. Less understandable is that it took me some 27 (!) years to discover my error. I can only speculate about the reasons, but I suspect that I simply never reflected over the name in light of later and improved knowledge of German.
The failure to reflect was likely aided by how I find Christmas markets fairly uninteresting and have never bought much in terms of Christmas decorations from any source.
A twist is that “Wohlfahrt” and “välfärd”/“welfare” are not false friends, although “Wohlfahrt” might be a bit rarer. The meaning is approximately the same, and I would have been better off assuming that it was just that (as opposed to something merely related).
A similar error from 1997 or 1998 involves the architect and artist Hundertwasser, who was previously unknown to me: Darmstadt, where I studied at the time, held a long-lasting and well-advertised exhibition around his works. When first encountering trams with a big display of the name “Hundertwasser” on their sides, I assumed that it was some event or hundredth (“Hundert” = “hundred”) anniversary that carried that name. (This error I discovered in a much more timely manner, within days or, on the outside, weeks. To boot, “Hundertwasser” was a taken name: He was born with “Stowasser”—a name far less likely to cause confusion, even among the newly immigrated.)
When I was very young, my maternal grandmother, in my company, brought a toy pram of some sort from her house to my mother’s apartment—which brought on a very illustrative case of faulty reasoning:
Due to the time passed, I am very uncertain about the details, including why. (In particular, I do not recall whatever happened to the pram afterwards, but it had once been a toy of my mother’s.) What is important, however, is that this was a pram made for a little girl to push a doll, as opposed to for a grown woman to push a baby.
I am uncertain whether my below mistake was genuine or an excuse to, say, avoid the risk of being spotted by someone who might recognize me while I was pushing a toy pram. However, this applies to some degree to adults who err, too—it might be that they merely pretend to believe something in order to fool the unwary. The ability to spot the error is then required in the intended victims instead.
While I largely speak of just a single pro and a single con below, this is strictly for convenience of illustration. That any given action/decision/whatnot has only one of each is unusual. Likewise, note that claims around pros and countering cons apply equally, after trivial modifications, to cons and countering pros.
While my grandmother was small, the handle was still uncomfortably low, and she wanted me to push the pram for her, reasoning that I was the smaller and that it would be easier for me than for her. I protested that I had shorter arms, which should even things out.
Here we had one pro (my being smaller) vs. one con (my having shorter arms) that, of course, did not even things out. Arms are shorter than the overall body and, assuming the same body proportions, a given absolute difference in body height corresponds to a smaller absolute difference in arm length—the comfortable “gripping height” thus drops faster with a smaller pusher than shorter arms can compensate for. This even when we adjust for the distance between shoulders and top of head. (Which makes for a more relevant measure, but which I did not consider at the time—an example of how important it can be to pick the right sets of pros and cons for a comparison.)
The underlying error is a failure to quantify or, at a minimum, to make a larger-than/equal-to/smaller-than decision, in that if a pro might have an effect of X, we cannot assume that a con will have an effect of Y=-X (leaving a net of X + Y = 0). Instead, we have to get at least some idea of the size of X and Y. Above, e.g., we might have had an X of 20 cm and a Y of -10 cm, for a net of 10 cm—not 0 cm.
(Here I used a factor of 1/2 strictly for easy calculations. The true value is likely to be similar-but-different. A more realistic comparison would also have to consider differences in body proportions, height of shoes, and similar.)
Of course, in so simple a case as above, the best way is to simply measure the net result (here, to see whose hands land at what height) and/or to make a practical test (here, to see who can hold the handle of the pram with what degree of comfort). In more general cases, this will not necessarily be possible—especially, when an estimate of future results is necessary and no direct measurement can be available at the time of estimate.
Such a failure to quantify is very common among adults, including the likes of politicians and journalists. (And it does appear to be more common with women, well in line with the idea that they are comparatively weaker in quantification than men; however, my own impressions might be too subjective or anecdotal.)
A particularly interesting family of variations is the idea of metaphorically making up on the roundabouts what is lost on the swings or vice versa. (Cf. the associated saying.) The core meaning is that a business loss somewhere is made up for by a gain somewhere else, but an extended use can include many other cases, say, a tax increase allegedly being offset by a handout. (Of course, if the handout were to perfectly offset the tax increase, why would anyone bother with the increase in the first place? In reality, the handout is often a politician’s way to keep protests down when the tax is increased and/or to give some voter group preferential treatment over the rest.) Not only must the losses on the swings and the gains on the roundabouts be quantified for a fair comparison, including to decide whether the losses are tolerable in light of the gains, but great care must be taken to not consider avoidable losses tolerable because they are outweighed by gains.
To consider literal swings and roundabouts: If the swings bring in customers who (a) also use the roundabouts to such an extent that the loss is more than offset and (b) would go elsewhere without the swings, the swings can, indirectly, bring a net value and it might be a good business decision to keep them. If either (a) or (b) is untrue, it might be better business to skip the swings and just enjoy the profits from the roundabouts.
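The two conditions for keeping the swings can be put into a toy decision function. All names and numbers below are illustrative assumptions, not data; the point is merely that both the quantified offset (a) and the counterfactual (b) must hold.

```python
def keep_swings(swing_loss, extra_roundabout_profit, customers_would_leave):
    """Keep the swings only if (a) the extra roundabout profit that
    they attract more than offsets their own loss and (b) those
    customers would otherwise go elsewhere."""
    return customers_would_leave and extra_roundabout_profit > swing_loss

print(keep_swings(100, 150, True))   # loss more than offset: keep them
print(keep_swings(100, 150, False))  # same profit either way: drop them
print(keep_swings(100, 80, True))    # a net loss: drop them
```

Note that the first call succeeds only because both conditions are quantified and checked; assuming that gains "probably" cancel losses would short-circuit exactly the comparison the text calls for.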
Another variation is to take a situation where apples and oranges are compared and to, so to speak, take away an orange in exchange for an apple and to expect/hope/demand/whatnot that the other party will be content. This includes many situations involving parents and children, say, in “No, you may not watch that scary film on TV, but I will give you some ice cream instead!” scenarios. These are tricky in so far as a quantification in a strict sense is not always possible, but something akin to it often is, e.g. in that the child above knows whether ice cream does or does not outweigh missing the film. (Also note “utility functions” in Economics. I stress that I make no statement about what methods of childrearing are recommendable—that is a different topic entirely.) They also have the benefit of demonstrating that quantification on behalf of someone else is tricky, because the preferences of different persons can be very different—one of the reasons why it is important to allow each individual to make decisions for himself to as high a degree as practically possible (and, with children, within what maturity/judgment/whatnot makes allowable).
The above is not to be confused with someone merely pointing out that there is a con and that this con must be considered. The key is the (almost always) faulty assumption that pro and con cancel each other, that, so to speak, Y = -X.
Pointing out such cons is very legitimate, even if one cannot or does not quantify them, because the failure to consider a con at all is a quite common error in the other direction, e.g. among politicians who do not understand game theory and that, say, income from a new or increased tax might be partially offset by changes in behavior. Instead of mistakenly taking X + Y = X - X = 0 to be the net result, someone equally mistakenly just takes X to be the net result, while ignoring Y.
There are at least two cases of naivety regarding business/economics/whatnot from my childhood that can be somewhat enlightening, but which I would view as off-topic. I include them mostly because of how many others, including many fully adult and highly educated, still seem to have a similar naivety.
In both cases, chances are that longer thought would reveal even more reasons for why my early positions were naive.
Childhood examples from other areas are plentiful and I try to limit myself to what is on-topic (in the main text) and what is truly interesting or important for other reasons (as in this excursion).
For instance, a personally interesting, but off-topic and not generally interesting/important, example is how I, as a very young child, denied the possibility that “dog” could be English for the Swedish “hund”—after all, the two words do not have the same number of letters. This, and the implication that I saw translation as a matter of transliteration, might be interesting to an expert on childhood development or a psycholinguist, for instance, but is irrelevant to my current purposes. It is, in particular, not an error of thought in the manner exemplified by the childhood quiz above—I simply had no knowledge of any language but Swedish. (A charge of “jumped to conclusions” holds, but lies on a different dimension and is unremarkable in a young child. Sufficient or sufficiently deep thought might have revealed this jump to be preposterous, but it would have required much more than for the quiz, would have been less reasonable to expect in a child, and might still have failed for want of a sufficiently developed worldview, even had the child had enough ability to think.)
When I was very young, I had a brilliant idea from reading my “Donald Duck” comics—I would invent a machine to create money so that the likes of the Beagle Boys would not have to steal!
I was very disappointed and surprised when my mother informed me that this would be illegal, but from an adult perspective:
Such machines obviously already exist: money does not grow on trees and while making money manually might once have been an option, it would be impossible to keep up even with the quantities in play back then (1980, give or take).
The issue, then, is not to have the right machine(s), but to what degree they are used. There might be situations where the government cannot keep up with printing money, but these are rare and the opposite problem has been far more common wherever the connection between money and value has been weak (minting more gold coins requires more gold; printing more bank notes, more paper and ink; increasing the value of a digital account, pressing a few buttons).
Chances are that I had no awareness of non-physical money at the time, but such money obviously already existed and has increasingly been crowding out physical money. In due time, physical money might not even exist any longer. (The view of a future reader might be extremely interesting.)
When it comes to governmental (central bank, whatnot) policies, “printing money” and its variations are usually best viewed as metaphors: Physical money is still printed, but it is more a matter of having enough cash in circulation for cash-based transactions than of increasing the overall money supply. Actual increases of the money supply, in turn, tend to take other routes these days.
Unauthorized printing could lead to enormous problems, including a disastrous loss of value/confidence and the risk that too many would print for their own gain (instead of performing honest work). (Not to mention some practical complications like how to handle serial numbers.)
While I cannot rule out that I had some idea of printing for my own benefit, I have no recollection of this. My discussion with my mother definitely centered on helping others.
Generally, incentives would be perverted, even if money handouts only came from a small group of entities, say, me and the government. (As opposed to the above scenario of large scale printing for one’s own benefit.)
If the Beagle Boys get money from me without having to work, why should so many hardworking citizens still be stuck in offices, in factories, on farms, whatnot? Why should they not simply throw up their hands, refuse work, and insist on a handout? Alternatively, why not claim “I am a crook, too! Give me my handout so that I can go straight!”? (Some might continue to work out of passion, like Gyro Gearloose, who was a great inspiration for my wish to be an inventor. These, however, are bound to have been the exceptions.)
Then we have the question whether the Beagle Boys actually had to steal (even as things were)—dumber and/or physically weaker characters were regularly shown as being gainfully employed. Why, then, should they not work?
The answer is that they preferred a life of crime to honest work, making a reward through newly printed handouts a great unfairness. Worse, I strongly suspect that they would have continued a life in crime for the sheer heck of it, even had I been there with my money machine to give them handouts. (The first certainly applies to many real-life criminals; the second, likely, to at least some.)
(Executive summary: if a bank or similar entity does not receive interest, why should it lend money at all?)
Some years later, I was greatly surprised to hear that someone who borrowed money from a bank was supposed to pay back more than he had borrowed—which seemed very unfair to me.
In practical terms, we have interest, but the actual modalities might have been different. In my vague memory of the event, it was a case of borrowing some amount and paying back a greater amount at some later date; however, this could be the result of a faulty memory. The modalities and formalities do not matter very much, however; what matters is the principle of more money coming in than went out. A Muslim bank, e.g., might be forbidden to charge interest in the strict sense, but the principle still holds, even be it through a workaround.
(The source was, I believe, an episode of “The Little House on the Prairie”.)
Again, from an adult perspective:
Lending money comes with risks, including that the debtor cannot pay back the full amount in a reasonable time frame (or at all) for lack of income, that the debtor dies, that the debtor is dishonest, that unexpected developments gut the value of the currency, and that a cash-flow problem of the bank’s own causes it to fail (even should all borrowers be able to pay back on schedule and in full; however, “bad debt” is probably the more likely cause of failure).
Interest compensates for these risks, e.g. so that the default of one debtor is covered by the interest paid by other debtors. This removes a great disincentive to lend money and reduces the risk that a bank outright fails—which could have disastrous consequences for others.
Even short of a gutted currency, continuous inflation can hollow out the value of a certain sum. What if the bank lends an amount with a purchasing power indexed at 100 today and, a year later, receives back the same amount at a purchasing power of 98?
Here, interest does not just remove a great disincentive but also a great unfairness.
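The inflation point can be illustrated with a minimal sketch (the principal and the 2% inflation rate are invented numbers, roughly matching the 100-to-98 example above):

```python
# Effect of inflation on a loan repaid without interest, and the nominal
# interest rate needed merely to preserve purchasing power (ignoring
# risk, costs, and profit, which the other items cover).
principal = 1000.0
inflation = 0.02  # assumed 2% annual inflation

# Real value of the repaid principal after one year, with no interest:
real_value = principal / (1 + inflation)

# Break-even nominal rate: charging exactly the inflation rate keeps the
# repayment's purchasing power constant.
repaid_nominal = principal * (1 + inflation)
real_value_with_interest = repaid_nominal / (1 + inflation)

print(round(real_value, 2))                # ~980.39: the bank loses value
print(round(real_value_with_interest, 2))  # 1000.0: value preserved
```

Any interest below the inflation rate thus still amounts to a real-terms loss for the lender; compensation for risk, costs, and profit comes on top of this break-even rate.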
Lending money comes with an opportunity cost, most notably that the money cannot be invested in other ways. By lending money to someone, the bank loses the profit that might have resulted from such other investment.
Again, interest removes a great disincentive (and, depending on point of view, unfairness).
A bank has costs of various kinds to cover and would usually wish for a profit. Staff costs money, locations cost money, safes cost money, etc. Possibly most importantly: if depositors wish for interest on their deposits, this costs money. What is the most natural way for the bank to get this money? Lending at an interest. (Or, per the previous item, investments of other types that stand in competition with lending.)
Here, interest creates an incentive to lend, and, maybe, one without which the bank could not exist in the long term.
By no means do I deny that many banks use dishonest and/or unfair practices. This is a problem, but it is not a problem that inherently has anything to do with interest on loans or legitimate concerns (e.g. risks and opportunity costs).
A constant issue with human interpretation is that we are necessarily limited by what observations, facts, whatnot, we actually can draw upon at the time of interpretation. (For simplicity, I will just speak of “observation[s]” below.) Yes, we can speculate, inter-/extrapolate, use deduction, whatnot, beyond these observations; however, that too is an interpretation of sorts—implying that we are still limited by observations. To boot, these observations can themselves be misleading, e.g. because we draw on the memory of some past observation or because some observation (say, a mirage) is not what it seems.
Here, I ignore the issue of what might conceivably be legitimately deducible without any observations—a point where philosophers and other thinkers have had radically different opinions over the years. (Including for reasons like uncertainty over even what deductive rules can be considered known on what grounds.) While I do not entirely rule out the existence, I belong to the skeptics and suspect that apparent cases simply go back to a lack of awareness of what observations played in (and/or of other complicating factors).
I equally ignore complications like “maybe we live in the Matrix and all our observations are false”.
While this is bad enough, the observations available to us usually move close to the surface—especially, when humans are concerned. We often have a very incomplete view of what events and circumstances are behind a certain behavior and we almost never have a true insight into the internal processes of someone else. Instead, we rely on what might be called surface signs.
The unsurprising result is that we often misinterpret. It might be argued that even a correct interpretation is correct only to a very rough approximation.
A particularly telling own example of complete misinterpretation is my reaction to Matthew Perry’s (“Chandler Bing’s”) drastic weight loss during the Chandler–Monica proposal episodes that ended one season of “Friends” and began the next. During my first watching, I assumed that Perry had been sufficiently troubled by his unseemly weight gain that he had used the season break to deliberately lose weight (maybe, by going to a “weight-loss clinic” or a “fat camp”) and get back to the sportier impression of earlier years. My main reaction was a “good for him” and I was actually a little sad to see him rapidly put on weight again as the season progressed. My one original reservation was that he seemed to have overdone it, being a little too thin and somehow having (temporarily) screwed up his voice, but I simply viewed this as over-ambition and lamented that the beauty standards of TV were too exacting, making him lose fat too fast, with too drastic methods, and/or in too large a quantity, moving beyond “lost unhealthy fat” to “lost healthy fat”.
This scenario of deliberate weight loss did match the observations on screen and it did match the aforementioned observation about beauty standards. However, it was also very far from the truth. He did go into professional care, but it was for reasons like pancreatitis and drug abuse. The weight loss was for the wrong reasons and a return of weight might have been an outright good sign. (Up to some limit of weight.)
Here, my misinterpretation hinged on what I did not know and (at the time) likely could not have known, because these issues were likely not yet public knowledge. (Someone more acquainted with drug addicts and their issues might or might not have been in a better position, but that, too, involves an exposure that I had not had and observations that I had never made for want of that exposure.)
An important lesson from such examples is that some care and caution are needed, and that having a fuller picture can change interpretations drastically. (A particularly dangerous, and extremely common, mistake is to only listen to one party to a conflict before taking sides in that conflict.) Correspondingly, it is better to limit interpretation to what is sufficiently clear cut and to leave the rest as an unknown or to use a metaphorical “confidence interval”.
However, as a counterpoint, the surface signs also often give the right impression and often are what (legitimately) matters most to others—especially, when misbehaviors are involved. For instance, if someone typically shows up late to work, there might be a legitimate and exculpating reason or set of reasons, but a problem rooted wholly or partially in the person at hand is more likely. Likewise, what legitimately matters most to others is the “late to work” part, including the disruptions, additional costs, additional efforts for co-workers, whatnot, that this might cause. (That, as in the side-note below, a train is often late is not their problem.) New information might change the interpretation of why he is so often late, but not the fact that he is late, and only very rarely will the “why” be exculpating.
For an example of partial culpability, consider someone who travels by train:
If the train is often late, the original blame falls on the train company (or some other entity, with a corresponding partial culpability for the train company for failing to adapt). However, it is up to the traveller to adapt to ensure that he meets his obligations towards others. Depending on circumstances, he might solve the problem by, say, taking an earlier train, switching to car travel, or coming to some special arrangement with his employer. If he does not, he is culpable.
To just claim “my train was late” might be acceptable if it causes a delay of work on the odd occasion—but not when it happens several times a week.
When it comes to an exculpating example, I am drawing a blank, because there are only so many independent and unpredictable reasons that can accumulate while being statistically plausible. (E.g. that the train is late one day, an alarm clock malfunctioned another, a kid had a medical emergency yet another, etc. This might, with enough bad luck, explain a single week. It will not realistically and plausibly explain several consecutive weeks.) However, there might well be other cases where exculpation is possible, notably when someone is late for or misses a single-but-very-important appearance. (A potential example is my own flying mishap when attempting to go to Munich.)
What should be considered a mistake, an error, a “fell for it”, or similar, is sometimes clear, sometimes debatable.
A particularly interesting example happened to me around 2000, in the early days of the DVD era: The German computer magazine C’t, originally unbeknownst to me, had a yearly joke article for the 1st of April. For my first encounter, that joke was a fictitious new DVD technology by which the faces of famous actors would be stored in the DVD players, the DVDs themselves would leave the faces out, and the DVD players would, by some instruction, know what face to provide from storage—and look how much DVD space we will save! (With great reservations for the exact details: The trigger for the writing is the approaching 2025-04-01, and my memory is faded. Also see a side-note below.)
My spontaneous reaction was that this was utterly idiotic and impractical—but it never occurred to me that it could be a joke. Over the next year or two, I noted with satisfaction that I had never heard one more beep about this utterly idiotic and impractical idea—reality was proving me right! (I thought.) Ultimately, I forgot all about the article. Then, years later, I found a retrospective article in C’t, dealing with past April fools’ jokes, which listed that very article.
Now, did I fall for this joke? In the sense that I took the article seriously and failed to recognize it as an April fools’ joke, I did. However, I did not fall for it in the sense that I believed the idea to be even remotely viable. (While C’t has a history of unusually high quality, there was still a matter of journalists to consider, who might well have failed to see obvious objections—and of business graduates who could have some “brilliant” idea for a technology that they might be pushing despite its severe shortcomings.)
A key question might be whether I had any reason to suspect a joke, instead of just incompetence. Well, clearly, the vast majority of apparent journalistic blunders (and, mutatis mutandis, other cases of incompetence) actually are blunders or, sadly, manipulative disinformation. Outside of April fools’, and some magazines specializing in humor/satire/whatnot, articles are intended to be taken at face value and it would be both ridiculous and counterproductive to even contemplate a joke under normal circumstances. C’t is not a humor specialist, which then leaves the question of April fools’. In a daily paper, an April fools’ article would, for obvious reasons, virtually always appear on the 1st of April. C’t was, at the time, a bi-weekly magazine (a monthly one, just a few years earlier), and the 1st presumably fell at some point during the corresponding two weeks. I might then have read the article quite a few days before or after the 1st. At an extreme, there might even have been a scenario of “more than two weeks after”, if the 1st was early in the two weeks, I was late to buy the issue, and it took me some time to get around to the article at hand.
In a bigger picture, I advise against April fools’ jokes in papers and magazines of a normally serious character. The consequences if someone does fall for the joke can be both unpredictable and dire, and that “if” often depends on awareness of the day, which, in turn, often depends less on the person at hand and more on his surroundings. It might, e.g., be that the one adult has children who play him an April fools’ joke before he reads the paper at breakfast, while the other does not. Clearly, the former is more likely to have an awareness of April fools’ when reading than the latter. (Such awareness is all the more important when the contents of an article are less obviously idiotic to the reader than the above example was to me, be it because it is objectively less idiotic or because recognizing the idiocy requires a prior knowledge or understanding that the reader does not have.)
There is also the issue of reality being so absurd that the jokes can fail through being too realistic—especially, where politics is concerned. A famous-in-Germany case involves an April fools’ article (probably, also by C’t) dealing with the suggestion of an “Internet tax”—and the government promptly inquiring who had leaked the plans for an actual Internet tax. (Fortunately, such a tax has not yet manifested.) Unrelated to April fools’, I note that I first heard of 9/11 when on the telephone with a colleague, who suddenly spoke of airliners crashing into the World Trade Center and the Pentagon—and I genuinely thought that he was making an extremely tasteless joke, because this sounded too absurd to be true. (It was absurd, virtually incomprehensible, but even the absurd and incomprehensible is sometimes true.)
Then again, an absurd-seeming claim does not necessarily have either a joking or an absurd explanation. For instance, a few years back, I talked with a colleague who mentioned that some Egyptian ruler (likely, either Sadat or Nasser) had kept pet walruses somewhere along the Nile. I found this very odd and made inquiries, e.g. relating to how the issue of walruses vs. temperature and salt/fresh water was handled, but received answers that did not satisfy my skepticism. (With sufficient special arrangements, a walrus along the Nile would be no more absurd than a walrus in a German zoo, but his explanations did not seem plausible for a “walrus arrangement”.) Then he fell silent and something like “Eh, ahem, ‘hippos’, I meant ‘hippos’.” followed. Presumably, he had just had a brain fart and picked the wrong word in an otherwise perfectly plausible discussion. In fact, hippos and the Nile almost proverbially go together—and I would likely not have raised any objections, had he originally used “hippos”. (Sadly, however, the Egyptian portions of the Nile have grown hippo-less, which might have been why he broached the topic in the first place.)
To explain why the suggestions of the article were idiotic is tricky, because I remember virtually nothing beyond the very general idea and my negative reaction. For instance, above I write that the faces would be stored in the DVD player. If so, we have issues like how to keep the player sufficiently up-to-date over time—but, maybe, I misremember and the faces were to be stored on the DVD, just separately from the actual movie. If I do misremember, we have issues like smaller savings relative to storing them in the DVD player.
A critical point, however, was likely hardware and the computational effort needed to render faces in real time. Not only do we talk about hardware from around 2000, far weaker than today, but about specifically DVD-player hardware, which, in a typical implementation, has or had very little generic computing power and a great specialization on specific tasks relating to decryption, decoding, and whatever else is needed to turn the information on the DVD into something that (at the time) a TV could handle as input. It is a bit like taking a cellphone from around 2000 and expecting it to handle typical smartphone tasks.
Another is whether the alleged storage savings would be worth the trouble. It is true that computer games have done similar things with automatic and real-time rendering based on models for a long time. (But note both how long it took to reach current quality standards and that computer games, in 2025, are still short of perfection.) DVDs and computer games, however, have different sets of problems to solve:
A DVD movie goes through a fixed set of scenes, where any and all faces are shown in the same manner during every watching. (The face of Bruce Willis at 15 minutes and 3 seconds into a particular movie is displayed from the same angle, in the same lighting, etc., whether it is the first or fifth watching, whether I am watching or Roger Ebert. There might be some slight differences due to viewing equipment, e.g., that the one TV or computer monitor renders colors a little differently than the other, but that is an issue of a very different type.) Even when movies rely strongly on computer graphics, up to and including creating fake versions of real actors, what goes into, say, a DVD or a .mkv file is (still, in 2025) a one-time generation that the player can treat exactly like the rest of the film. To boot, faces of famous actors make up a comparatively small portion of most films and a reasonable expectation in savings (had the idea been viable) would be correspondingly small—and dwarfed by the expected (2000) and realized (2025) increases in available storage space over time.
In contrast, many modern computer games go through potentially infinitely varying circumstances of display, implying that faces (and a great many other things) either have to be generated highly dynamically or be subject to compromises in realism and variability.
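The smallness of the potential savings can be put in back-of-envelope terms (all numbers invented; the point is only the order of magnitude):

```python
# Generous, invented assumptions about how much of a film's video data
# could even in principle be replaced by player-side face rendering.
screen_time_fraction = 0.30  # fraction of frames showing such a face
screen_area_fraction = 0.10  # fraction of each such frame the face covers

# Rough upper bound on the fraction of video data saved (ignoring that
# compressed size is not strictly proportional to screen area):
savings_fraction = screen_time_fraction * screen_area_fraction

print(round(savings_fraction, 2))  # 0.03, i.e. at most a few percent
```

Even under these generous assumptions, the savings would be a few percent of the disc, while storage capacities were growing by orders of magnitude.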
As with the above excursion on various money and whatnot issues, the value of my own past errors is not limited to personal insight or, say, an increased ability to avoid repetitions of past mistakes. On the contrary, they can often allow insight into potential errors by others, allow others to learn from my mistakes, and similar.
This section was triggered by a re-watching of “Jesus of Nazareth”, a few days before Easter 2025. Hence the specific angle. (Notably, this TV series is one of the works most often accused of “whitewashing”—especially, with an eye to the casting of a blue-eyed Brit, Robert Powell, as Jesus.)
Much of the below is written from a broadly Christian perspective, because the same discussion from an Atheist perspective (matching my own) would be a dead-end for the purposes of this section, which relate more to reasoning than to religion.
Consider e.g. Jesus and power again: There have been debates about what race Jesus might have belonged to, be it with the implication that some race would be worthier than another or (as, maybe, with the idea of a Black Jesus) that Jesus would be a particular boon or accomplishment for some specific race. Even leaving aside the question of the true nature of Jesus (e.g. whether he existed, had supernatural powers, was a newly created entity or a mere incarnation of something eternally existing), however, such thoughts are pointless—unless we assume strong limits on omnipotence (to the point that the term is misleading; I am open to such limits, but they do not reflect the typical Christian take, in my impression). It could, e.g., not just be argued that a particularly worthy race was chosen—but, equally, a particularly unworthy one. This might be because the choice made a particular point, because the race at hand had a particular need of salvation, or similar. Such a choice would certainly have been compatible with much of the New Testament (note e.g. portions of Matthew 9). Certainly, much of the Old Testament shows the chosen people behaving in a very unworthy-of-being-chosen manner, up to and including outright defiance of God’s commands and commandments.
While his race is ultimately unknowable, there is (a) nothing to be gained or lost for a particular race by having Jesus associated with it, (b) no particular reason to suspect that Jesus would have deviated from the local Jews in terms of race. (The last, the more so as there is no mention of Jesus looking un-Jewish in the New Testament and as a Jew would be most plausible in light of the Old Testament.)
To look at a more theological question, is there reason to believe that Mary was particularly worthy to be his mother? Maybe, but from a strict “power of X” perspective it is hard to make a case. If we, e.g., postulate that Mary had to be particularly pure in her own nature, we also implicitly put a limit on the divine power. The truly omnipotent would have been perfectly able to use a truly disgraceful woman for the same purposes.
Even someone omnipotent might, of course, have reasons of a different sort, e.g. in that a particular type of mother might have been a better “PR move” than another. (Just making the people believe in Jesus by divine power might have been possible, but would have clashed with the idea of free will.)
Likewise, the one mother might have made for the better personal development of the child than the other. (One of my own pet ideas to explain, on a strictly hypothetical basis, the change in tack from the Old to the New Testament is that there were limits on omniscience and that Jesus, in part, served to gain an insider’s view of humanity, which brought about that change of tack. This might require a very strong human aspect and a strongly human upbringing. The omnipotent could, of course, otherwise just have put a fully adult Jesus on the Earth and might have no need to go through with crucifixion and whatnot.)
It is also unlikely that, say, a whore-for-a-mother would have fazed God and/or Jesus, because (a) being fazed would also have put a limit of sorts on them, as if they were bound by human standards, (b) the Jesus of the New Testament shows a very large tolerance of whores, tax collectors, and whatnots.
However, a possibility is that the incarnation of Jesus was contingent on someone of sufficient purity for other reasons, notably, as a demonstration of humanity reaching some minimum-bar of worthiness before it was granted that incarnation. (Note a recurring Old Testament theme of worthy individuals being sufficient to grant favors to unworthy masses, as with Sodom and the hypothetical ten righteous men, or how someone worthy might receive special treatment, as with Noah and the flood.)
Indeed, even within the idea of the Immaculate Conception, we have an aspect of purification through divine intervention, that Mary, even if of unusual natural purity, was not perfectly pure and became immaculate only through intervention. To assume that purifying, say, a whore would be beyond the omnipotent would, again, limit omnipotence. (Indeed, so would the assumption that the mother needed to be immaculate in the first place.) Of course, looking at various later takes on Mary as an “über-saint”, a subject of prayers directed at her, and similar, it might not matter how she became what she was—just that she had done so.
From a more psychological point of view, with less of divine intervention or in a scenario of Jesus-was-just-a-religious-leader, it would be perfectly plausible that Mary is simply retrospectively viewed as this-and-that because she was the mother of Jesus. (As opposed to having been chosen to be his mother because she already was, or could be made, this-and-that.)