Michael Eriksson
A Swede in Germany

Thoughts on help

Introduction

A recurring topic in my own thoughts is help, charity, government support, and similar, especially with an eye at when it is warranted, when and how it might do harm, the usually highly inept and wasteful implementations of governmental help, problems with charities, etc.

In due time, I intend to cover such issues in some depth below. For now, the page is in a state of occasional growth, after being started with the discussion of an event that took place on the morning of the day of page creation (2023-12-04), but which is suitable to illustrate several of the relevant issues. (And which provided the impetus to actually start the page.)

A side-effect of this occasional growth is some degree of repetition.

To preempt portions of the yet-to-be-written discussions, however: A central point of my own thought on the subject, and a central point of self-determination, is not that we should not help, but that the decision whether to help must be left to the individual potential helper. This distinction (and some similar distinctions) is particularly important in light of misleading Leftist propaganda directed against e.g. those who want small government or more reasonable taxes, often by attempting to equate, say, a wish for lower taxes with pure egoism—never mind questions like the injustice of taxes (at least, beyond a reasonable minimum), what incentives are given to the people, and what the effects on the overall economy are. But more on that in due time.

Older texts with overlapping contents include [1], [2], [3], [4].

Queue skipping denied

Remark on costs and stakes

The below event is trivial in terms of e.g. costs and stakes. It is still a useful illustration of principles (and most of the below reflects what went through my head during the few minutes of walking home), if with the reservation that the reader might have to make mental adjustments for a situation with higher costs, stakes, and whatnot.

Main discussion

Earlier today, I was grocery shopping and next in line for the cashier. A teenager asked to be let ahead of me, as he had just one item and needed to get to school. (The latter, presumably, with the implication that he was running late.) I turned him down.

Why?

My main reason was that I had all of half-a-dozen items, all already on the conveyor belt, and that any time gained for him would have been minimal, which made his request almost entirely pointless from a get-to-school-in-time point of view. If we, on the other hand, discount the get-to-school-in-time issue, then some small time spent in a queue would have been transferred from him to me for no good reason.

Had I had two dozen items, let alone a full shopping cart, this would have been a very different matter, and I would gladly have let him pass. Ditto, if he had some far greater motivation, say, that a medical emergency of some sort was plausibly claimed. Ditto, likely, if I had not yet begun to move my purchases onto the belt. Ditto, with a politeness factor, if we had arrived at the same time and “who goes first” would otherwise have been a coin-toss decision. As is? Again, the request was pointless.

Moreover, my loss of time would have been almost as large as his gain, as the “marginal cost” of just moving an item past the scanner is small in comparison with the overall transaction. (Here I go by expectation value. If, say, he had to search for money while I did not, he might well have taken longer. Also see some minor hypothetical calculations below.) Even from a “utilitarian” position, letting him pass would have been very dubious—and even apart from the great caution needed before applying utilitarian arguments and apart from the short time spans involved.

For help to be given, the help must make sense. Here the help made no or only minimal sense.

Further, we have to consider the cost to the helper (something all too often forgotten), be it by not asking for help at the drop of a hat or through offering some type of recompense. Above, the costs were too small to bother with this (except in as far as they reduced the utilitarian gains); however, this is by no means always the case. Consider something as trivial as having a helper over to drill a few holes in the wall: the actual work might be done in a minute, but then we have the time to travel back and forth, the time to find the drill, and potential other costs, e.g. for gasoline. Measuring the cost to the helper only by the minute spent drilling would be highly unfair.


Side-note:

Here I draw on actual scenarios from my childhood, where my divorced mother sometimes enlisted the help of her handy and rich-with-tools brother. (But, if in doubt, the point is the illustration, not how common or uncommon a scenario is. Note, e.g., how the same event would look if a professional handyman were enlisted and payment for more than that minute-or-so of work was refused.)

Here, between brother and sister, a typical payment might have been a cup of coffee and some cookies or a later favor in return, but it is very possible that he would have helped her regardless. This brings us to another point of self-determination: I do not say that a helper should charge for help, but that whether he does should be up to him. Likewise, the helped should have the decency to offer, even if the helper does not ask and unless the help only incurs trivial costs to the helper. (And note that neither costs nor recompense are necessarily in the form of money. The most typical cost is likely in time, not money. Recompense might be best or most often in the form of money, but other forms are certainly possible.)


Then we have the issue of own responsibility: If the above teen was running late for school, why? Who had caused that situation? Why was he in the store now, when he could have waited for the next break? Etc. It might well be that a portion of the blame fell on others, e.g. in that he had travelled by bus and that the bus had been delayed. However, considering that this was in the middle of the city and that most of the students (I presume) live within walking distance from the school, chances are that his own faulty planning was to blame. Even if not, he had options, e.g. to wait for the next break. Depending on details, options like planning ahead by keeping a stash of items in his locker or bringing an item from home, instead of buying one before school, might apply. (I did not pay attention to what exact item he had, but it gave the impression of being some type of snack, and snacks are easily stashable. I also note that (a) I virtually never snacked during my own, long ago, school hours, (b) snacks, even today, are better left out in favor of a healthy breakfast.)

Now, there is nothing wrong with asking for help when we have screwed up, but the incentives for others to help are then smaller and, for bigger things, the need to offer recompense larger. Often, as with someone very young and still learning, it can pay to not extend help, so that he has incentives to plan better the next time around (or whatever might apply).


Side-note:

Indeed, I explicitly told him to plan better. More interestingly, my mind moved to a firm “no” the moment that he tacked on something like “I need to get to school”. (I do not remember his exact words.) This addition, in my mind, turned him from someone with an odd and unreasonable (in type, if not size) request to someone trying to dump his own failure onto others.

This impression could, of course, be faulty, but remember that I had to make an on-the-spot decision. Moreover, the main issue of pointlessness still holds and I would probably have turned him down anyway. (There is some possibility that I would have waved him through for reasons of politeness or to follow the path of least resistance, had he stopped at something like “Excuse me, but I have only the one item. May I skip ahead?”.)


Of course, if the time potentially saved by jumping ahead in the queue had been significant to the purpose of getting to school in time, then leaving the store with no item at all would have been even better, as the time saved would have been several times larger. (Ditto not going into the store in the first place.)

Excursion on effect on others

The totality of the queue in our case was three persons: an old lady, currently at the cashier, I, and the teen.

If we imagine the same scenario with a longer queue, however, we would also have to consider how others are affected. I would certainly have been wrong in letting the teen through without the approval of any others that he automatically would have leapfrogged.

Precisely such a failure to consider others is quite common, however, especially when the government tries to help or, often, “help”: the government rarely seems to see further than the situation of those to be helped, side-effects are not considered, and too little consideration is given to other parties and their interests. Applying typical government approaches, we might well have had someone reason that the time gained by the teen was (with a longer queue) much larger than the time lost by me alone, and that I would be obligated to let him pass. This ignores the increase of time for everyone else in the queue, some of whom might also be in a hurry. (And it gives yet another example of where self-determination is important.)

To look more closely at time gained/lost/whatnot:

Regardless of the size of the queue, the overall time to process all the current queuers would have remained constant, no matter where in the queue the teen had ultimately landed. Moreover, the accumulated queue time over all queuers would only have been slightly changed. (For simplicity, I include time spent interacting with the cashier, paying, etc., in “queue time” and similar formulations.)

For instance, in a queue with just the two of us (ignoring the old lady): if I had needed t (of some time unit) and he 0.7t, the overall time would have been 1.7t (i.e. the last guy has a total time of 1.7t), regardless of whether he went first or last.

The accumulated time if I went first would have been 1t + 1.7t = 2.7t (i.e. total time needed by me + total time needed by him; or, from a different perspective, the time that we both queue times two + the time that only he queues, or 2t + 0.7t). If he went first, it would be 0.7t + 1.7t = 2.4t (or 1.4t + 1t = 2.4t).

For a longer queue, the relative difference would have been smaller still. For instance, assume five customers like me and one like him: The overall time is now 5.7t, regardless, while the accumulated time varies from 19.2t to 20.7t, for the extremes when he goes first resp. last. (Again as the sum of the time queued by the first in line + that of the second in line + [etc.], resp. six times the time that all six queue together + five times the time that only the last five queue together + [etc.].)
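The arithmetic above is easily checked with a few lines of code. As a minimal sketch (the 0.7t/1t service times are the hypothetical values from the example, with t set to 1):

```python
def accumulated_time(service_times):
    """Sum of completion times over all queuers: each queuer's total
    time is the service time of everyone ahead of him plus his own."""
    elapsed = accumulated = 0.0
    for t in service_times:
        elapsed += t            # completion time of this queuer
        accumulated += elapsed
    return accumulated

# Five customers needing 1t each and one needing 0.7t:
print(round(accumulated_time([0.7] + [1.0] * 5), 1))  # he goes first: 19.2
print(round(accumulated_time([1.0] * 5 + [0.7]), 1))  # he goes last:  20.7
print(round(sum([0.7] + [1.0] * 5), 1))               # overall time:   5.7
```

As claimed, the overall time is order-independent, while the accumulated time shifts by 1.5t depending on where the 0.7t customer lands.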

However, such calculations show how beneficial good queue management can be in other situations, e.g. by prioritizing shorter jobs in a computer context. In the specific case of grocery stores, the options are more limited due to the strong real-time character of the “jobs”, basic fairness between customers (up to and including the risk that someone with unusually many items has to wait for hours), and the risk that customers try to game the system: a queue dedicated for, say, “no more than X items” might be very beneficial, but e.g. a re-sorting of each queue to consistently prioritize customers with fewer items would be both unconscionable and impractical.
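The shorter-jobs-first point can be made concrete: over all orderings of a fixed set of service times, sorting ascending minimizes the accumulated time. A small brute-force check (the service times are arbitrary illustrative values):

```python
from itertools import permutations

def accumulated_time(service_times):
    """Sum of completion times over all queuers."""
    elapsed = accumulated = 0.0
    for t in service_times:
        elapsed += t
        accumulated += elapsed
    return accumulated

jobs = [3.0, 1.0, 2.0]
best = min(permutations(jobs), key=accumulated_time)
print(best)  # (1.0, 2.0, 3.0): shortest job first wins
```

This is the classic shortest-job-first result from scheduling theory; the catch, as noted, is that a real queue cannot re-sort human customers without violating fairness.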

Excursion on help to multiple persons and/or events

An interesting complication is what happens when multiple persons would have equal interest in some type of help. For instance, it is not unusual for teens on breaks to visit grocery stores in larger groups, each buying just one item. If I had allowed the one teen above to move ahead, and he had been, a few seconds later, followed by several others, I would have had the choice between treating them inconsistently or taking a now considerably larger delay to let them all through. It could be argued that a need to reject him arises from consistency and conscionability concerns based on the mere risk that others follow. (However, this thought only occurs to me during writing and did not affect my decision.)

The same applies to repeated situations of a similar nature: Letting someone with just one item through once does little harm—but what if everyone with just one item, spread over many occasions, would make the same request? It could accumulate to a considerable time over the years. (And the overall accumulation over all queuers and time could be massive.)

More generally, it is often the case that helping a single person is no major sacrifice, while helping everyone with a similarly strong or weak case could be a very major sacrifice indeed. Contrast e.g. giving a dollar to a single beggar with giving one to every beggar in the country or, even, sometimes, the city at hand. Worse, such help might end up rewarding bad behaviors, e.g. in that an intrusive beggar receives a dollar while the unintrusive does not, that an intrusive charity receives a donation and the unintrusive does not, etc.

Excursion on queue ethics and etiquette

While the above teen did nothing wrong from an ethical or etiquette point of view, there are regrettably many cases where queuers are out of line (no pun intended).

A particularly annoying example happened to me a few years back, when a single check-out counter in a clothing store had a long and slow-moving queue. A second counter was opened and I lucked out in now being second in line—at which point the woman who was first in line waved in a friend to join her, and to do so with enough clothes to match half-a-dozen regular customers. Neither of the two showed any recognition that they were doing something unethical. On the contrary, they had the audacity to suggest that I (!!!) go back to the other queue.

Another took place in a lengthy queue at, maybe, a McDonald’s, where a homeless-looking man began by standing next to the queue, slightly ahead of my position, but at some sideways distance. (And well forward of the end of the queue, where a legitimate queuer would have gone.) He then sidled closer and closer, until the sideways distance was gone—at which point he tried to jump in front of me, as if he had arrived much earlier and been in the queue all along.

Other rare extremes include queuers abandoning their intended purchases and shopping carts, after already queueing, or going away for some forgotten item and not returning before “their turn” had already arrived, thereby blocking the progress of everyone else.

Help that defeats a purpose

Help is sometimes outright harmful, especially when misimplemented. A particularly important case is help that defeats a purpose, e.g. when someone is helped to pass a test that he otherwise would have failed. The test, however, was there for a reason and now its purpose has been defeated.

Consider, at an extreme, a test for a driver’s license, where the examiner (or whatever the term might be) passes a failing driver for spurious reasons (e.g. personal sympathy or a bribe). The driver might well be happy to finally have that license, but what of traffic safety? What if a severe flaw in skills or judgment demonstrated during that test leads to a lethal accident a few weeks later? Take innocent lives lost, damage to cars, a road blocked for hours after the crash, expensive police and insurance investigations, whatnot, and weigh them against someone being happy over an undeserved driver’s license. For that matter, if put in the shoes of a failing driver, would you rather be dead with a license or alive without one?


Side-note:

However, the reason why someone is failing can to some degree play a role, and some leeway should be given to examiners when mitigating circumstances are present and/or the issue is of sufficiently low importance. For instance, if someone fails a test of parallel parking through obvious nervousness, a do-over might be warranted, as (a) parallel parking rarely risks human life and (b) the nervousness is likely to be test specific and not something that will occur during everyday driving. (Also, many go through their lives as drivers without ever actually needing to parallel park.) In contrast, running a red light should be grounds for an immediate fail.

Likewise, with an eye at the below, some limited hinting by the professor during an oral examination might well be acceptable, if it serves e.g. to unlock some knowledge that the student already has or to steer someone who is just a little off course in the right direction. (A good example that matches my own experiences and is understandable to the average reader is hard to give. Strictly to illustrate the principle, however, consider someone who has listed the names of all but one of Santa’s reindeer and is now prompted with the first letter, nothing more, for the missing one.)


Less extreme examples are more common. Consider e.g. surreptitiously giving hints to a struggling fellow student during a test. This might help him in the moment, but it might well hurt others (especially, if there is a local culture of such cheating). What if an undeserved pass distorts the grades of others, because the teacher grades on a curve? What if the value of a particular degree is hollowed out? What if the struggling student goes on to do shoddy work with an inflated credential? If in doubt, from an educational point of view, it would be better for him to re-take the course and pass fairly than to pass the course unfairly and with an insufficient understanding. (Note that one of the main purposes of higher education is to serve as a filter. This type of “help” undermines this already severely undermined purpose further.)

Certainly, such unhelpful help is not limited to tests. Consider e.g. a spotter in a gym. A spotter is there for safety when something goes wrong—not to do half the job of lifting. A little help on a final repetition might, depending on personal training philosophy, be acceptable, but what of a spotter who contributes several percent of the force throughout each and every repetition? This defeats the purpose of training and chances are that the training effect would be better without such help (even if accompanied by a lowering of weight) and the lifter’s understanding of his own level would be that much better. Such a spotter does not truly help—he just strokes the ego of his lifter.

Or consider a child who wants to learn to do something himself, say, putting on a pair of mittens. If someone else steps in to perform the task for him too soon, his opportunity to learn is diminished and his purpose defeated.


Side-note:

However, note the difference between such defeated purposes and otherwise unhelpful help. A mother, e.g., might be well advised to not just do everything for the children, including putting on their mittens, before they have tried on their own; however, without a purpose there is no purpose to defeat, and we have more of a “give a man a fish” situation. (Which will likely be discussed on this page at some later date.)

Some care must also be taken to understand the purpose at hand. For instance, an earlier version of the spotter discussion above included “but what of a spotter who consistently puts in even a little force on each and every repetition?”. This formulation is either short-sighted or risks being misunderstood (depending on whether my original thoughts or my original writing missed the mark), because there might, for a given training philosophy, be legitimate reasons to do so, e.g. to help the lifter over a small bump in the repetition in order to keep a higher load over the rest of it than would otherwise be possible, or to manage a higher load (with help) during the “positive” part of the repetition so that the same higher load can be used during the “negative” (without help; note that humans are stronger on the “negative” than on the “positive”). Of course, this presupposes that there actually is such a purpose—and I would contend that a great many spotters “help” without having such a purpose or, maybe, act with an ill-advised purpose of ego stroking. Indeed, with such a purposeful approach, the term “spotter” becomes misleading, because a spotter in a strict sense truly is there for when something goes wrong. (However, if there is a word to cover the alternate roles, I am not aware of it.) The new formulation attempts to stick to cases almost certain to be purposeless or ill-advised, if at the cost of describing a less likely scenario than in most actual cases of purposeless or ill-advised “help”.

Another superficially similar category involves defeating a purpose, e.g. a test, for a non-helping reason. Consider e.g. reducing academic strictness to increase “diversity”. This is done for ideological or political reasons—not to help. (Or, if an element of help is involved, it simultaneously works to the immediate disadvantage of others by robbing Peter to “help” Paul.) Ditto the use of lowered physical criteria for female fire-fighters, soldiers, whatnot, relative to their male counterparts.


Helping the deserving, the needy, or the complaining?

A common issue is that help is given to the wrong persons and/or for the wrong reasons.

Consider the effects of merely complaining vs. not complaining, complaining in different ways, requesting help in a factual manner vs. requesting help with a sob story, whatnot. Very often, especially with women as prospective helpers, a good sob story trumps actual need and how deserving or undeserving of help someone is. Say that a child is doing a school project. Just short of completion, some project-destroying event takes place (what does not matter; feel free to imagine a dog eating it). In one alternate reality, the child curses the stars, gets back to work, and, a few hours later, has the project finished. In a second, the child asks for help in a factual manner. In a third, the child calls for mommy, cries in despair, and begs for help. Which of the three is how deserving of help and which how likely to actually get help? (And, in the third reality, what is the chance that mommy ends up doing most or all of the work?)


Side-note:

I choose a child-based scenario for easy illustration and to avoid distractions through political disagreements. These issues, however, are by no means limited to children. Ditto other examples used.



Side-note:

A particular complication, as with the first reality above, is that those with the (in most, but not all, situations) best mentality are very unlikely to get help, for the simple reason that no-one knows that they might benefit from help.

More off topic: These are also the ones most likely to develop their competence, be more able to handle themselves, themselves be in a position to help others, etc. A particular twist is that these are particularly valuable in an office setting, while those with a lower threshold for asking for help (often wasting the time of others through failing to think, read the manual, search the Internet, whatnot, before asking for help) sometimes have a paradoxical career advantage through the resulting networking. I would encourage decision makers to consider such factors when deciding who gets a promotion, who is put in charge of a team, etc.

Looking at children: Having been a spoilt and over-protected child myself, I would encourage parents to take a stricter line with children and own responsibility. Yes, the child might think the parents “stupid” or “mean” here and now, but this is a small price for a better long-term development. (I managed to change on my own as I grew up; my sister, with a very similar childhood and upbringing, did not.) I would certainly view parental help with homework as something that should normally be limited to review, feedback, and, maybe, some amount of “teach a man how to fish”. (The above scenario could, depending on details, be a legitimate exception.)


If we look at these three alternate realities, as the child is otherwise the same, it is clear that help was not needed. Help might have been beneficial—but it was certainly not necessary. We now have to consider questions like opportunity costs and use of resources: Even here, help might have been a poor investment relative to other tasks. (Even from a strictly altruistic point of view. More nuanced viewpoints would also consider factors like a reduction in rest and relaxation for the helping mother.) To give good examples is hard, as different persons will have different priorities and life situations, but consider, to get a general idea, replacing help with this project with one or more of: helping another child with a different task, cooking a “balanced meal” for the family, earning some money in the home office.

In more adult situations, opportunity costs and limited resources can become a major issue and helping the one often implies not helping the other. (Consider e.g. an individual deciding whether to volunteer for the one organization or the other, or the government whether to hand out tax-payers’ money to the one group or the other.) Here it is very important to consider where help is actually needed and where merely wanted, who has reasons and who a sob story, etc. What happens is all too often the reverse—the sob story wins. (Or the bigger liar, the larger voter block, the more powerful lobby, whatnot.)


Side-note:

Governmental help is also often sufficiently mis-designed as to do more harm than good, be cost-ineffective, last forever, artificially increase the number in “need” of help, or similar—and to exactly fail on issues like true need vs. sob stories, opportunity costs, etc.

I will expand on this at a later time, likely using the German coal industry as a central negative example.


A good potential example of other misprioritization took place during my second master: I had to skip a written exam due to illness. I contacted the professor in charge of the course for information on how to proceed. He was completely and utterly uncooperative, refusing not only to give me relevant information but even to tell me who else could/would. His excuse: he was so busy with helping students that he had no time for me (notably, a student in need of help). Here, I would speculate (hence: potential example) that he saw himself as limited to answering questions dealing with course contents—not to those dealing with the course, as such. My requests for help (or, more accurately, information) were far worthier of help than theirs. Learning the course contents is the job of the students and, with few exceptions, if they fail it is their problem. I had done my job and mastered the course materials; and I had a reasonable expectation to be helped to demonstrate this and collect my corresponding credits with, in all likelihood, the German equivalent of a solid A. To prioritize incompetent or lazy students over the competent and industrious in such a manner is inexcusable.

More generally, and even if I am wrong above, schools/unis/whatnot seem to have a very weird understanding of their students and a weird sense of priorities. The idea of (even adult, higher education) students having to be led by the hand, and this being considered perfectly normal, is a recurring issue. Often, it amounts to attempts to drag students who are not “college material” over the finishing line, which might be a “help” to the individual “helped” student, but is also a poor use of resources and something that does net-damage to the world, through mechanisms like hollowing out the value of a degree, reducing competence levels among graduates, reducing the filtering on own ability, etc. Even the “helped” student is only “helped” in as far as a degree (credit, whatnot) is attained—whether he has actually been helped to mastery of what should be mastered is to be doubted. If anything, premature help is likely to do more harm than good by reducing the level of own thinking that the respective student performs. Certainly, to speak of deserved help would be absurd.


Side-note:

Exceptions to this “job of the students” principle arise when the incompetence of lecturers, text-book authors, whatnot, gets in the way of the students in an unconscionable manner. This was by no means the case here. I would also go as far as to say that it was one of the easier courses that I have taken over the years, which increases my suspicions that (cf. above) we had an issue with students who were not college material and the help correspondingly misplaced.

An exception on another level arises when sufficient extenuating circumstances are present, e.g. when someone blind is faced with a non-braille, non-audio book or a written exam. However, these cannot reasonably have constituted more than a small fraction of the overall students.



Side-note:

What exact shape my requests took, I do not remember in detail (after roughly twenty years), but they likely related to whether I could take the exam the next year/semester without taking the course a second time, whether there were earlier or alternative exam opportunities, and similar. At any rate, the scenario must have been a fairly common one and an answer should have been readily available.

(As I had plenty of credits, I ultimately did not bother with finalizing this course—and I still do not know what the answers would have been. With hindsight, however, I regret the few occasions when I have left credits lying, especially when I terminated my in-parallel-with-my-main-studies business studies for issues like quality, and when I moved to Germany as an exchange student without tying up a few loose ends in Sweden, in the mistaken assumption that I would return and have time to do so later. In sum, I might have neglected more than a semester’s worth of all-but-examination or almost-all-but-examination credits.)


Helping complainers who fail

As a special case of the previous section, there are many cases where help (and/or other special treatment) might be warranted based on a criterion like “compensate for a handicap” or “give someone a fair chance”. This, especially, when it comes to tests of ability, admissions tests, and similar. Likewise, there are many cases where an absence of such help might have resulted in failure and/or where a hard-to-help issue can make the difference between a higher or a lower grade, a passing or failing score, or similar.


Side-note:

Largely off topic to the rest of this entry, ideas like “compensate for a handicap” and “give someone a fair chance” can apply much more generally—and often more productively. For instance, helping someone with poor-but-correctable-with-glasses eyesight to a pair of suitable glasses could make a world of difference in e.g. the ability to earn money through various types of work.

Such an example can become more on topic if, say, the near-sighted have a powerful complaining lobby, the far-sighted do not, and help is only given to the near-sighted.

Of course, in the spirit of the overall text, I do not necessarily say that help should be given with no strings attached, e.g. in the form of free glasses. In the modern Western world, notably, glasses are usually so cheap relative to income that the idea borders on the ridiculous. (Exceptions to this cheapness fall into two main categories: Firstly and often, when various expensive “optionals” are added and/or something “designer” is chosen. Secondly and rarely, when a case is so medically complicated that unusual cost and effort are needed to provide the glasses.) Even in places and at times where this is/was different, however, a better system might have involved, e.g., “glasses today; payment when you have earned that extra money”.


However, many alleged such cases are not actually about a “fair chance” but an “unfair leg up” or amount to excuse making, e.g. based on the success of the one and the failure of the other, when both have the same issue to cope with. This, in particular, with those who complain about this-and-that when they fail—in contrast to those with the same problem who either succeed or do not complain even when failing.

Consider nervousness: I have repeatedly heard weak students (or, as the case may be, students with weak scores/grades/whatnot) complain that nerves got in the way of doing well on a test, be it directly, during the test, or more indirectly, e.g. through problems with sleep before the test.

Now, I do not rule out that some of these have had unusually bad problems, but there is nothing about nerves that is unique to them. I have often been nervous before or during a test to the point that it hampered my performance—but I have usually had enough reserves/buffers/whatnot, by dint of good brains or good preparation, to pass the test despite such issues—often with an “A”.

And why should a top student be less nervous than a poor or middling student? Yes, the top student has a greater chance of passing the test, but, usually, is held to a higher standard and/or holds himself to a higher standard. In my day, with far less grade and other inflation, that “A” was not handed out for nothing and the top student might have gone in with a great deal of nervousness as to whether his performance would be enough for an “A”—the more so as a single moment of carelessness, an even mildly hostile corrector, or a disagreement about matters like how to handle units in calculations can lead to a critical loss of points, and often in an unexpected manner. (While, for most students trying to earn a “D”, such issues are trivialities compared to their own incompetence.)


Side-note:

A text on showing the work has some contents relevant to issues like correctors and disagreements, including about use of units in calculations.



Side-note:

My first attempt at the test for my driver’s license is not only a good example of an exception to my passing a test despite nerves, but can also illustrate some issues around top vs. poor/middling students:

I was a poor driver, I knew that I was a poor driver, and I was correspondingly unusually nervous. Nerves made me perform considerably worse than I usually do—and, this time, I did not have the reserves to compensate. (The nervousness might also have been increased by the unusual test situation, similar to the below discussion of an oral examination.)

It might now be that someone naive tries to answer the above rhetorical question with “See! The poor students do have a greater risk of nervousness!”, which not only overlooks the aforementioned issue of expectations but also reverses the causality. Yes, such an increase in nervousness might magnify a deficit, but it does not create the deficit—it is created by the deficit. Resolve the deficit, e.g. by more hours spent with the books, and any “excess nervousness” will decrease correspondingly. (And the reserves will increase.) So with me: I came into my second attempt better prepared (more hours at the wheel) and less nervous—and I passed.

A slightly more justified counter is that past failures can result in nervousness today, even in someone well prepared, because the pattern of failure is remembered. That, however, will pass in time; is also, usually, ultimately explained by past own insufficiency relative the expectation at hand; and is the same for an aspiring-to-be-top student who has missed that “A” once too often as it is for the poor student who has received a failing grade once too often.


Looking at the modern U.S. high school (or, at a minimum, fictional representations thereof), many top students are anxious because a single “B” could ruin the chance to get into a top college, which could have a far larger impact on life outcomes than just having to repeat a class in summer school. Or consider tests like the SAT—a single test that can make a life-changing difference in outcomes and which only allows for very limited repetitions (within the time frame that is relevant for most students aiming for college, which is what counts here). For my part, I took the approximate Swedish equivalent knowing that I needed a very high score to be admitted where I wanted to go, was extremely nervous, and still managed a perfect normalized score (2.0).

More generally, many complain about “not testing well” (sometimes because of nerves, sometimes for some other reason, often without stating any reason at all). But here is the thing: very many of those who “test well”, in the sense that they pass most tests or get high scores on most tests, do not “test well” in any real sense. They are smart and have worked hard and pass the tests while actually underperforming. In my case, e.g., I have often had the experience that I pass a practice test in the comfort of my own home with flying colors and much time to spare and then struggle with the real test, because of nerves, because I just lock up, because I have, for the umpteenth time, obsessed over last-minute cramming when I would have been better advised to get a full eight hours of sleep, or similar.


Side-note:

The exact reasons for underperformance, or risk thereof, vary from test to test. On one occasion, for the then-mandatory tests for Swedish military service, I developed a high fever and a severe headache in the night before the journey to the test center, which took place early in the morning, and still managed to ace a quasi-I.Q. test later in the day. With the additional stress from the two days of testing and travel, and exposure to the Swedish winter, I ended up being on sick leave from school for two weeks after that. (But, no, a vanilla cold is the only reason why Lacy Moronne flunked that math test.)



Side-note:

Off topic, but as partial explanations of the above, here are two of the best pieces of advice for someone who wants to do well on a test, maybe even “test well”:

Firstly, if practice tests are available, take them and make sure to understand any solutions and solution approaches provided with the tests.

Secondly, very contrary to my own bad habits, getting sleep in the night before an important test is much more important than last-minute cramming. (This the more so, when the student has studied well leading up to the test, because what is crammed at the last minute will, then, almost always be either repetition of something already known or some details highly unlikely to actually occur on the test.)


The appearance of “testing well” is often the result of someone managing a middling to poor effort relative a base level that is sufficiently high that the result is still good; the appearance of “not testing well”, on the other hand, might combine a middling to poor effort with a much lower base level.

What truly happens in many cases is that someone who fails while testing below his “true” level, while being nervous, while having some type of handicap, whatnot, does complain, while those who succeed do not complain, even when they are just as far below their “true” levels, are just as nervous, are just as hindered by the same handicap, ... This can create the impression of a stronger connection between e.g. being nervous and failing (and, maybe, a weaker one between e.g. being poorly prepared and failing) than is actually present.

Consider the special case of support for Aspies. This might or might not be a worthy cause, but I am an Aspie too (or something else with very similar “symptoms”) and I still pushed through without such support—Aspies not yet “being a thing” when I went to school. A particular problem with various tests was my lousy handwriting, which, likely, to a large part was an Aspie issue (it is very common among Aspies). Not only did I not receive support, e.g. more time on tests so that I could form letters more carefully or an exemption to use a typewriter for a home essay, where handwriting was normally expected, but I received outright complaints, might have seen point deductions of which I was not aware, and, on one occasion, was forced to rewrite an entire essay because some member of faculty refused to read it in its original state. I still pushed through to get good grades, get into a first-rate-by-Swedish-standards college, and do well there.


Side-note:

The issue of handwriting is complex, including that my problems with legibility relative my age peers were the worse the younger I was. The aforementioned essay might have been written around age 14/15.

A non-Aspie complication is that I think faster than most others, which makes it harder not to drop quality in writing lest the already enormous, and so frustrating, difference between thinking speed and writing speed grow even larger.

Some effects can be indirect, e.g. in that problems with handwriting in the first years of school did not result in support to learn better, but in criticism, a supposition that I was not making an effort, or similar. (And combined with the type of learning imposed, mostly the mindless copying of letters, my interest in writing and the likelihood that I would try to learn handwriting in private dropped considerably, removing the chance that I would find a better way to learn than what school provided.) This supposition that I was not making an effort or was, somehow, mean or disrespectful to the poor teachers resurfaced repeatedly throughout my school years.



Side-note:

What support is suitable for an Aspie is a tricky question, in particular with an eye at individual variation. However, I am opposed to anything that amounts to pampering or risks over-compensating with regard to tests and similar aspects of school.

In my own case, something that could have been very beneficial for tests without over-compensating would have been to simply have the teachers know that “Michael’s handwriting is what it is. He is not mean or disrespectful, he is not refusing to make an effort, he is not [whatnot]. Please bear with him.” (cf. the previous side-note). Even better, outside of tests, would have been some type of additional help to reduce the underlying problem of poor handwriting in a manner more constructive, less dreary, and less demotivating than the aforementioned mindless copying of letters.

The most beneficial type of help, however, might have been in other areas entirely and would have been unlikely to affect testing in a non-trivial manner, including issues with overly noisy kids and how to handle “social skills”.


Or take support for immigrants and those learning the local language as a second language in school. This can be a very good idea, but it must be kept within reasonable limits. If, say, a teenaged second-generation immigrant is so poor at the local language that he needs additional support, chances are that he has himself to blame and that school is innocent. More likely, as claimed by some insiders, this is actually a weak student who gets by, often at the school’s instigation, with the excuse of having poor language skills instead of poor thinking skills (or instead of e.g. being lazy).

When I moved to Germany, I had six years of “school German”, at a few hours per week, to draw upon and I managed to get along as an exchange student at the university level. (As did the other exchange students.) Why then should a native-born teenager have problems with the language in high school? It is absurd.


Side-note:

Switching to German, my third language, made things much harder—true. However, they were still manageable. If I found words that I did not understand in a textbook, I grabbed a dictionary. If I did not understand something spoken, I asked. Etc. At no point was I offered, say, more time on tests or translations of tests from German into Swedish—and I still managed. (And not only did I not expect such offers, chances are that I would have been a bit insulted, had they actually been made.)

I grant that I took courses in fields (math and whatnot) where essay questions were rare and regular essay writing unheard of, but, truly, what proportion of these teenaged second-generation immigrants have great math scores while they fail on essay questions?

To boot, as is standard in much of Swedish and German higher education, I wrote both my master’s theses in English, which is my second language, and of which I had learnt far less, even then, in school than I had from TV and books. While my exposure to English today, at 50, might exceed that of many teenagers in even their native language, this certainly did not apply when I wrote that first thesis—so what excuse could a teenaged second-generation immigrant reasonably have for needing special treatment when it comes to essay questions? Hardly one based on being a teenaged second-generation immigrant. (As opposed to someone with a more legitimate cause to have problems with essay questions, who coincidentally is also a teenaged second-generation immigrant—and, of course, as opposed to a teenaged fresh-off-the-boat first-generation immigrant with minimal pre-arrival exposure to the language.)


From another point of view, some restrictions can be needed to avoid a distortion in filter and certification mechanisms of various types. What, e.g., if someone, for some reason, is awarded extra time to take a test of professional relevance and that same reason will also slow him down during later work within the profession? Or consider a dyslexic high-school senior who can extract the meaning of any given text, but needs considerably longer to do so than the average non-dyslexic age peer. Say that he is given extra time to compensate for his dyslexia when he takes an SAT-like placement/aptitude/whatnot test. This might be fair with regard to “has a certain level of knowledge and intelligence”, but could seriously distort the filtering effect and might leave him in over his head when he needs to keep up with various readings, because the college cannot magically make the weeks longer for him than for everyone else, unlike a test organization extending a single test, and he now needs to compensate “on his own time”. (On the outside, the college could give help like more time before a “must graduate before” restriction kicks in or reducing his tuition fees per semester by some amount.)


Side-note:

What the best solution is, I leave unstated. Here, it might make sense to not give him a leg up with the test and to only let him into some college where his “natural” test score is competitive, where he, then, has a better chance of being a sufficiently strong student, because his fellow students might have some other deficit than dyslexia relative him (e.g. a lower I.Q. or a weaker motivation). Equally, it might make sense to give him that leg up and let him try to compensate through working longer hours post enrollment. The point is that such concerns must not be forgotten. At an extreme, there can be obstacles too big to surmount, e.g. in that a blind surgeon would be an intolerable risk with today’s technology. (But not necessarily with tomorrow’s. The example of blindness, incidentally, shows that a blanket “no test help” policy could lead to overly extreme consequences, as college might turn into a pipe dream for the blind, or the blind be forced into colleges that specialize in the blind.)

Some cases outside education are clearer, e.g. in that physical criteria for firefighters must not be lowered for women “because discrimination” and with no regard for the safety of the community. Either a physical criterion makes sense and should be kept for everyone or it does not, and then it should be removed or altered for everyone. (This with minor reservations for special cases. There might e.g. be some test that is suitable for the one sex but not the other, because breasts or, respectively, balls could be squished. Chances are, however, should this situation actually arise, that some replacement test can be found that works well with both sexes.)


To prep or not to prep?

An interesting scenario with regard to issues like voluntary vs. mandatory help, rewarding preparedness and own responsibility vs. rewarding negligence, whatnot, is that of a prepper after that Bad Event:


Side-note:

The use of a prepper is a matter of illustration. The underlying issues are much more far-reaching and by no means limited to extreme situations. The specific choice of a prepper and an extreme situation does have a considerable advantage in that resources are now limited in a manner more absolute, easier to understand, harder to answer with a pseudo-solution, whatnot, than in, say, an economic depression. (As examples of pseudo-solutions, try to solve a depression by just printing money and then to solve the resulting inflation by instituting price controls.) There is also an automatic division into those who have and those who have not taken precautions, invested in their own safety/survival, and similar.

Looking at my own take, I am much closer to the happy-go-lucky villagers than the prepper in terms of, well, prepping. However, a general prepper attitude has much to be said for it, and, contrary to Leftist caricatures, is not a matter of the firm belief that society will collapse within the next few years. More common examples include preparedness for the eventuality of a car crash, a two-day electrical blackout, some locally plausible natural disaster (a hurricane in New Orleans, e.g.), and similar. For that matter, preppers are not even necessarily “Rightwing” or particularly interested in politics.

They are, however, predominantly male and readers with pronoun complaints can go shove them.


Take a village a fair distance from the rest of civilization. Most of the villagers are happy-go-lucky; one is a prepper, who has built up a store of food, water, and other supplies and equipment to last him and his family a month. The prepper is considered a bit weird and might even be ridiculed by the other villagers.

Then some Bad Event does happen and the village is cut off from the world, electricity is gone, the water pipes run dry, food supplies cover just a few days, etc.—and a restoration cannot be expected for weeks.

The prepper is relaxed, prides himself on his foresight, and begins to sit the situation out, confident that his supplies will tide him and his family over until help does come, services of this-and-that are restored, etc.

Then there is a knock on the door, after which a dozen villagers barge in and demand that he empty his stores for them and their families—after all, they are all hungry and he has food.

What now?

To begin with, a typical Leftist solution, that the prepper is seen as obligated to empty his stores to feed the others (and, if need be, will be forced to do so by brute force) is unlikely to do much good—never mind the associated injustice. Even with the persons at hand, the result would be food for a few days, followed by starvation. However, chances are that there are another few dozen families that would go without entirely, because they were not among the first to barge in, or because they did not see the prepper as obligated to feed them at the risk of his own starvation. (And, again, who is more worthy of help? Someone who immediately barges in and makes demands or someone who at least tries to find his own solution first, e.g. by going fishing or gathering berries?)

What the exact best solution is, I cannot say for sure and I invite the reader to give serious own thought to various potential choices, how to handle this-and-that, who does or does not deserve help from what viewpoint, who might need prioritization even when undeserving, etc. The angles are endless and very instructive. (For an example of the potentially undeserving, consider the below-mentioned widow: If she, herself, was utterly undeserving, might the well-being of the children still cause her to be included? Likewise, if the children are saved, is it or is it not important that their mother is also saved? Keep in mind that saving her might mean the death of someone else.)

However, for a solution to realistically work, it must be based on the voluntary decisions of the prepper, who can best judge what his priorities are, who is or is not worthy of help, how much help can safely be given when, who might give help in other forms than just stores, etc. (Examples of such help, again, include fishing and gathering berries. He might have equipment, skills, and knowledge that the others lack, and might be able to help them help themselves from near-by lakes, forests, whatnot, instead of from his limited stores.)


Side-note:

Why not replace the prepper, as decision maker, with a committee? Firstly, it would be unjust to the prepper and preserve the fundamental problems with the “take someone else’s property” approach. In cases like this, the danger of a “wolves (plural) and the sheep (singular) voting about what/who is for dinner” situation is particularly large and the road to Socialism particularly short. Secondly, chances are that the committee would have less relevant information and judgment, leading to worse decisions. (And, in the overlap, these decisions would be less likely to reflect those of the prepper in terms of priorities, who is deserving, etc.) Thirdly, there are a number of more general problems with collective decision-making, as discussed in a separate text.


It might, in particular, be that he judges his resources large enough to help some, but not all, of the villagers, without risking the life of himself and his family. This, too, must be his choice, as well as whom and how many to pick for help. (He might then make choices like: cousin, yes; widow with three young children, yes; guy who offers a load of money, yes. But: asshole who called him a far-Right nutcase and conspiracy theorist, no; the never-worked-a-day-in-his-life slouch who mooches off government aid, no; the local juvie gang, no. We might even have constellations like “asked nicely, yes; barged through the door and demanded, no”.)

Helping those who do not make an effort of their own

Another grocery-related situation took place long ago:

A woman in a wheelchair asked me to hand her something from a shelf slightly out of her reach—which I, of course, did.

Through the next few years, I occasionally spotted the same woman, always asking other customers for the same type of help and, seemingly, doing so in a virtual blanket manner. If she ever grabbed something herself, it was something at a truly trivial distance from her arms.

Likely, then, we have a woman who has simply decided to rely on others instead of learning how to handle matters herself. For instance, she could have bought a short reach extender and used this for most products. If and when this proved too troublesome, as might be the case for e.g. a carton of milk or something on the top shelf, then asking for help would be the way to go.


Side-note:

Why “Likely”?

Because she might have had some issue that limited her more than merely being wheelchair bound, e.g. some type of muscle weakness or pain of such severity that it made further actions on her part unconscionably hard. As I do not know what the whole truth was, I can only speak in terms of likelihood. However, the point of the above is to illustrate a general principle, not to criticize a specific individual, and the illustration stands regardless, while being more easily accessible than many less physical cases of unnecessary reliance on others.

As an aside, another wheelchair situation provides a good illustration of how easy it is to get a situation wrong when we do not know the whole story: I once travelled standing near the entrance of an overfull railway car. A portion of the entrance area was occupied by someone in a wheelchair, and, as the area filled more and more, I ended up standing in front of and blocking the line of sight to the wheelchair. After yet another halt with yet more passengers boarding, a man fairly brusquely asked me to stand aside, in the likely belief that there would be plenty of space behind me (which I was, then, needlessly blocking). I did stand aside—and he immediately became very apologetic.


From what I have seen, those with disabilities can often compensate for their disabilities in surprising manners, sometimes to virtually the same level as those without disabilities, sometimes merely to some degree, sometimes through tools, sometimes through training—but this requires that an effort is actually made. For instance, one of my first exposures to this idea was a TV feature about a woman, born without hands, who had learned to handle a very wide range of tasks with her feet. This while those lacking large parts of their legs can often function almost as if they had complete legs through modern prostheses. Some who, more similar to the woman above, are wheelchair-bound are active in sports, including something as complicated as basketball (if with modified rules). I have heard about at least one case of a blind man working as a software and/or web developer. Etc.

(At an extreme, near-sighted wearers of glasses could be viewed as a special case.)

However, here it is important to distinguish between the temporary and the permanent (or short-term and long-term), the commonly occurring and the rarely occurring, etc. For instance, I have on occasion been asked by a little old lady to hand her something from an out-of-reach top shelf, and there is nothing wrong with this, as e.g. dragging a reach extender along for a small minority of items might be disproportionate. Likewise, I raised no objections and saw nothing wrong during the first encounter with the above woman: At the time, I assumed that she was new to her situation and had not yet had time to adapt, and a potential (remember that “Likely”!) reason to complain only arose with the addition of “the next few years”, after which she definitely had had such time.


Side-note:

However, I have also encountered a few weird situations, e.g. a very short woman who, when I took down two items from a high shelf for my own use, appropriated one of them as if I had, through an act of mind-reading, taken it for her benefit. (Fortunately, there were more of the same left.)

Another case involved a young, comparatively tall, and perfectly healthy-looking woman, who asked for my help to get an item that she should have been able to reach by stretching on her toes or, on the outside, making a very small jump. The odder part is that she, over and over, addressed me with something very generic and information-less (what, I do not remember, but something like “Sorry!” would illustrate the principle). As I originally had no reason to see it as directed at me, it took her some ten to twenty seconds to get my attention at all. During this time, she stood rooted in the same spot, repeating herself in a monotone voice, while I was walking about—and I only finally reacted after (a) looking around to see what the fuss was about and (b) noticing that we were the only customers within sight.

(As even someone who did not speak German likely would have proceeded differently, e.g. by walking over, tapping me on the shoulder, and pointing at the shelf, I suspect that she had some mental deficiency; however, that is speculation. Someone who did speak German or English, with no mental deficiency, would almost certainly have tried something different, e.g. by replacing a second, let alone tenth, “Sorry!” with something like “Excuse me, the man in the blue jacket, would you mind helping me?” or by combining a shoulder tap with a verbal explanation.)


Children are often involved in similar cases: At least in recent decades, many children seem to take the attitude that “mother should handle this” (or similar) without first giving it a proper attempt of their own and without a willingness to learn. Likewise, they often take the attitude that this-or-that task would be the natural responsibility of the parents, be it in the sense that they are unduly reluctant to do it for themselves, even when fully capable of doing so, or that they are lacking in gratitude when the parents do it for them. In these cases, it is very easy to end in a vicious circle, where too much help creates an expectation of help, which forces more help (or results in the children being whiny pests), which creates an even greater expectation of help, etc.

I was an at least partial example myself (my sister was worse). At an extreme, I remember being around six, falling while (cross-country) skiing, and trying to get my grandmother to put me back on my feet—and this repeatedly. (To her credit, she was not sympathetic.) While getting up again is not as easy as it might sound, what with long skis and poles tangling with each other and a simultaneously soft and slippery surface, the neighbors’ boy, who was my age, managed on his own. The difference? He had not been afraid to try and the accumulation of attempts had made him good at it.


Side-note:

I learned, too, comparatively soon, but a larger problem lasted another several years: I did not understand the benefits of training, and tended to dislike and avoid any physical activity that I did not manage (at least somewhat reasonably) in the first attempt, which prevented me from getting better at the activities that I did not manage.


While children often must rely on their parents (to a very high degree early on; less so as time passes), it can be a very good idea to institute a policy of “try it yourself first”—and to insist on a genuine attempt, not just something pro forma. Similarly, a policy of “if you can do it yourself, you should do it yourself” or “if you want it done, you do it” can be a good idea for many household chores and whatnots.

A particular complication is that too much, too early, and too consistent help can be a hindrance to the children’s development, especially with regard to self-sufficiency and taking responsibility for oneself. In some cases, e.g. the comedic stereotype of a parent doing the child’s homework, help can be outright contrary to its purpose.

Something similar applies to many modern women, who often go for help without a serious attempt of their own, often because they have grown up with the expectation that some man or other should handle everything that is “hard” or leaves familiar terrain. In my impression, this applies even to many women who are willing to perform tasks that are comparatively easy but take a long time, which makes laziness an insufficient explanation, but is well in line with e.g. an “afraid to try” explanation. However, this impression rests on a smaller set of observations and is correspondingly more likely to be mistaken.

Real problems, however, begin when we look at adult attitudes of “the government should do it for me”, “why should I bother with work when I can collect unemployment”, and similar—attitudes that are comparatively common today and common exactly because such attitudes have been rewarded. Again, then, we have a vicious circle of help creating an expectation of help, etc. (Note the difference between such freeloading and genuine short-term issues that the receiver of help tries to overcome. Ditto between systems that encourage freeloading and systems that help only those in genuine need.)


Side-note:

An even worse, but off-topic, problem is posed by governmental attitudes like “we must do this for the citizens, even though the citizens got it done for themselves in the past” and “we must do this instead of the markets, because the markets cannot possibly manage, and never mind that they did manage in the past”.


Mistakenly not helping / the benefit of being explicit

An interesting reverse scenario to some of the above might have left me not helping when I should have:

I was travelling by train, seated next to a window. A young woman came to take the aisle seat, indicated her luggage, and said something vague about maybe-this-or-maybe-that, with an apparent intent that I help her put her luggage on the storage rack over the seats. Seeing that she was young, perfectly healthy seeming (outright athletic, in fact), and had not even tried to perform the task herself, I used the vagueness of her request as a reason for non-action (without explicitly turning her down).


Side-note:

This event is sufficiently far back that I do not remember the exact words exchanged; however, her statement was something along the lines of “maybe the young man could ...”, in the tone of a question, while both leaving unstated what the “young man” could do and failing to actually request help.

(In Germany, it is somewhat common for strange men to be addressed as “the young man”/“der junge Mann”, regardless of actual age. The young woman above, in contrast, actually was young by any reasonable standard.)


In a next step, she did manage to put up her luggage—but, to my horror, she did so only by standing on the arm rest (!) next to the aisle. (I was also, to my shame, so caught off guard that she had managed to complete the operation before I could, belatedly, offer my help.)

In this case, she was obviously capable of getting by on her own, but only at some personal risk, which changes matters: If there is risk for the one but not for the other, asking for help is much more legitimate than if no risk is present or both parties are subject to the same risk (or, worse, the intended helper would be the one at greater risk). Something similar applies if the respective expected effort/cost/whatnot is much smaller for the intended helper than for the helpee. (But note that the legitimacy is limited to a request for help. An expectation of help is not legitimized. If in doubt, the intended helper could be in a different situation than presumed. Above, e.g., someone large and strong might also have been nursing a back or shoulder injury.)

Looking more in detail at my reaction:

  1. This might, in part, have been another case of my having too little time to think the situation over.

  2. Over the years, I have grown less likely to help others (especially, to volunteer help without a request) and had I been twenty instead of, maybe, forty, I almost certainly would have helped her.

    The reasons for this are complex, but an over-simplified version is that I have simply grown more cynical—maybe, over-cynical. In particular, I have learned that help and other favors given do not necessarily result in gratitude or (with counterparts that I have longer connections with, e.g. colleagues) reciprocal help/favors. At worst, a willingness to help can create nothing more than an expectation of further and future help.

    Many women taking male help for granted (cf. above) is a part of this, and there is always some risk of a “false positive”, that a legitimate request for help is mistaken for a lazy “oh, there is a man, I’ll get him to do it for me”. (And a sufficiently young version of me would not yet have caught on to such laziness and might not have reflected over the risk of encountering such women.)


    Addendum:

Also see a later excursion, which, with some duplication, addresses the topic a little more deeply.


  3. An aspect of the previous item is that I have an increasing expectation that someone “gives it a try” first and only asks for help should the try fail.

This is normally both good and legitimate, for reasons like fairness, self-development, and long-term effects. In software development, e.g., those who try to help themselves first (read the manual, search the Internet, experiment, whatnot) usually find a way and the attempt makes them more skilled and knowledgeable for the future. Those who go straight to someone else for help take away the counterpart’s time, while, themselves, failing to learn.

Here, however, it might have been that the young woman already knew that she was too short to handle the luggage comfortably and without risk. (Be it from a visual measure of the height of the rack or from prior experiences.) For my part, I could only tell that she had not given it a try “here and now”, but I had no knowledge of what might have happened twenty minutes earlier, after she, hypothetically, had boarded another train for an earlier leg of the same journey. (And judging her reach relative to the height of the rack is much harder from a seated position and an awkward angle.)

  4. I do not like communication by hint, and chances are that her hinting approach put me off further. (Due to the time passed, I can only speculate. At a minimum, however, the hinting was what allowed me to avoid helping without having to refuse help.) Indeed, I often willfully ignore hints (even outside the context of help) as a matter of principle.

    Basic rule of communication: Say what you mean and mean what you say.

    If you do not, you are to blame for the consequences.

    When it comes specifically to help/favors/whatnot, it pays to be explicit with reasons, not just the wish. For instance, above, if she had said something like “Excuse me, young man, but I am too short to reach the rack with my luggage. Would you give me a hand?”, I certainly would have.


Side-note:

For the sake of completeness:

She did not proceed to enlist the help of someone else and no-one else volunteered.


Helping with money / lives vs. money / priority of help

Introduction

The core issue of the overall page can be described as “When to help whom how?”. Below, I will discuss some issues relating to this question and money, mostly based on an older text (moved here with slight adaptations; cf. below) and the idea of “Rule of Rescue” (cf. below). (However, neither do I claim that this would be an even remotely complete discussion of help and money, nor that the below discussion would necessarily be limited to money.)

Finite money and money vs. lives


Meta-information:

This text was originally published as a part of a 2024-12-22 entry on my 2024 various and sundry page. Most of the text has been moved here, extended and, in part, rewritten.

While this text is independent of the original entry, it might not hurt to read the remainder of that entry and the one preceding it for context. (Specifically, the then debates around the Omnibus and Trump bills, Democrats shrieking that “Republicans are evil for not spending money on pediatric cancer research”, and whatnot.) That the text originally arose in that context of government spending is still reflected in the overall take; however, the underlying idea is far more general and a limitation to government spending is not intended.


Claims are often made along the lines of “you cannot put a price on human life” or “no price is too high to save a human life”. Sometimes, this seems to be out of genuine naivety; sometimes, in an attempt to paint others in a bad light. (In both cases, a message of “and anyone who sees things differently is a monster” is often implied.) Such claims, however, are dangerous when used wrongly or in the wrong context—and, especially so, when used maliciously and/or as “thought-terminating clichés”.

A particular complication is that money, one way or another, is finite. Money spent on X is money not spent on Y and, in the case of the government, money not left with the tax-payers to make their lives better, to grow the economy, or whatnot.

This includes cases when X relates to human lives, because, if in doubt, Y might also relate to human lives.

For instance, spending some amount on research into one type of cancer might, in some sense, be a worthy expenditure—but what makes it more worthy than research into some other type of cancer? Or what if the same amount of money spent on some other type of medical research would save more lives? On some type of medical equipment? On vaccine doses?

For instance, what if spending a million dollars on ransoming one kidnapping victim implies that this money is not spent on saving two other lives for half that each? A hundred lives at ten grand each?

The fundamental truth is that even if we deem a human life worth more than any amount of money, we cannot escape the comparison of different lives. To say that “it is worth a million dollars to save that one kidnapping victim” is to also implicitly say “it is worth two [a hundred, whatnot] other lives to save that one kidnapping victim”.


Side-note:

And here, money can re-enter the picture in a more nuanced manner, in that money is not just a store of value in itself, but also serves in roles like unit of accounting and medium of exchange. We can, in a manner of speaking, say that ten grand is equivalent to one human life, because we (hypothetically) can save one life by spending ten grand, implying that a million dollars is equivalent to a hundred lives.

(Indeed, the idea of money as a store of value is disputable in these days of fiat money. It used to be that money made a good medium of exchange because it was a store of value, and intrinsic value at that, e.g. by containing some quantity of gold/silver/copper. Today, it is a medium of exchange by fiat, which indirectly gives it some value on the expectation that money received today can be spent tomorrow because others will be willing to accept the money in the expectation that it can be spent again the day after tomorrow, etc.—for some approximately constant and equivalent value of goods, services, or other whatnots, including, indeed, saving lives. Here, a complete reversal has taken place.)

This, of course, from one very specific perspective. Other perspectives can give very different results, as with the idea that “I wouldn’t take a million for a [girl/boy] like you” (note the old song with a matching name—written at a time when that million was worth many times what it is today).


A particular point is (as so often) “who decides”, specifically, what money is spent how and what lives are saved or prioritized over what other lives. The government is certainly rarely a good choice, in part, because its priorities need not match the tax-payers’, be it in general or on an individual basis, in part, because of its lousy track record when it comes to using money efficiently and effectively, to avoiding waste, to ensuring that money goes where it is best needed and/or does the most good, etc. To the “most good”, note that there is no guarantee that whoever receives money for even a somewhat specific purpose, e.g. some type of medical research, is the best choice, or even a good choice, for that purpose. (And even assuming that this “somewhat specific purpose” is chosen sufficiently well.)

More often than not, money is best left with the individual tax-payer, who can always donate any surplus money to whatever causes he thinks worthy—including medical research. We also have complications like it being deeply problematic when the government uses the proverbial “other people’s money” to, say, save a single life for a million dollars instead of a hundred at ten grand each, while a private citizen who happens to be a millionaire can do so, e.g. to save his own life or that of a family member, because he is actually spending his own money.

Similar issues apply when it comes to e.g. humans on life support, needing long term and/or palliative care, and, of course, in triage situations:

Triage, usually without immediate monetary concerns, deals with very similar issues of priorities and decisions about who receives what care in what order and who might go entirely without care (be it because he expires too soon or because care would be almost pointless while risking more savable lives). The resource that is lacking is (likely) most typically medically qualified personnel, e.g. because a hospital is swamped after a disaster or during a war, and the entirely different uses of that resource are more limited, but the underlying issue is the same—a limited supply of something. Effects include that a physician might not help someone that he could help, because it is more important to help someone else. (To “different uses”: Money can be traded for almost anything, while e.g. the time of a physician currently at work in a hospital only has very few legitimate uses outside actual practice of medicine, e.g. administrative work.)

The other cases see complications like a potential conflict between, on the one side, patient and relatives, and, on the other, whoever foots the bill. We might then have an insurance company or an entity like the NHS refusing further payments/treatment in a case perceived to be hopeless or beyond some point of cost. To say who is in the right in any given case would require looking at these cases on their individual merits, and I make no blanket claim. However, I do note that similar concerns about the use of the limited resource of money to save lives apply—as, in the overlap with triage, might the question of who uses e.g. the limited number of “life-support machines” at any given time. Ultimately, this can be a strong argument against some types of insurance/health/whatnot schemes, including “single payer” and NHS-like constructs, and in favor of greater economic decision making by the citizens. (Indeed, I have heard of cases when the NHS has refused care even to those willing to pay out of pocket, where a private hospital would have willingly stepped in. To boot, there is the complication that insurance companies, regardless of field, are notorious for trying to weasel out of an obligation to pay. Off topic, additional economic factors might need consideration, e.g. that millionaires paying to keep someone on life-support can generate profits for the hospital that allow investment in more life-support machines.)


Side-note:

In a bigger picture, those too set on saving lives (in particular, with an eye at some specific threat, some specific lives, and/or lives here-and-now) are often illustrative of a more general problem of a single-minded focus on a particular goal doing more harm than good. Note e.g. the COVID-countermeasure era and how the wish to save lives specifically from COVID, and at any and all societal cost, did horrifying harm to the world—far, far more than COVID did or would have done without countermeasures. We must never forget that most issues have several-to-many aspects, interest groups, groups indirectly affected, and similar, to consider.



Side-note:

It is possible to increase the nominal money available virtually without limits, but what happens in real terms is another matter. For instance, if the money supply is increased in order to finance government spending, then the value of existing money will drop. The drop might come with a delay and need not be in exact proportion, but, at the end of the day, the increase comes at the cost of those already holding money at a certain nominal value and those earning money at a certain nominal value per time or work unit. Too large increases might even shrink the overall real value of the money supply or cause other problems, e.g. a negative shift in investment and consumption patterns or import/export patterns. (Other means to get more money, including borrowing and raising taxes, come with their own complications and almost always do more harm than good in the long run, too.)



Side-note:

Numbers and examples above are for simple illustration and need not be realistic. In particular, chances are that a great many lives can be saved through very cheap and more indirect means than, say, a massive individual medical intervention or a gigantic search-and-rescue operation. Consider the costs of some basic childhood vaccines and the expectation value of lives saved. In a bigger picture, I restate an old thought-experiment of mine: What if COVID had been left alone and the economic damage done by the countermeasures had instead been done by additional taxes to finance cancer research? Chances are that the benefits from cancer research would have saved many, many more lives than COVID took and/or was prevented from taking. (Even aside from the other negative consequences, including on health, that the countermeasures caused.)

For further simplicity, I only speak in terms of lives above. A more in-depth discussion would have to consider other factors, including the remaining life expectancy of different persons, and for whom treatment/rescue/whatnot is how urgent. A particular complication is indirect choices between, say, 50 years of life gained for a single person (e.g. because of a successful cancer treatment) vs. 1 year lost for each of fifty persons (e.g. because higher taxes to finance cancer research pushed them over the brink in old age). Likewise, choices between that one life here-and-now vs. a very small risk for a great number of invisible others—expose a million humans to a risk of death of one-in-a-million and we do have an expectation value of one death. (Also note ideas like the “forgotten man”, in its proper meaning, and how such ideas can easily explain why those fifty persons would lose that 1 year.)



Side-note:

From another perspective, we have issues like growth: Contrary to Keynesian teachings, higher taxes and more government spending tend to lead to less growth, which means less money to go around in the future, which means that the long-term budgets for e.g. medical research might be larger with less government-forced spending in the short term.

In particular, the thought experiment in the previous side-note should not be misconstrued as a support for higher taxes, as the net result would likely still be a major negative through reduced growth—and opportunity costs, misallocation, whatnot.


Rule of Rescue

The “Rule of Rescue” shows how wrong things can go if a too naive approach to matters like help (in general) and rescue (in particular) is taken.

According to Wikipedia (some formatting has been lost or altered for technical reasons; some references have been removed; frequent language errors and ambiguities were as found on Wikipedia):

Ethics term for a specific questionably rational human response

The Rule of Rescue is a term coined by A.R. Jonsen in 1986 that is used in a variety of bioethics contexts:

  • ‘a perceived duty to save endangered life where possible’ (Bochner et al., 1994, pp901)

  • ‘the sense of immediate duty that people feel towards those who present themselves to a health service with a serious condition’ (Nord et al., 1995b, pp90)

  • ‘an ethical imperative to save individual lives even when money might be more efficiently spent to prevent deaths in the larger population’ (Doughety, 1993, pp1359)

  • ‘the powerful human proclivity to rescue a single identified endangered life, regardless of cost, at the expense of any nameless faces who will therefore be denied health care’ (Osborne and Evans, 1994, pp779)


Side-note:

I make the below claims with the reservation that these quotes are given without context and that my interpretation might have been different given the respective original context. This is not likely to be harmful to my big-picture goal of discussing issues like help, lives, and money (but might be so with another set of goals, e.g. with a core focus on the “Rule of Rescue” or the opinions expressed about it).


The last item shows a great parallel with my own writings in the previous section, while giving a pointer to the psychology behind mistakes in this regard. The preceding item might or might not do the same, depending on whether the implication is “an ethical imperative” or e.g. “a perceived ethical imperative”. The latter case matches; the former might exemplify the problematic attitudes of many and/or just take a position against undue Utilitarianism. (To expand on the last with an eye at the previous section: I do not, myself, argue that we have a duty to spend money in a manner that, in some sense, “maximizes life” or “minimizes death”—and certainly not with private money, as with millionaires above. I do contend, however, that we must always be aware of such trade-offs. Moreover, that those who spend the money of others must be doubly aware and prepared to justify their decisions towards those whose money was spent.)

The first two have a different angle in my eyes, and are mostly off topic for this text. To the first, I would reject such a duty, because (a) the ultimate consequences on the individual rescuer would be unconscionable, with no time, money, resources, whatnot left for anything else, (b) overall circumstances must be considered, including what risks/costs/whatnot are incurred by the rescuer and how worthy or unworthy the endangered life might be. (Should someone, e.g., be obliged to rescue a drowning Khamenei under circumstances that endanger his own life?) However, there are many constellations where a duty does exist between specific persons or groups of persons, e.g. in that a physician, within reasonable limits, has a duty to save his patients regardless of who they are. (And a lifeguard on duty would be obliged to save even a drowning Khamenei, even at some risk to his own life—but even someone far more worthy than Khamenei would not necessarily warrant a very large risk at a very small chance of success.) This can be particularly relevant if the quote was intended for a medical setting (without this being clear from the limited quote).


Side-note:

To illustrate both “reasonable limits” with regard to a physician and potential problems for individual rescuers in general: When should a physician stop working? After 40 hours a week? 60? 80? 100? There is work enough at some hospitals, but his own life might turn into hell, his family (should he have one) might be put in an unconscionable position, sooner or later he will wreck his own health, and with every additional hour of work the risk of errors increases—and at some point he will turn into a net negative even for his patients. And, yes, it might well be that by prioritizing sleep over one extra patient today, he can save two extra patients tomorrow (should his goal in life be to maximize the number of lives saved). Here it can make great sense to agree on some limit on effort (e.g. 40 hours for the physician) which is considered fulfilled duty (professionally, morally, whatnot), with an “above and beyond” applying after that, and, maybe, the imposition of an upper limit as harmful on a “the flesh is weak” basis—even should the “spirit be willing”. (This with some exceptions in detail, e.g. in that someone at 40 hours should not refuse to treat an unexpectedly crashing patient “because 40 hours”, but should handle the patient first, go home afterwards, and then take the corresponding time off the next week, file for over-time, or what else might apply.)

Or consider two young small-business owners: The one sells his business for whatever little it is worth, donates the proceeds to cancer research, and spends the rest of his life with the Peace Corps. The other grows his business, year by year, employee after employee, and donates increasing amounts of money to cancer research as time goes by—while keeping others in employment, paying taxes, satisfying customers, whatnot, which can result in further benefits, both directly and indirectly, in terms of even lives saved. The one might have dedicated his life to helping others, but it might well be that the other has done more good for them.


The second is similar in the more limited setting of health care and not truly relevant outside it. While I would affirm such a duty, with restrictions like what follows from my discussion of the first item and in the previous section, I do not see e.g. a duty to treat free of charge. (Where I take “people” to refer to e.g. staff. Another interpretation might point e.g. to “the public opinion sees a duty to treat for the staff of health service”, which is another matter entirely.)


Side-note:

Similar claims as with the above quotes apply to “Orr and Wolff” below, in that even what I write critically does not necessarily apply to their true opinions (as they might emerge from a deeper study than of just what is written on Wikipedia) but does, at a minimum, show mistakes potentially made by others and somewhat typical-seeming for certain writers on e.g. public health—and they are, therefore, illustrative in areas around help, lives, and/or money. (But I grant that portions of that discussion are not as on topic for the combination of lives and money, and/or the “Rule of Rescue”, as they ideally would have been.) Of course, the reason that Wikipedia includes a discussion of the paper at hand is that it criticizes the “Rule of Rescue”.

I also caution that the Wikipedia writing is poor, which could be a sign of deficiencies in other regards, including in relaying what Orr and Wolff’s actual opinions/whatnot are, and that I have not attempted to separate the Wikipedia editor(s) from Orr and Wolff.

Caveat lector.


Criticism

The Rule of Rescue is heavily attacked by Shepley Orr and Jonathan Wolff in their article “Reconciling cost-effectiveness with the rule of rescue: the institutional division of moral labour”. They argue the application of the rule leads to injustice and a suboptimal health outcome under the constraint of limited resources. They plead for strict application of cost-effectiveness analysis (QALYs) as solid base of decision making with priorities.

While a cost-effectiveness analysis can resolve some of the problems caused, it might well cause others. A particular issue is that we can all too easily see exactly the type of single-minded focus on a particular goal of which I warn above. Another is that issues like who is how deserving or undeserving are left out, e.g. in that it might be more cost-effective to save some individual chain smoker with a mild case of lung cancer than someone who has developed a worse case through no fault of his own. (But I stress that I speak of being medically [un]deserving here. An extension to other areas, e.g. for a Khamenei, is another matter, and I leave unstated if and when this might be justified. A rejection because “I do not like your political opinions”, as some hate-filled Leftist might push, is certainly not acceptable.) Depending on definitions and exact approach used, it might also implicitly rule out cost-offsetting and/or re-investable revenue from paying customers, leading to poor decisions for the long term.


Side-note:

More generally, what is optimal in the short term need not be so in the long term, and a too strict eye on costs, cost effectiveness, whatnot, here and now can be harmful. I cannot judge whether that is an issue with specifically Orr and Wolff, however.



Side-note:

Some specific phrasings might also give cause for pause. Most notably: (a) “strict application” of anything can be dangerous, and some amount of leeway, individual judgment, or similar, is usually for the best. (b) “injustice” is often too subjective or dependent on personal opinions, and it has often been hijacked by Leftists, with a severe distortion of meaning by any “reasonable person” standard. (However, the unnegated “justice” might be a more common victim of the Left.)


In order to avoid framing doctors in an “inhuman role” of deciding at the patients’ bedside on the basis of cost effectiveness, they plead for “division of labour” between governments/institutions that allocate the resources on basis of cost effectiveness and doctors who try to save lives within given constraints, for which constraints they are not held personally responsible or liable.

Here, there are very great risks. The rationale with regard to the physicians (“doctors”) has merit, but the removal of decision making to the government and/or other bureaucracies is likely to be a very bad idea—as proved by so many prior cases of doing so. To boot, it introduces moral hazards. To boot, it de-empowers physicians. To boot, it is very likely to remove the patient, his priorities, and e.g. his willingness and ability to pay extra for extra service from the equation. Indeed, it is hard to see this system being practical outside a strongly “socialized” system, maybe an outright NHS—an idea that has a horrifyingly poor track record.

Orr and Wolff conclude: “The rule of rescue has a strong intuitive pull. It seems to express our common humanity, and to refuse a rescue on grounds of cost appears morally horrendous, even in cases that do not share all the paradigm features of the rule of rescue. Yet at the same time in a complex, resource-constrained world, cost-effectiveness cannot be ignored. The two types of reasoning appear irreconcilable. We believe, however, that this appearance is misleading, and ordinary processes of medical decision making show how to reconcile the two. Resource allocation decision making broadly follows cost-effectiveness analysis (CEA), while emergency room and related ‘bedside’ decision-making is much closer to rescue reasoning. There is good reason for this division of labour, although we have conceded that this simple picture does need to be modified to accommodate the different ways in which both styles of reasoning take place in both venues. Nevertheless, the key point remains: cost-effectiveness analysis is needed to decide which tools of rescue to provide. Rescue can then take place in a manner apparently unconstrained by cost.”

(No true comment for the time being. Parts are legitimate criticism of the “Rule of Rescue”. Parts are elaboration of Orr and Wolff’s own ideas.)

“Reconciling the rule of rescue with cost-effectiveness” is important during pandemics. Most states have ignored cost-effectiveness analysis in applying lockdowns and delaying regular medical interventions with the “Rule of rescue”-argument during the 2020-2021 Covid-19 pandemic.

Here we have some words of wisdom (partially, echoing my own). However, the main relevant failure was something different from not making a cost-effectiveness analysis in “[m]ost states”—an almost global failure to perform a cost–benefit one. (A criticism that applies more generally, but is particularly important here.) From the formulation, it also seems that Orr and Wolff do have that single-minded focus on a particular goal, without considering e.g. the effects of COVID-countermeasures on the overall economy, civil rights, human happiness, whatnot—and might fall short even of a holistic medical view, by failing to consider medical effects outside e.g. “regular medical interventions”. (Consider a loss of fitness in the population, which can lead to a worsening of overall health, increased healthcare costs, premature deaths, and similar—among other examples.)

Orr and Wolff in 2014 profoundly argued that the “Rule of rescue” is the result of wrong reasoning. Cost-effectiveness reasoning with the aid of QALYs always leads to moral superior outcomes and optimal public health outcome, given constraints of resources and competing interests. In an unconstrained situation without conflicting interests, the rule of rescue leads to rightly perceived results, without causing (macro) problems.

This portion is highly problematic, beginning with the question of whether “wrong reasoning” or e.g. an unreasoning emotional and/or spontaneous reaction is the true cause.

To speak of “moral [sic] superior outcomes” is extremely dubious without having an agreed-upon moral framework, which we do not—and the more so, as it is preceded by “always”. For instance, using some amount of money to save a single lifeguard would be more morally worthwhile than using the same amount of money to save a hundred Khameneis in the eyes of many, among which I count myself, but this would not be reflected in a cost-effectiveness analysis, which, on the contrary, would favor the Khameneis. Doing so might be medically justifiable but is not automatically morally the better choice. A further problem is that it is not clear whether moral superiority is claimed relative to just the “Rule of Rescue” (very dubious for reasons mentioned) or in general (inexcusably presumptuous and a possible sign of a hidden agenda and/or a very narrow-minded view of matters).


Side-note:

Here I deliberately use an example for easy illustration and compatibility with other portions of the text that need not be contrary to the intents of Orr and Wolff, because it might, in their conception, fall in the “bedside” rather than the “cost-effectiveness” area. (Wikipedia gives too little information to say for sure.)

Other examples can be given that are contrary to these intents, but I would need to actually review the paper to do so with certainty. However, to get some idea, consider if the government allocates more or less resources to abortion in a country like the U.S., and how very differently the morals of this could be viewed by the pro-life resp. pro-choice camps.



Side-note:

The issue of who is e.g. deserving/undeserving from a non-medical point of view is tricky, and I do not have an opinion set in stone. However, I would tend towards viewing medical practitioners as having a duty to some degree of agnosticism on non-medical aspects of a patient, exactly to avoid issues around “who decides” and the risk of abuse towards those with the “wrong” political opinions, of the “wrong” race/religion/whatnot, etc. (Where I view e.g. chain-smoking as a medical aspect for current purposes.)

That a hundred Khameneis are given precedence over one lifeguard might then be compatible with a narrow window of medical ethics, but is not so in a should-be-irrelevant-to-the-physician bigger ethical/moral/whatnot picture.

(I do not truly attempt to differentiate between “moral” and “ethical” in this text—in part, because I suspect that Orr and Wolff, and/or Wikipedia, have not either; in part, because the difference is usually subtle. I doubt that the overall analysis would change if I did, but I do not categorically rule it out, and chances are that I would have phrased myself entirely in terms of “ethics” and “ethical”, had I written from scratch.)


This is the more so in light of an acknowledgment of “competing interests”, where the statement about “moral” virtually necessitates a judgment on the relative value of those interests—possibly, in the form of subordinating all other interests to some specific medical interest, e.g. saving as many lives as possible. However, even that forces a choice between medical interests: Is e.g. the number of lives more important than overall years lived, than overall years of quality life (as attempted, but hardly achieved, with QALYs; cf. excursion), and/or than happiness in life? Looking at years of life, for an easy example, many might view a single year of own quality life as better than two own years of pain—while many others might take the reverse view. Many might see pain as a happiness destroyer, while many others can be happy despite it, and many others yet be unhappy even without pain. Many might want to carry on for just one more year, despite unhappiness, e.g. to see some important event in the life of a child or grandchild. Many might see a single year of own life as more valuable than any number of years for someone else. Etc. What goals, then, should medicine pursue and with what legitimacy?

The claim of “optimal public health outcome” [sic] is almost certain to be wrong. Yes, there is a strong chance that it will beat out the “Rule of Rescue” most of the time, but even here the claim fails due to that preceding “always”. (If nothing else, a cost-effectiveness analysis cannot consistently give the correct answer about what is cost-effective before matters have run their course—by which time the decisions will all already have been made. Worse, chances are that only one of the scenarios at hand will have run its course, while the others were not simultaneously pursuable and remain speculative to at least some degree.) Looking at a bigger picture, it fails badly, e.g. with an eye at failing to consider income and to make a cost–benefit analysis, as well, again, as matters of conflicting interests.

What at all is meant by “rightly perceived results” and, in this context, “(macro) problems” is unclear, and the sentence is pointless to discuss.


Side-note:

I might speculate that portions of the issue go back to naive modeling, in that a very simple model, with very simple constraints and whatnots, and with an assumption of perfect knowledge, has been used to test or analyze various simple decision-making criteria, and gave a certain result. If so, this is not uninteresting, but it certainly does not allow very far-going claims.

(To speak more deeply on that matter, I would need to read the paper at hand, which would be out of proportion relative the goals of elaboration and illustration that I follow with this text—goals that do not include an evaluation of the paper.)


Excursion on QALY[s]

A significant problem with Quality-Adjusted Life Years is exactly that they do not measure what the individual patients view as quality, nor take the patients’ priorities into consideration. Instead, they use a somewhat arbitrary measure of health as a proxy for quality. Perfectly healthy but unhappy and with no reason to live—full score. Perfectly happy but in rotten health—rotten score. A year away from curing cancer, and driven to complete the work at any cost, but in rotten health—rotten score. At a minimum, then, some other name and acronym should have been used, e.g. Health-Adjusted Life Years.


Side-note:

Where I use the somewhat rhetorical “score” to refer to the per-year multiplier that indicates “quality”. The eventual QALY value is the multiplier times the number of years. While the inclusion of years, in itself, is easier to defend, it can result in a double punishment of the unhealthy, who, in many cases, have a shorter or considerably shorter remaining life expectancy.

That, say, someone with a life expectancy of another 5 years is, for some purposes and in a first approximation and/or all other factors equal, rated below someone with a life expectancy of another 10 years would be somewhat understandable—even to most of the lower rated. When multipliers of, say, 0.4 resp. 0.8 are tagged on and we now compare 2 and 8, this is harder to swallow.
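The arithmetic can be sketched in a few lines of code. (A minimal illustration of the simplified multiplier-times-years model discussed in this side-note, not of any official QALY methodology; the numbers are the ones used above, and the multipliers are, as above, hypothetical.)

```python
def qaly(multiplier: float, years: float) -> float:
    """Simplified QALY value: per-year quality multiplier (0..1) times years."""
    return multiplier * years

# Two patients: 5 remaining years at a 0.4 multiplier
# vs. 10 remaining years at a 0.8 multiplier.
patient_a = qaly(0.4, 5)   # 2.0
patient_b = qaly(0.8, 10)  # 8.0

# The raw life expectancies differ by a factor of 2; once the
# multipliers are tagged on, the gap widens to a factor of 4 --
# the "double punishment" of the unhealthy described above.
print(patient_a, patient_b)
```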


From another angle, the use of QALYs amounts to telling the one that his life is worth less than that of someone else—often, that even an individual year of his life is worth less than a year of someone else’s.

On the upside, the idea is at least an attempt to provide a means for more nuanced choices, which can be helpful when lives have to be judged against each other, as they sometimes do, be it with or without a money angle. However, whether it is an improvement over just using remaining life expectancies, without a health or pseudo-quality adjustment, is debatable and/or dependent on circumstances. (A circumstance in which a case can be made is the common use to measure the value of a procedure, e.g. in that 2 years extra and a multiplier improvement from 0.3 to 0.4 might beat 2 years extra and a multiplier remaining at 0.3. However, even this becomes iffy if different numbers of years are in play and it still fails to consider the patients’ opinions, priorities, and whatnot—and if years are not in play, the full QALY would not be needed anyway.)

On a downside to the upside, it fails to consider societal costs in other directions, e.g. that two persons of equal health (in the sense of QALYs) might bring a different cost (e.g. because the one pays for his own care and the other does not, assuming similar issues; or because the one does not need care while the other does, assuming dissimilar issues). I would be hesitant about bringing in such concerns myself—it reeks of a Leftist dystopia and e.g. a Boxer-like situation (sent to the knacker when no longer able to work). However, it would be well in line with the type of thinking that otherwise is used by some “public health” rationalists and QALY fans. (And could be argued as yet another reason to let the individual patient decide—not an anonymous bureaucracy or Fauci-like figure.)

Excursion on Prioritarianism

Following a Wikipedia link, I also found a page on Prioritarianism, which I will treat much more briefly. (And with the reservation that a more in-depth explanation than provided by Wikipedia could give a different view of the idea.) Executive summary: This appears to be an extraordinarily naive variety of Leftist thinking.

To re-quote a quote from that page (emphasis added):

Prioritarianism holds that the moral value of achieving a benefit for an individual (or avoiding a loss) is greater, the greater the size of the benefit as measured by a well-being scale, and the greater, the lower the person’s level of well-being over the course of her life apart from receipt of this benefit.

The rough general idea might be justified (on average) by noting ideas like diminishing returns and how a benefit (e.g. a fish) can bring far more value when someone has a deficit (e.g. has not eaten for a full day) than when he has a surplus (e.g. coming off a hefty meal). However, the focus on well-being over an entire life reduces the relevance of this justification, as does the use of a well-being scale as opposed to a more specific-in-kind-and-need approach. (A below example illustrates both issues.)

Worse, however, is that the idea of the quote seems to be combined with a Utilitarian angle of maximizing the overall good, which makes things very iffy.

A central problem is a failure to consider own responsibility, own ability to fish (in a “give a man a fish; teach a man to fish” sense), who is deserving and undeserving, and similar. Side-effects of this include skewed incentives in that those who laze about can be rewarded and those who work hard punished. (Such and other flaws mentioned in this section are, of course, quite common with Leftist thinking, politics, and similar.)

Another, the subjectiveness of well-being (and, beyond some survival level, true well-being is not necessarily strongly correlated with the “material”).

A third, the “over the course of her life” part, which is problematic in at least two regards (in addition to the aforementioned justification issue): Firstly, what this sums to is only knowable at death, at which time it is too late to act. (Even discounting practical problems of calculation over an entire lifetime.) Secondly, the positive that arises from a benefit might depend on factors like how long someone has to live. (Imagine e.g. giving someone fishing equipment and instruction on how to fish at age 20 resp. 80 in two alternate realities, and with what likelihood the decision can be justified relative giving the same to someone else from a perspective of “regular” Utilitarianism.)

A fourth, that the value of something might differ between different recipients (even age aside), e.g. in that a master harpist, down on his luck and without a harp, might earn a good living over many years from being given a harp, while a fisherman might be best off selling it for extra pocket money—at a small fraction of the monetary value that the harpist would have gained.

(Other problems are likely to exist. I am basically typing off the top of my head.)

At an extreme, Prioritarianism could justify the daily decision to give a perfectly capable fisherman a fish a day throughout his adult life, simply because he relied upon that fish and deliberately abstained from fishing.

While that example does not rely on a lifetime measure (it would equally work with a series of here-and-now decisions), lifetime measures can lead to other absurdities: Take, e.g., the question of how to distribute fish between that down-on-his-luck master harpist, who is currently starving through lack of work, and a permanent beggar. Well, chances are that the beggar has the worse lifetime sum, so he gets the first fish—and that might not be wrong from other perspectives either, because he just might have been starving too. But it is the same with the second fish, and the third fish, which the beggar cannot even manage to eat here-and-now. (And so on, until there is no more fish.) Well, maybe he can sell the fish for a profit, but how does that compare to the needs of the starving harpist next to him? (Who, given the circumstances, is unlikely to have anything beyond the clothes on his back to offer in exchange for the fish.) In the end, the starving harpist might have to rely on a non-Prioritarian piece of charity from the beggar, which would cast severe doubts on the value of Prioritarianism.

Excursion on money and venality, money and sacrifice, etc.

The aforementioned idea of “I wouldn’t take a million [...]” expresses a strong sentiment of putting something else above money, more commonly and more generally expressed by “Not even for a million dollars!” (or some very similar phrasing).


Side-note:

Such claims are often metaphorical or hyperbolic, and should be taken with a corresponding grain of salt. (And even when the claim is intended more literally, the exact amount of “a million dollars” is usually representative—as it is in the below discussion.)

The general idea discussed below still holds.


However, an implicit assumption here is that the million dollars would be used for one’s own benefit, which is short-sighted. Let us consider two alternate realities: In the first, someone is offered a million dollars to perform some greatly distasteful task and turns it down, because “Not even for a million dollars!”. In the second, the distasteful task is the price for saving the life of a stranger, and the offer is selflessly accepted. But we have already seen that a million dollars might be enough to save a life—so why not, in the first reality, perform the task, take the money, and save that life using the money? (Or two lives, or a hundred lives, or whatever might be relevant. Cf. above.)

A Utilitarian might even argue that we have a moral duty to accept such million-dollar offers and then to use the money for the common good, be it by saving lives or in some other form. (This with reservations for the exact nature of the task. For instance, to gain that million dollars by robbing a charity of the same amount might be contrary to Utilitarian calculations.)

Excursion on willingness to help vs. self-sufficiency

Recurring themes above include that someone requesting help should have given it an attempt of his own first, that some might need or deserve help while others do not, and similar.

An interesting question is how self-sufficiency in the (prospective) helper might correlate with attitudes towards the helpee in such regards (in particular; maybe, in other regards too). Most notably, chances are that someone who has an “I will try it first, myself” attitude to his own problems will be more reluctant to help someone with a lazier attitude than someone who shares that lazy attitude would be—and repeated experiences of “if I try hard enough, I will succeed”, “if I persevere, this too shall pass”, and similar coming true, could go a long way to reduce the tolerance of laziness in others, while repeated experiences of “it was easier than I expected” could go further still. Ditto, repeated experiences of “I asked for and received help, myself, but the helper actually slowed me down”. (Where I, for brevity, take “lazy” and its variations to include e.g. “is afraid to try”, “sees a problem as a networking opportunity”, and similar, on top of the conventional meanings.)

If helper and helpee are sufficiently close in ability and/or the helpee is sufficiently capable of the task (if need be, after having read the manual or whatever might apply), such reluctance is usually a good thing. If they are not, there is a risk of “under-helping”. For instance, a less intelligent helpee might require far greater efforts to solve a problem than the helper expected based on himself—or fail at any level of effort. For instance, a helpee who is a beginner in a field might lack domain knowledge that a more experienced helper takes for granted, and have correspondingly higher hurdles to overcome to solve a problem. The above incident with the young woman who wanted help with her bag might (my memory is too vague to say for certain) be a special case, but it does provide a good illustration of the principle: I am tall enough that I only very rarely have problems with reaching shelves and whatnots that are designed to be reached by unaided humans (as opposed to e.g. humans-on-ladders) and it might not have occurred to me, in the moment, that even just reaching could have been a problem for the young woman.


Side-note:

To boot, various hurdles can be higher today than they were in the past in at least some fields. For instance, a software developer of 2025 will often have more tools, more and more complex APIs, and whatnot to master than one in 2000. Maybe worse, the risk that the next project or the next employer will use a different set of tools/APIs/whatnot is higher.



Side-note:

In the other direction, a helper who underestimates the helpee might offer help too soon or when it is not needed, which can create an impression of undue condescension, put the helpee on the wrong track in terms of personal (and, in the workplace, personnel) development, or otherwise be counterproductive.

(This, especially, in that help can be offered before it is even requested—something best saved for cases when the helpee struggles for so long that a problem in a bigger picture arises, e.g. in that further delays in the success of the helpee would threaten the overall schedule of a team.)


Excursion on misguided and/or premature help

The preceding excursion brings to mind a personal anecdote of misguided and/or premature help gone wrong, and where the (real or apparent) need for help arose largely through new circumstances:

My original college/university studies broadly fell into two phases—my time in Stockholm, Sweden, as a regular student and my time in Darmstadt, Germany, as an exchange student. In Sweden, to my recollection, I had a grand total of one oral examination (not counting visits to the dentist). This consisted of the professor handing me a list of some problems to mull over for, maybe, 20 minutes and an ensuing discussion of said problems (passed with, in U.S. terms, an “A”). In Germany, these were much more frequent, and my first time around consisted of two consecutive tests for math courses held by a professor-to-be respectively the full professor who mentored him.


Side-note:

To what degree the difference in frequency of oral examinations goes back to differences between countries, between universities, between fields, or simply relates to the level of study (my exchange studies were roughly the last third in terms of time), is hard to say for certain. I suspect that it was a mixture of such factors, but my experiences with further studies in Germany, at a different university, make me believe that “countries” were quite important. An implication of that is that the professor-to-be might have expected me to have considerably more experience with oral examinations than I actually did.

As for “professor-to-be”, the differences in system between e.g. Germany and the U.S. and the long time passed make it hard for me to give a more standard term. (For instance, the German “Professor” is, in my impression, typically only applied to what in the U.S. is called a “full professor”.) As is, he had his own courses, but was, to some degree, supervised. (Maybe, the reason why the two examinations came together. This has otherwise been an unusual constellation in my experiences.) In terms of qualifications, I wish to recall that he completed his “Habilitation” during my days in Darmstadt. (The Habilitation is a prerequisite for consideration for full professorship in Germany, and can, in a first approximation, be viewed as a higher doctorate in U.S. terms.)


Events now unfolded very differently from both a written examination and that one Swedish oral examination, in the form of questions/problems being asked/given and answered/solved in a dialogue. The very first question posed by the professor-to-be was something that I could not answer off the top of my head, and I proceeded, still with some optimism, to do what I usually did in such cases—try to derive the answer based on what I did know. (Note how this can be more plausible while in front of pen and paper, with no-one waiting for an answer, and with an easy option of just solving other problems first and returning to the “problematic problem” once the others were done.) As I thought out loud, instead of immediately giving an answer, the professor-to-be offered a hint in another direction, I abandoned my original train of thought to follow his hint, again thought out loud while trying to derive an answer, he offered a second hint in yet another direction, and I changed tacks again, he offered a third hint in now a fourth direction, and I changed tacks again. After some short while of further thinking out loud from me, he interrupted me to move on to the next question.


Side-note:

For the below, note that I do not blame the professor-to-be. Had I been more experienced with oral examinations, I would simply have said something like “I do not know the answer off the top of my head. Could we come back to that question later?” and things would have worked out better—even had I later failed at the postponed attempt at that first problem. (Likewise, had I been more experienced, the situation would have included less nervousness and stress, which would have increased the chance that my attempts were successful, be it at all or in a timely manner. And, no, by no means do I guarantee that I would have found a solution in time without the not very helpful attempts to help.)

There is also a considerable chance that the professor-to-be was not only used to students with more experience of oral examinations but also to students who either had an answer in short order or did not have an answer at all. (And/or that he, himself, was not yet very experienced with being the examiner and had different expectations than he would have five years later.) Few students seem able (or willing?) to even attempt to find an answer based on thinking. For instance, in a later oral examination (years later, for my second master), I was asked about a simple formula. I had not memorized the formula (this time, very deliberately), because it was very easy to derive (and the derivation, more high-school than master level, required no memorization). Consequently, I proceeded to derive the formula—and was met with (in an approximate paraphrase): “No, no, no. You do not have to derive it. Just giving the formula is enough.”—as if I had misunderstood the question or been going for “extra credit”. Apparently, then, students were supposed to blindly memorize the formula. (The odder-to-me as the implications of a formula are usually better understood by derivation than by memorization.)


A few other questions followed and were answered. The second examination, led by the (full) professor, followed and was completed. I was asked to wait outside the room for a few minutes, while the two conferred. After these few minutes, I was given the grades “C”, likely, for the first and “A” for the second course. (In approximate translation to U.S. terms.) An explicit complaint by the professor-to-be was that not only had I failed at the first question, the first question had cost so much time that he had been forced to skip a few other questions that he had wanted to ask. (With the conceivable interpretation that a “B” would have been in the cards, had I been able to answer those unasked questions satisfactorily.)

But here we see that his attempts to help had not been helpful. If he had let me continue on my original road, I would likely have stood a better chance at finding the right answer and, failing that, the point where it made sense to move on to the next question would have been reached the sooner, increasing the amount of time available for further questions.


Side-note:

As a counterpoint, showing that the success of help can depend on both the helpee and the circumstances, I cannot rule out that the same prompts would have helped someone else along—especially, if they were less of a “help the student trying to derive the answer find the right road” and more of a “help the student who has memorized the answer find it” (cf. an earlier mention of Santa’s reindeer).

Indeed, as occurs to me during writing, there is a possibility that this was the reason for his repeated attempts to help me. (But the long time passed makes it impossible to do more than speculate.) He might have expected a memorized answer, seen me attempt a derivation, tried to prompt me with a metaphorical “V” (for “Vixen”), seen me attempt a derivation (if in another direction), tried to prompt me with a metaphorical “female fox”, seen me attempt a derivation, etc. At an extreme, assuming that he expected a memorized answer and was unused to students deriving results, he might not even have realized that I was attempting to do so, and mistaken my thinking out loud for an attempt to dissemble or to gain time.



Side-note:

The above is also an illustration of how good or bad luck can affect who needs, or appears to need, help. Here, I might have been in a much better situation, had the first question been asked last, without any change to the actual questions individually or to my level of mastery. It might even have been that having the two examinations in another order would have helped (I would have been more acquainted with the format and might have been less nervous; the professor-to-be might already have built a more favorable impression of me, which could have made him choose another approach when I could not answer from memory).


Excursion on a reduced willingness to help

Looking at myself and society at large, there is a definite resp. potential long-term trend towards being less willing to help.

For instance, when I was new in Germany (age 22) I encountered more beggars than in Sweden and usually gave some small amount when asked—today, I do not. One reason is a greater awareness of topics like worthiness (does someone beg because he cannot get work or because he is unwilling to work), the risk of rewarding bad habits or financing someone’s alcohol consumption, and use of beggars to collect money for gangs (cf. side-note). Another is that few beggars actually have a legitimate reason to beg in light of the extensive social protections in Germany (for which I already pay through my taxes)—those who do not receive adequate money through such means typically either have deliberately forgone the application or have been thrown out for repeated refusals of conscionable work offers. (Another category might be illegal aliens, but they are rarer in Germany than in the U.S., most beggars that I have so far encountered have both spoken native-level German and looked like natives, and it could be argued as outright unethical to give illegal immigrants such support.) A third is a common lack of gratitude and an apparent view of the potential giver as a mark, which has grown more obvious to me over the years—and which, for that matter, is not limited to beggars.


Side-note:

However, I do not remember the last time that I received a request. Over the last few years, I have occasionally seen someone passively sitting next to a bowl, but the “cold approaches” of yore do not seem to happen. This might be coincidence or relate to where in Germany I have lived at various times; however, it is also conceivable, say, that there simply are fewer beggars today (courtesy of economic growth; maybe, also factors like a reduced willingness to give) or that I am viewed as a less gullible target today than I was back then (with a corresponding reduction in the likelihood that I, as opposed to some 22 y.o., am approached).

Interestingly, I did encounter at least two stories of begging as an outright lucrative endeavor long before I came to Germany, which did not affect my habits, but which can serve as warnings of another type. (But with the disclaimer that they are fiction and might not reflect the realistically possible.) The more significant is the “Sherlock Holmes” story “The Man with the Twisted Lip”, which deals with a journalist who goes undercover as a beggar, finds surprising profits, and decides to lead a double life of beggar at work and loving and well-earning husband at home. (As a result of which he becomes the main suspect in his own alleged murder.) The lesser is a Stephen King story with a similar premise.



Side-note:

A very illustrative potential case of poor attitudes occurred quite early, but at the time I viewed it as an individual asshole and/or as e.g. honest forgetfulness or other mistake: I was approached by a beggar, we spent several minutes (!) talking, and I gave him some coins. A few days later, I ran into him again, gave him a friendly greeting in passing—and was completely ignored. (Presumably, he was “off the clock”.)


Addendum:

After publication, I am struck by the thought that some Leftist might want to turn this into a weird argument about “privilege” or “buying [something or other]”. This might be a misguided fear on my behalf, but just in case:

Such a behavior is rude even absent any prior transfer of money: Even with a (perceived) complete stranger, it is often best to err on the side of returning a greeting (both because the perception might be wrong and because one might meet at a later time). With an actual conversation just a few days back, however, not returning the greeting is a severe faux pas. No, this conversation was not a deep, hour-long, life-altering exchange (far from it)—but it was well beyond the type of brief transaction involved with a normal request for money or, e.g., two sentences exchanged with a cashier in a store.

That money had been transferred makes the matter ruder and, in conjunction with the rudeness, demonstrates a negative mentality.




Side-note:

Whether gangs and beggars are a real issue in Germany, I do not know. At a minimum, I have heard claims of individual families (likely, Roma or similar) that send out their children to beg for the family or to sweep through a train with begging (and, maybe, some pickpocketing), and it does not hurt to be aware of the possibility.

In some other countries, however, this is (or was) not just an issue but potentially a source of horrifying scenarios. For instance, I once read a book (“Jæger”, Thomas Rathsack) on the experiences of a former Danish “special ops” (or whatnot) soldier, which included the encounter with a crippled old man in a wheelchair, on a road miles from civilization, in Afghanistan. The explanation given by an expert on the area was that such scenes go back to an utterly barbaric practice of deliberately crippling children and using them for commercialized begging, until such a point that they no longer bring profit. (By implication, this man had gone from childhood to, at least the impression of, old age in such a state. While I cannot vouch for the correctness of the book, I have heard similar claims from other sources.)

In a next step, such scenarios put the potential giver-of-alms in a harsh moral dilemma (if he is aware of the trick and does not just naively give out of sympathy): If he does not give, it might mean the death of the old man, because he is no longer profitable. If he does give, it not only rewards these inhuman criminals but also increases the risk that the scheme will be continued with the crippling of more of the children of today.


Looking at gratitude (etc.) more generally: Firstly, an “is a mark” mentality is very common whenever money is involved, including (off topic for this page) various B2C relationships. Secondly, help/favors/gifts/whatnot (I will stick with “help”) in general seem more likely to create an expectation of further help rather than gratitude and reciprocal acts. (A problem of which I am not innocent, myself. At least in younger years and at least towards parents and other adult relatives, I tended to take help for granted and not show sufficient gratitude and reciprocity.)

Some women, in particular, seem to have an entitlement attitude towards men, in that anything hard to do should be handed over to the nearest man, who, in turn, is supposed to be happy to help.


Side-note:

Both the issue of me vs. my parents and women vs. men might go back to an ease of getting help. In my case, there were instances when a task needed doing and my mother “hi-jacked” the task before I even had time to consider my options—let alone ask for help. (In some cases, notably with cleaning, before I even was aware that there was a task to perform. Only as an adult, with a home of my own, have I understood how much work my mother did outside my then awareness.)

Many women might similarly have found that if they ask a man for help, he is likely to help, making the asking all the more attractive in the future.

A confounding factor with women, however, is that some of them might use dishonest requests for help to, say, attempt a romantic contact. How common this is in real life is hard to tell without mind reading, but it certainly is common in fiction, and similar excuse-making, more generally, is somewhat common even in real life.


Other factors that limit my own willingness to help are to some degree discussed in other parts of this text, including that I, unlike at 22, want to see someone give a problem a solid own try before asking for help—the more so, as that solid own try could remove the need for help. (And if it comes to help, I might be less likely to actually perform the work and more likely to explain how to perform the work. Depending on the nature of the problem, I might even limit myself to e.g. explaining how to find out how to perform the work.)

Looking at society at large, chances are that there is a strong connection with a diminished familiarity, e.g. in that two somewhat random villagers (from the same village, of course) might have known each other for decades, while two random big-city folks (even if from the same city) might never have met before and might never meet again. Such familiarity brings not only a greater chance of natural sympathies but also a security in that each villager has a reasonable idea of who is or is not worthy of help, is or is not likely to reciprocate if the tables are reversed at a later time, and similar, which allows for a more informed decision of whether to help. This is likely compounded by the fact that the villagers know that e.g. a failure to help and/or to reciprocate might be noted by other villagers, which can increase the likelihood of help resp. reciprocation (while big-city folks have no such pressure)—and if the big-city folks never meet again, any help rendered will remain one-sided.

However, a major factor might be the increasing move towards high taxes and large handouts. It is, for instance, the easier to give larger amounts of money to charity the less of one’s earnings are taken by the government (and note that this is not limited to income tax, but also includes e.g. employer fees, VAT, and, more indirectly, inflation). In another direction, why should someone give to e.g. “the poor” when “the poor” (a) have an obesity problem and (b) already receive considerable governmental help—paid for by these same taxes? (Also note an example around beggars above.) Then we have the issue of gratitude vs. entitlement again: Giving is easier when gratitude follows, but when the government goes in between, any gratitude goes to the government and/or the politicians who have pushed for higher taxes and larger handouts, rather than to those who actually pay. At the same time, a stream of government handouts in combination with Leftist propaganda leads to an entitlement attitude. Indeed, it is quite common that those being fed on someone else’s dime complain that they are not being given enough, that those whose money is stolen for financing should have even more money stolen, and/or that the latter would be greedy bastards because they do not voluntarily give more of their disposable income on top of what has already been stolen.


Side-note:

Off topic, government interventions usually come with a number of other problems, including inefficiencies, poor incentives, and a detachment of help from worthiness. As to the last: Government aid might or might not come with an adequate check for need (e.g. in that someone has no money), but such checks are notoriously poor at stopping those who live off the government as a deliberate strategy, those who first receive aid for a legitimate reason but are too lazy to move off aid, and similar. In Germany, e.g., calls for a lesser right for “ALG II” recipients to turn down job offers are usually met with loud shrieking from both the Left and said recipients.

Likewise, it rarely considers whether someone is in a certain situation through bad luck or through wasteful and irresponsible living. On the contrary, it can outright encourage wasteful and irresponsible living, because the prudent and the imprudent might ultimately end up with the same through government aid, while the imprudent, unlike the prudent, has had the benefit of his wasteful spending on quality of life, entertainment, things bought, whatnot, before his money ran out.

Even the check for need is often inadequate. A particular problem is that more money might be awarded than is reasonably needed, because the point is no longer to keep someone fed, clothed, and housed, but to guarantee a comparatively high standard of living. (Thereby, of course, further reducing incentives to e.g. cut unnecessary costs and put more efforts into finding a job.)



Side-note:

It is also conceivable that charity is trickier today than in the past for other reasons, including that charitable causes are often run by organizations with one or several layers between giver and receiver, which removes both the gratitude angle and the can-judge-who-is-worthy angle. (To boot, layers that swallow much of the money on the way and introduce other government-style problems.)


Tipping is an interesting example, especially with an eye at the U.S. and the years after the COVID-countermeasure era:

I have repeatedly seen complaints, including in online newspapers, of excessive tipping demands, while (a) prices in e.g. restaurants have sky-rocketed and (b) many customers are on a tighter budget than before. If the willingness to tip is reduced, this has natural and legitimate reasons. Not to forget, if someone does tip the apparent default of 20 percent (!) to the cashiers (!) in self-service (!) restaurants, this implies that there is less money left for the waitresses in sit-down restaurants. (For my part, I have always tipped well by German standards, but, these days, am so rarely in an establishment that expects tipping that I have little to add from own experiences.)

Nevertheless, there seem to be strong attempts to guilt customers into tipping, e.g. by pushing an “If you don’t tip, the poor staff does not have a ‘living wage’!!!” angle. Firstly, this violates the idea behind tipping, that those who provide good service and otherwise do a good job receive tips and those who do not perform do not receive tips. (If typically with a gradation rather than a yes/no division.) This both in that even poor performance, rudeness, and similar are no longer supposed to be a valid reason not to tip well (absurd!) and that those who do not have a tip-worthy job (e.g. because they just ring up an order and take payment, while the customer does all the work) are now supposed to be tipped regardless (even more absurd!). Secondly, if someone earns too little, this is not the customer’s problem—it is, and must be, a matter between employer and employee. Ask for a raise, go on strike, quit and take another job—or be content with what one has. Those are legitimate options. Trying to guilt a customer is not a legitimate option. (Ditto, m.m., e.g. politicians and unions who play the guilt angle, which might actually be more common than with the staff.) Thirdly, the idea of a “living wage” is a crock of shit—payment should follow market forces and value provided, not a need or wish detached from value; not everyone has the same living requirements; and not everyone actually works any given job for a full “living”, say, because the job was taken to earn a side-income.


Side-note:

As far as I am concerned, those who bring “living wage” (pseudo-)arguments cannot and must not be taken seriously and should be viewed as the idiots and/or cheap propagandists that they are.


Another interesting example family consists of cases of help that will not be reciprocated through different life choices, variations in customs over time and geography, bad luck, or similar. (As opposed to help that is not reciprocated because the recipient is unwilling to reciprocate.) For instance, in my early years in the workforce there were occasional collections for some colleague or other who was getting married. I gave some money (likely, even, an over-average amount), without thinking too deeply on the matter—“it’s nice to be nice” and someday it will be my turn, which will make things even out. However:


Side-note:

As with beggars, I cannot recall the last time such a collection took place. It might be coincidence, it might be a change in usual office habits over time, it might be a geographical variation in German customs, it might be a side-effect of more recent colleagues having been older, or some other explanation yet.

I would like to think, however, that others have simply come to the realization that such collections are rarely a good idea in today’s Germany.

Keep in mind that these collections were a somewhat blanket affair and that other gifts might have been given in parallel and on a more individual and personal basis, including when someone was a wedding guest on top of being a colleague.


  1. The idea of “someday it will be my turn” is simplistic:

    By now, I have hit 50, am still not married, and pretty much assume that I never will marry—as with many others. If in doubt, these collections were always for marriage, never for e.g. a moving-in-together.

    If I had gotten married, it might well have been during one of my lengthy sabbaticals, implying that there would have been no colleagues to mooch off. Others are less likely to have sabbaticals, true, but there are many who have had times of unemployment or other absences from the workforce, with the same effect. Others yet, these days, might marry while in retirement (be it through old age or e.g. successful investments).

    Likewise, it might have been during a period of freelance work, implying another (and often shorter) relationship to the colleagues in the current office than with a regular employment. Should I then pester these temporary colleagues with requests for money? Hardly. Unlike sabbatical takers, freelancers are quite common in Germany—and then there are those who go into business for themselves in other forms.


    Side-note:

    I do not remember the exact modalities around these collections, but chances are that they are/were not initiated by the groom/bride to be but by some close colleague, HR, or similar. The same problem applies, however, in that the likelihood that someone starts a collection is naturally smaller for a freelancer with a potentially very temporary presence. (My longest relationship with an employer as a freelancer might have stretched to something like seven years, but this was in three (?) separate and non-contiguous engagements totalling, maybe, four years—and there had been a considerable turnover among the regular employees during those seven years.)


Then we have the issue of size of employer: The employer that I associate most strongly with such collections began at (maybe) 30-something employees at my entry and exploded upwards to 300-something during the IT bubble at the turn of the millennium. Say, for the sake of easy numbers, that we have 30 resp. 300 colleagues who actually contribute, and that everyone gives 5 Euro. (These 5 Euro might not be realistic, but the amount matches the smallest current bill, making it a likely lower limit for individual contributions in a collection today. To boot, scaling for other average amounts is easy.) This is then 150 resp. 1500 Euro, depending just on “when”. I left for another business which had maybe two or three dozen employees at my location (more at other locations)—and would then potentially have dropped from 1500 back to 150 Euro. Then the IT bubble burst, and we were down to a dozen or so—for, maybe, 60 Euro. Later employers have varied in size, but chances are that I would never have had a shot at money from 300 colleagues again, at least after factoring in limitations to my respective location and/or department. (But, in all fairness, my own outlays over the sum of all collections are also well short of 1500 Euro, let alone what might have been the case with a larger average donation per colleague.)

    Then the willingness of the colleagues at hand to pay: Depending on factors like age and qualification level of colleagues, exact business field, the current state of the economy, whatnot, different levels of contributions are possible—and someone not very popular might receive less than someone very popular even when the colleagues have the same “purchasing power”.

    Oh, and if collections have fallen out of fashion, if I am in the wrong part of Germany, whatnot (cf. above side-note), I am screwed anyway.

  2. Even if I had received money back, it would likely not have been from the same colleagues. Yes, we could see it as a pay-it-forward or -backward system (with the major reservations resulting from the previous item), but in older days it would largely have been the same colleagues, with differences resulting as with the above contrast between villagers and city folks.

    As for pay-it-forward systems, I have great doubts that they will work well outside the trivial (and, maybe, original) setting of paying someone else’s meal in advance and other similarly trivial cases. The risks of cheating and other unfairness (cf. e.g. the previous item) are simply too large.

    But, worse, pay-it-forward only truly applies to those already married, who have received a benefit in the past and are now passing it on in the hope that the recipient will do the same at a later date. For the unmarried, it is more of a pay-it-backward system, where they give in the now in the hope of, themselves, receiving in the future, in something resembling a pyramid scheme or, with an eye at the accumulating own contributions, something halfway between a pyramid scheme and a lottery.

  3. Looking more at previous topics:

The idea behind such collections is that the young couple will have a slew of early expenses and can use all the help that it can get—and this was often a very valid point in a sufficiently distant past. Today or back around the turn of the millennium? Even that the couple is actually young is far from given... More to the point:

    There is more wealth around today, implying that the couple is likely to have more money than past couples on average.

    A couple of today is less likely to move directly from the respective parents and more likely to join two existing single households—or to just formalize a prior cohabitation. (Consider e.g. needing or not needing furniture and whatnot.)

In as far as new purchases are needed, the cost-conscious can get by on IKEA furniture and whatnot for proportionally less or far less than what their grandparents paid for equivalent items.

    Etc.


    Side-note:

Other aspects of life could be worse than in the past, however. House prices in attractive areas, e.g., can be very high, but the typical first step for newlyweds is a rented apartment—not a purchased house.

    From that point of view, a “we want to buy a house” collection might make more sense than a “we want to get married” collection, while being absurd in a more obvious manner.


From another angle, we have various types of government support (e.g. tax breaks), which not only lower the need for help even further, but now actually cause an unfairness towards others, who are supposed to pay a portion of the newlyweds’ life over the tax bill and also give in the collection. (And chances are that the effect over the tax bill is much, much larger on average.) Other helpers include banks that are far more likely to give a loan to, say, a modern two-income family than one a few generations back—if that family even had two incomes. (And two incomes do not just mean more money relative to the past, if likely well short of a factor of two, but also that there are likely to be two collections instead of one. Well, unless they both work for the same employer, which could raise interesting issues on how to handle that matter.)

From yet another, considering some modern wedding nonsense: Would the money collected actually go to help the newlyweds overcome an early obstacle in their married life—or would it be used to finance some nice-to-have for the wedding that the couple could have done without? (In the latter case, turning the collection into a gift for a florist, a wedding planner, a caterer, whatnot.)
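For what it is worth, the scaling in the first item above can be captured in a few lines. The sketch below is merely illustrative: the function name is made up, and the 5-Euro contribution and the headcounts are the hypothetical figures from the text, not real data.

```python
# A toy sketch of the collection arithmetic from the first item above.
# Contribution and headcounts are the hypothetical figures from the text.

def collection_total(colleagues: int, contribution: float = 5.0) -> float:
    """Total collected, assuming every colleague gives the same amount."""
    return colleagues * contribution

for n in (30, 300, 12):
    print(f"{n:>3} contributing colleagues -> {collection_total(n):.0f} Euro")
# -> 150, 1500, and 60 Euro, matching the figures in the text
```

The point of the exercise is simply that the payout depends linearly on headcount, so the "when" of the marriage (boom vs. bust, large vs. small employer) can easily swing the result by a factor of ten.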

Whether and how much I will give, should such a collection come my way again, I do not know—but I do know that I will view the collection with far greater scepticism than in my youth. It seems very plausible that it is the same with others. (I might even choose to turn the collection down with a link to this page...)

Excursion on the 59 (Resident Alien)

Having caught up with the last season of “Resident Alien”, I am struck by how “the 59” exemplify much idiocy around help and attitudes to help:

As established early in the series, “the 59” were a group of 59 miners who (many, many years before the events of the series) had gone to the rescue of a single fellow miner, trapped through a partial collapse of the mine—and had all perished without achieving their goal. Even in the now of the series, however, the locals (or some of them, at any rate) celebrated this event.

This is all bad enough, but the last season has a flashback depicting the failed rescue. It is now revealed that the disastrous outcome was brought about by the sheer number of miners—they were too many and the weakened mine could not take their weight. To boot, this appears to have been somewhat predictable, as the trapped miner immediately warned them that they were too many, when he found out that more than just a few rescuers had come. Death and destruction came because of incompetence in rendering help.


Side-note:

While I cannot speak to the realism of the number of rescuers vs. the increased risk of collapse, it is clear, at a minimum, that it was foolhardy in the extreme to move in with many more helpers than needed, in light of the risk of further collapse—even should this risk not have been increased beyond its original level.


Excursion on pseudo-helpful features and human mentality

For the better part, I have found that when various software, websites, whatnot, try to be actively helpful, they usually do more harm than good. Likewise that, even absent active attempts to be helpful, many features do more harm than good.

Two interesting points:

  1. Whether the contrast between my poor experiences and the great number of such unhelpful “help”/features/whatnot might partially go back to a difference in factors like degree of self-sufficiency, need or non-need to be led by the hand, relative willingness to think for oneself, and similar, between me and the majority of the users/customers (or, worse, product managers and other decision makers).

    If so, this might tell us something about the dysfunctional societies of today and the reliance on government help, the aversion to take own responsibility, etc., that plague them.


    Side-note:

    However, other explanations are very likely to exist, including that more features can allow for product descriptions more likely to fool the unwary into a poor purchase and that many naive businesses equate usability with “my grandmother could use it”, while failing to consider more advanced users entirely.


  2. Whether the presence of such unhelpful “help”/features/whatnot could worsen these mentalities, e.g. by furthering a “Don’t make me think!” attitude or by creating an expectation that this-or-that should just be handed over on a silver platter.

    (A similar issue is often cited with AI, e.g. in that modern students might become too reliant on AI and too unwilling to and incapable of doing work on their own. AI, however, is off topic unless some software tries to shove it down the throat of an uninterested user.)

An easily understood example is various forms of autocompletion and spelling correctors, e.g. for the input functions of a smartphone, which give so many errors that even the average user seems to see them as more harmful than beneficial (and I almost always deactivate them). If in doubt, the potential consequences of such errors do not just include more work (to correct errors) than is saved (when the smartphone/whatnot does things correctly) but also the risk of horrible misunderstandings (e.g. through an unfortunate text message) and other consequences. At the same time, any potential gains depend strongly on the proficiency of the user in terms of e.g. typing speed and own ability to spell correctly, implying that the advantages could be larger than the disadvantages for the very weak, even while doing more harm than good for even the middle of the pack—and even while foisted upon everyone, regardless of proficiency.


Side-note:

However, some features are so extraordinarily idiotic that it is hard to find any justification at all, and one might speculate that someone had an idea (which, in itself, might or might not have been beneficial) and that someone else misimplemented that idea in an inexcusable manner.

A good example, and a simultaneous illustration that the problems are not limited to software, is a CD changer that I bought around 2000: There were three CD holders covered by a single glass protector. This glass protector could easily have been moved up and down (to allow access to the CD holders resp. to restore protection) by hand. In fact, this is the obvious and almost obviously ideal solution. Instead, the glass protector was moved by a slow motor, implying that the user had to press a button (or some such) to activate the motor, which then slowly moved the glass protector, leading to a disproportionate wait to perform the trivial step of removing a CD and replacing it with another. Worse, once the protector reached the fully opened position, the motor immediately reversed and began to restore the protector—making it impossible to exchange more than one CD at a time. Indeed, even exchanging a single CD in one step was tricky, even with the slowness of the motor, because the original CD could not be removed until the glass protector had reached somewhere well beyond the midway mark of the CD and the new CD could not be put in once the same point had been reached again on the way back. (The user effectively had to stand ready with the new CD already in the one hand, while he removed the original CD with the other.)

This completely unnecessary motorized solution ruined both the usability and the usefulness of the CD changer, turning a potentially loved helper, in the days before MP3 players and whatnots, into an object of extreme frustration. (I ultimately threw it away when changing apartments.) To boot, it did so while adding considerably to the manufacturing costs and, therefore, price and while considerably increasing the likelihood of a later malfunction through adding more parts that could break.


Excursion on self-gratifying help

A common problem with help is that help is not rendered because it is, in some sense, “deserved”, necessarily needed, generally beneficial, or similar—but because giving help or seeing a certain reaction (especially, in a child) might make the helper feel good. (This likely has some overlap with the “Rule of Rescue”, cf. above, as well as with some other sub-topics.)

Such help is not automatically a bad thing, but (a) it pays to be aware of the true reason for the help, and (b) problems of various types can ensue or be worsened, unless care is taken. For instance, it can be that a child who displays gratitude more overtly after a gift is more likely than a less overt, but equally grateful, child to receive future gifts. (Including cases when the overt gratitude is mostly faked or otherwise calculated.) For instance, a man might gain a feeling of strength or skillfulness from performing some act for a woman that the woman could have performed herself, had she bothered to try. (Ditto, some constellations of adults and children.) For instance, someone with a poor conscience might try to soothe it by giving to the first charity that comes along, putting money in the wrong pockets (be it the charity’s management, some marketing firm, or an African dictator; or be it because the cause of the charity is something evil dolled up to seem good).

Here, it is better to give help on a more reasoned basis than an implicit or explicit “helping makes me feel good”.


Side-note:

An interesting special case is those who help with a calculated eye at being helped in return. Whether this is a good or a bad thing will depend on the circumstances. There is e.g. nothing wrong with helping with a barn raising with an eye at receiving future help when one has an own barn to raise (let alone, with an eye at help already received)—in fact, such cooperation can be beneficial for everyone involved, while harming no outsiders. (See [3] for some further discussion of barn raising.) However, other cases can be problematic, as when the employee of one business uses some of his employer’s resources to give an employee at another business a personal benefit in the hope that the favor will be returned.



Side-note:

Concerning gratitude, I note that there is a difference between doing good deeds for the purpose of receiving gratitude and merely expecting gratitude for good deeds done anyway. (Cf. some of my own objections to lack of gratitude above.) In particular, a lack of gratitude raises serious questions as to whether someone was worthy of help to begin with, which certainly is a legitimate reason to reduce future help. (While feeling good because of a great overt display is not a legitimate reason to increase help.)

Likewise, if it is found that giving help creates an expectation of help, instead of a willingness to reciprocate, this can be a legitimate reason to be cautious about helping and/or to ensure that there are strings attached. (Such strings can be of a very varied nature, but making reciprocal help explicit is often a natural solution, e.g. in that mutual barn-raising is assured or that a “I will drive you to the airport on Monday” is coupled with a “if you babysit for me on Friday”.)


Excursion on asking for help in the wrong way

As is clear from the above, it is often the case that someone who asks for help is more likely to receive it than someone who does not ask. It might even be that someone who is more intrusive or insistent in asking for help does better. However, moving beyond a certain point is likely to do more harm than good—and what that point is might depend strongly on the intended helper.

For instance, I tend to react very negatively to websites that request donations—because of the manner of asking. I have no objections to, say, a link labelled “donations” or a brief message at the end of a text that “We provide our website free of charge, but we still have expenses. A small donation can help us keep the website running.”. However, when it comes to e.g. a prominent “Donate now!”, I am virtually guaranteed to not donate, because this is rude and intrusive and because the use of an imperative is unethical. If I ever visited Wikipedia with JavaScript activated, the result was a big moving block that occupied half the page and screamed for donations—removing any and all wish that I might have had to ever give a dime to Wikipedia. (To boot, donations to Wikipedia appear to go less towards running and improving Wikipedia and more to finance Leftist causes. Correspondingly, no one should ever donate to Wikipedia regardless of how intrusive the requests are.) In the case of the “Daily Sceptic”, intrusive donation demands were a partial reason for why I gave up on the site.


Side-note:

As of the time of writing, 2025-10-28, I have never asked for a donation relating to any of my writings or web presences, I have never done advertising, and I have never charged fees. Moreover, in the early days of the Internet, this applied to the vast majority of websites and whatnots: Writers wrote and published because they enjoyed it, because they wanted to inform, because they had something to share, whatnot.

While everyone is entitled to do as he sees fit, there is no true reason for why this should be different today for most sites. (Exceptions include those that provide content in a naturally commercial manner, e.g. streaming services. They do not include the likes of Facebook, Wordpress, and the Stack-Exchange network, which do more to harm than to improve the Internet.)

In particular, there are few sites that have legitimate concerns around traffic volume, because most sites see most of the volume arise through gross waste, including HTML/CSS/JavaScript pages that take up far more space than is warranted to render a particular content, unhelpful and overly large images, and advertising (in as far as provided over the site itself).


Likewise, to continue with donations, it is one thing to ask for a donation and another to require that some specific type of payment should be chosen. At an extreme, I have seen requests for donations in some obscure digital currency or over some obscure digital-currency platform. Well, that is fine and dandy for those who already have a supply of that digital currency and an account at that platform. For the rest of the world, the effort needed to give a donation might amount to many, many times the value of the donation—and potentially involve other monetary costs, security risks, and whatnots. Even more mainstream means of payment are problematic, e.g. in that I would not open a PayPal account just to make a donation to some website and that checks are quite rare in many countries (I, living in Sweden and Germany, have never in my life owned a checkbook and have, in turn, received just a few checks in my entire life).


Side-note:

With such more mainstream methods, the costs and efforts of the intended recipient must not be forgotten. If, say, providing one or two means of payment allows 90 percent of the potential donors to donate, it might not be worth the trouble to go after the remaining 10 percent. Certainly, it will often be unreasonable to expect someone to accept payments in more than one currency (specifically, the one “local” to the someone). However, even here, the recipient must be aware that he is the limiting factor for many potential donors. In particular, he has no right to complain when someone in that 10 percent does not donate. (In as far as anyone has a right to complain over a lack of voluntary contributions in the first place.)

More obscure methods might instead exclude more or much more than 90 percent, and the intended recipient truly has only himself to blame.


Throwing a wider net, those who want help might be well advised to “help others help them”—in particular, by accepting help in whatever form it is given and by adapting to the help and the helper. (Within the limits of what is compatible with the underlying purpose.) This begins, of course, with requesting help in a non-limiting manner. Consider e.g. a variation of the collections for newlyweds (or soon-to-be-marrieds) above, where the newlyweds themselves ask for help: A request specifically and exclusively for money is very limiting (and not necessarily in the best interest of the couple). What if someone is considering buying a new dishwasher and is willing to donate the still functioning old one in lieu of money? What if someone has an old gift card that he might never use himself? What if someone offers to physically help with a move instead? Etc.


Side-note:

While there might be legitimate reasons to reject the offer of an old dishwasher, e.g. that the couple already has one, the mere fact that it is second-hand is not among them. If someone is so well off that he can frown upon a second-hand dishwasher, merely based on the “second-hand”, he is also so well off that he should not ask for this type of donation.


Other ways of reducing the chance of help when asking (with me; as above, the situation might be different with others) include being impolite or presumptuous, failing to actually ask (while hinting and hoping for a pseudo-spontaneous offer of help), attempting to butter me up or to flirt with me before asking (common with women), giving an otherwise manipulative vibe, and being annoying. In fact, portions of the discussion of websites and donations go back to exactly such themes—being impolite, presumptuous, and/or annoying.


Side-note:

Some ways of increasing (as opposed to decreasing or not decreasing) the chance of help include explaining why the help is needed, (truthfully) making clear that own prior attempts have taken place, and offering some type of compensation or tit-for-tat. (Also see other parts of this page.)


Excursion on reactions when help is removed

Reading the second book in the “The Eminence in Shadow” series, I encountered a scene from a non-human (and by human standards inhumane) society, where an extremely young girl, the character later to become Delta, is suddenly no longer given food—and reacts by, with success, going out in the forest to hunt. (Apparently, this was possible for her species and/or for her personally. A human child would have been in severe trouble.)

With an eye at this page, we can now ask many questions (e.g. when it is reasonable to remove such basic help from whom, whether the help was needed or mostly served to hold the helped back, and similar), but a particularly interesting family of questions revolves around the reactions of the previously helped, why these reactions arise, how reasonable or unreasonable the reactions are, etc. For instance, we might imagine a (considerably older) human child left to fend for herself, for whatever reason, and consider outcomes like a successful own “fending” (e.g. through hunting, gathering, employment, whatnot), starvation, begging from third parties, anger and protests, pleading for a resumption of help, and similar. (Note that several reactions can apply to the same case, e.g. in that the child first pleads, then takes up unsuccessful-because-unskilled hunting while alternately going hungry and begging, then hunts successfully as experience improves skills.) Similarly, we might imagine an adult human who used to receive government handouts and suddenly no longer does (without the situation having changed in other regards—as opposed to e.g. someone who no longer receives unemployment benefits after having entered employment).


Side-note:

Begging from third parties is, of course, also a type of help, but a type sufficiently different from long-term parental help that it can be put in a separate category for the current purposes.

Similarly, note the difference between e.g. parental help that is long-term and extensive vs. such help that follows on a once-in-a-blue-moon basis. With an eye at the below, it might e.g. be that someone claims that a 25 y.o. should largely stand on her own two legs but does not raise objections to even extensive parental help in an emergency or a one-off situation. Contrast e.g., for that 25 y.o., a continued allowance vs. help to cover an expensive medical treatment or an interest-free loan to buy a car. Ditto extensive monetary help vs. lesser and non-monetary help, as with an allowance for food costs vs. the occasional home-cooked meal.


I will not attempt to answer such questions, but I offer them as food for thought, including what might apply between parents and children at different ages (e.g. at 5, 15, 25, ...), between parents and children that are or are not handicapped, between the government and citizens in different circumstances, and other constellations.

In a next step, it can be interesting to e.g. consider how two previously helped persons with different attitudes might view each other and the other’s reactions. (Possibly, including cases like a parent who got by without help from others at 25 lacking enthusiasm over a child at 25 who still relies on her parents.)