Michael Eriksson
A Swede in Germany

One head is better than eight legs

Introduction

Original (2012)

Two principles that I have observed again and again are beautifully illustrated by a recent event in a project:

  1. A little extra headwork can save a lot of legwork.

  2. Two (or more) heads are not always better than one. In particular, the more heads present, the less likely it is that the best head (or, in a larger group, the best few heads) is given enough room; and the more heads present, the harder it becomes to work efficiently.

In conjunction, these lead me to the title of this page. (Also see excursion.)


Side-note:

The second item should not be seen as a rejection of team-work, which can be beneficial by combining skills that are divided over several individuals, or necessary to get something done in time. (The latter illustrating the difference between being effective and being efficient.)

Instead, it points to dangers with e.g. poorly organized work or teams where a hierarchy of ability is not reflected in the team hierarchy (be it formal, informal, or coincidental; here, I was initially “lower” in the “hierarchy” through not being involved from the beginning, not being formally assigned to the task, and largely not being the one sitting at the keyboard—a role mostly occupied by the owner of the computer we worked at). See also the discussion of “group work” below.


2024 remarks

This is one of many texts written in 2012 but only published beginning in 2023. Except for some minor changes (mostly, language), the odd addendum, and an excursion on the title, the below text corresponds to the unpublished 2012 version.

The years in between have not brought a second example of similar illustrative value, but the same issue has often manifested on a lesser scale. In particular, solving problems by standing around a monitor is only rarely a good approach. (It can, however, be a hard-to-avoid approach, as factors like personal curiosity and a wish to participate can overcome the knowledge that one does not actually contribute in a productive manner.) It is usually better to get rid of the bystanders, to limit them to those actually needed, or to let them work independently on the problem at their own computers. The same applies, m.m., to many problems outside software development.


Side-note:

A good example of “those actually needed” is a single domain expert, say, a product manager whom the developer doing the computer work can consult for knowledge that he does not have himself, e.g. on the business implications of this-or-that decision, the meaning of some data, what data might or might not have been delivered incorrectly from a third party, or similar.

Another example is when a certain problem cuts across areas of responsibility and one representative from each area is needed.


Coincidentally, I wrote a new/2024 text on collective decision-making ([1]) just two weeks before taking up the current text—an unfortunate timing, as the former could have been improved by drawing on the latter. (With the sheer number of texts written over the years, and the number of years, I do not always remember what texts exist.)

While that new text is not the (probably!) still unwritten text on “grupparbete” mentioned below, it does cover some of the intended ground. This, in particular, in an excursion on group work.

The event

The project (or, rather, sub-project of the behemoth project) stood before a large set of corrections of the production database, to be made by means of SQL scripts, to compensate for user errors and incorrect data delivered from external databases. During work on these corrections, it had become clear that at least some of them overlapped in terms of the database rows altered. With the changes distributed over roughly twenty scripts, equally many “problem tickets”, and even more “incident tickets”, it was a less than trivial problem to check for inconsistent changes. (In particular, as the query criteria varied from case to case, using either specific ids or general criteria, e.g. some combination of state flags.)

The original plan, before my involvement, was to run the scripts on a copy of the production data, and to dynamically log the id of the entry altered, the table, and the name of the script, using a set of ad-hoc database triggers. (A “reference system”, which was updated to match production data once a week, existed.) In this way, it was believed, the expected handful of duplicates could be easily identified and checked manually, making a deeper manual investigation of all the various scripts redundant.
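
By way of illustration, such trigger-based logging might have looked roughly as follows; a minimal sketch in Oracle syntax, where the table, column, and trigger names are hypothetical and the use of SYS_CONTEXT to identify the script is a guess on my part rather than a recorded detail.

  -- Ad-hoc log table: which row, in which table, was altered by which script.
  CREATE TABLE correction_log (
    row_id      NUMBER,
    table_name  VARCHAR2(30),
    script_name VARCHAR2(100)
  );

  -- One such trigger per affected table.
  CREATE OR REPLACE TRIGGER trg_log_some_table
  AFTER INSERT OR UPDATE OR DELETE ON some_table
  FOR EACH ROW
  BEGIN
    INSERT INTO correction_log (row_id, table_name, script_name)
    VALUES (COALESCE(:NEW.id, :OLD.id),          -- id of the altered entry
            'SOME_TABLE',
            SYS_CONTEXT('USERENV', 'MODULE'));   -- crude identification of the caller
  END;
  /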


Addendum:

While having such a reference system is a good idea (and running various scripts there first is a very good idea), some caution is needed, as there might be critical deviations between the reference and production systems, e.g. because of changes to the production system since the last update and necessary anonymization on the reference system—not to mention the effects of testing more than one script or having ongoing testing of user interfaces that happen to change data on the reference system. Depending on how the update is made, there might also be problems with differences in identifiers or various static data.

By 2024, I do not remember how the above reference system was updated, but it might well have been by a complete copy, which is the ideal. Many projects use lazier versions, however. In one later project, the closest equivalent was a “staging system”, where data was only copied upon request and in the minimal amount needed to, e.g., test a script. The results were highly unreliable tests and issues like SQL statements that seemed fast enough, but were not, because they ran on much smaller tables on the staging system than on the production system—the one-time investment to implement a better approach would have paid off manifold.

The number of different systems and roles of systems can also vary depending on local needs, size of budget, stringency in approach, whatnot.


Then came the day of the test run: Three persons (the two main writers of scripts and the man responsible for coordinating work on tickets and bug reports) set about evaluating the logs. Matters soon turned unruly: There were in excess of seven hundred (!) duplicates, several occurring more than twice. After possibly twenty minutes of their getting nowhere, I joined them, having nothing better to do at the time. With four persons at the same computer (and with my being entirely unfamiliar with the logged information up till then), I was not able to direct the team to work more effectively.

Indeed, there was talk of checking each and every duplicate manually, which one of the script writers estimated at half-an-hour each (investigating the DB, comparing the scripts involved, digging up the right tickets, and checking for what was actually wanted; however, her estimate could have been on the pessimistic side) based on her original work. We now faced an estimated 350–400 (!) man-hours, divisible over a maximum of seven persons. There was no chance of completing this check before the scheduled production run just a few days later. (Checking only a representative subset might have been a way out; however, this required that no errors were found and was risky, because the tolerance for new errors in this particular project situation was very low—with end-users who non-negotiably needed to complete certain tasks before our next time-slot for DB-corrections some four weeks later.)


Addendum:

I do not remember the details of this scheduling, but it was likely based on the presumption that users could interact with the system at wildly varying times of day (which increased the risk of interference) in combination with a wish to minimize formal maintenance windows. While such reasoning can be justified, such long intervals between DB-corrections can be a problem in their own right and likely contributed considerably to the collisions discussed here. Moreover, they delay the work of the users, as work cannot proceed as intended before the correction actually takes place.

I would recommend keeping any such fix intervals as short as conscionable. If in doubt, more frequent windows make the work in each window less likely to cause interference.

If needed, times outside regular working hours should be found, be it through additional payments or through shaking off a “Monday to Friday, nine to five” mentality.

As an aside, during my first few years as a software developer, everyone seemed to have an attitude of doing things when they needed to be done, including evening and weekend installs/updates/whatnot. In my most recent experiences, I have been met by an attitude of astonishment that someone would even suggest an install outside regular working hours—and this measured at the presumed end of the install, not the beginning. (And involuntary limitations, e.g. that building security clears everyone out no later than eight, are somewhat common.)


I went back to my own computer and started to look at the log table on my own. (Partly to be able to try a few things without being held back by the committee; partly because I, correctly, suspected that we should gain a deeper understanding first and act later—which was not happening in the committee.) It soon became obvious that some scripts had altered several rows on repeated occasions (in a legitimate manner, as later inspections of the scripts showed) and that there were a few scripts that overlapped far more often than others. I excluded the apparent (from brief visual inspection) “top offender” from the query, executed it again, and repeated. Within five executions (and five offending scripts), I had reduced the number of unaccounted duplicates to seventeen.
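
By way of illustration, each iteration might have looked roughly like the following sketch (hypothetical names, matching those above), with the exclusion list growing by one script per round:

  -- Remaining duplicated rows, ignoring the scripts already identified
  -- as "top offenders".
  SELECT row_id, table_name, COUNT(*) AS hits
  FROM   correction_log
  WHERE  script_name NOT IN ('fix_state_flags.sql', 'fix_import_x.sql')
  GROUP  BY row_id, table_name
  HAVING COUNT(*) > 1
  ORDER  BY hits DESC;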


Side-note:

Even this was a sub-optimal solution: From the first impression, I had expected to remove just one or two scripts before being left with (on the order of) seventeen cases. As it was, I would likely have been better off with a query that simply listed the number of duplicates per pair of overlapping scripts (cf. the sketch below).

As a rule-of-thumb: Short-term gains by quick-and-dirty tend to be overcome already by the mid-term gains of the clean approach.
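
The pair-wise query alluded to above might have looked roughly as follows (again, a sketch with hypothetical names):

  -- Number of shared rows per pair of overlapping scripts.
  WITH hits AS (
    SELECT DISTINCT row_id, table_name, script_name
    FROM   correction_log
  )
  SELECT a.script_name AS script_1,
         b.script_name AS script_2,
         COUNT(*)      AS shared_rows
  FROM   hits a
  JOIN   hits b
    ON   a.row_id      = b.row_id
   AND   a.table_name  = b.table_name
   AND   a.script_name < b.script_name   -- each pair only once, no self-pairs
  GROUP  BY a.script_name, b.script_name
  ORDER  BY shared_rows DESC;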


I presented this finding to the group (which by now had grown by yet another developer). It took some time to convince them to focus on comparing scripts (rather than individual rows), but, once there, things began to run smoothly: Through another query, we built a list of scripts that pairwise overlapped. The total was no more than thirteen cases, most of which (including overlaps among the big-5 above) could be ruled as unproblematic after a short inspection. (Either changes that were compatible with one another or repetitions of the same change through duplicate reporting by end-users.) There remained two or three small scripts, with just a handful of rows each, that needed to be checked against larger scripts on the row level—and, again, these detailed checks gave us a clean bill of health in just a few minutes. (Of course, if we had been unfortunate, these tests could have lasted longer and required looking at individual tickets—or led to an “unclean bill”. Cf. above.)
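
For such a row-level check of one small script against a larger one, a query in the style of the following sketch (once more, hypothetical names) is sufficient:

  -- Which rows of the small script are also touched by the larger script?
  SELECT s.row_id, s.table_name
  FROM   correction_log s
  JOIN   correction_log l
    ON   l.row_id     = s.row_id
   AND   l.table_name = s.table_name
  WHERE  s.script_name = 'small_fix.sql'
    AND  l.script_name = 'fix_state_flags.sql';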


Side-note:

A particularly interesting example (and a clear sign that the original ticket/correction work had not been well organized) is given by a script that updated 79 rows: All 79 (!) were present in another script; 78 (!) were present in a third script. (IIRC, there was also overlap with individual rows in several other scripts.)

Not only is this a potential source of database inconsistencies and incorrect data, but it is also a waste of time: Using the above half-hour-per-row estimate, the total preparatory work for these 79 rows went from roughly one man-week to roughly three man-weeks... (Even with a lower time-estimate, the numbers are depressing.)


Addendum:

Here, I am a little uncertain what my intentions were. Going by words, the implication seems to be that whoever wrote the script(s) would have needed around one man-week per script, thus going from roughly one man-week to roughly three.

Speaking generally, this sounds excessive, which makes me slightly doubtful; however, my direct memories in 2024 are virtually nil and chances are that the above made more sense based on the fresh memories of 2012. (Is the difference between “half-hour-per-row”, as given immediately above, and “half-an-hour each”, at the beginning, of importance? Had the script writer implied that “it took me about half-an-hour per row back then; it will take me about half-an-hour per row now” or something else? Today, I cannot say.)

Moreover, it must be kept in mind that these 79 rows likely did not arise from a single large ticket—but from 79 individual tickets that needed detailed independent investigation and had afterwards been accumulated into one script. (With the repetition arising through double/triple reporting-by-ticket of the same problem or through the same data being exposed to more than one problem—or, worst case, uncoordinated developers who did redundant work.)



While I cannot give time measurements, I would estimate that my acquainting myself with the data, reframing the problem, and identifying the big-5 might have taken 15–20 minutes (one person), with the script check being done in roughly half-an-hour (five persons, not all of whom were actually necessary): All-in-all, less than three man-hours. Had the team been stripped down to the persons who actually needed to be involved at each step, this would probably have been less than two man-hours. In contrast, the “problem-solving by committee” cost at least one hour for three to five persons (varying over time), giving a low estimate of four man-hours of wasted time, during which virtually nothing was achieved. With the originally feared hundreds of man-hours, there is no comparison.

How I differed

Unfortunately, I am not able to give a detailed analysis of how I worked differently—apart from taking time off from the committee. (For a number of reasons, including vague memories of details, inability to read the minds of the others, and the complexity of the question.) However, a few points:

  1. I did more data gathering and focused on getting a deeper understanding than did the committee.

  2. I was better versed in SQL than the others and was able to get the wished-for results faster than they did.

  3. I worked with SQL queries in the tool used (Oracle SQL Developer) from the start, while they did much work using the tabular GUI views and simple one-field filtering and sorting.


    Side-note:

    The use of the GUI view is a bad habit that I have observed again and again among these colleagues: The GUI view saves time when doing something very easy once (or very rarely), e.g. finding an individual row with id = x for a brief check. For anything involving joins, checking multiple attributes, doing the same operation with several inputs, actually changing data, ..., SQL is almost always the better choice (cf. the sketch below).

    (Incidentally, another example of clean beating quick-and-dirty—even if the relative degree of dirtiness will not be obvious to many of the readers.)
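
    As a sketch of the difference (hypothetical tables and ids): what takes repeated one-field filtering and manual cross-checking in the GUI view is a single SQL statement:

      -- Check several candidate rows, a state flag, and a joined attribute in one go.
      SELECT o.id, o.state, c.name
      FROM   orders o
      JOIN   customers c ON c.id = o.customer_id
      WHERE  o.id    IN (4711, 4712, 4733)
        AND  o.state IN ('OPEN', 'BLOCKED');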


Misconceptions around group work

Unfortunately, I fear that a lesson from school is also repeated:

The lesser heads fail to see that “group work” was a part of the problem, not the solution. In particular, I did not have the impression that the rest of the group fully realized the impact that I had on the process through moving the focus from database rows to scripts—something which was critical to the reasonably timely resolution of the issue. Indeed, in the time leading up to the actual checking of the scripts, almost all the work that actually brought us closer to the solution was done by me. The two exceptions were by the same developer... (And neither of them required much in terms of skill: The original evaluation indicating more than seven hundred duplicates and an SQL query to give a listing of the scripts that duplicated each other—a standard join of the log table with itself, which needed my correction to remove symmetrical and redundant entries, moving us from 26 to the correct 13.)

On the contrary, I fear that at least one of the other participants, judging by a statement made, came away with the exact opposite impression—throw in enough heads and everything will turn out well.


Addendum:

Note my warnings in [1] on how the success or “success” of group work can be misinterpreted by those who do not understand what actually goes on.



Side-note:

I plan to someday write an article about group work in school and issues relating to similar forms of team work in general. Until I get there, however, let it be said that group work as used in my Swedish schooling is disastrously inefficient, of dubious effectiveness, and very far from what team work ideally can be. Certainly, it does very little to fulfill its ostensible purpose—to help the students learn how to achieve something in a team. Notably, the impression of the team’s efforts and results can be very, very different from the respective perspectives of an A- and a D-student.

As an aside on terminology, I am not certain that the Swedish “grupparbete” is correctly translated as “group work”; however, imagine dividing the class into groups of three to six persons of immensely varied interests, priorities, and abilities, in order to e.g. write a report or complete some other form of project. The reader will almost certainly have experienced the same during his own schooling. (And this “group work” should not be confused with “team work” in general.)


Disclaimer on simplifications

I have simplified the events in several regards to focus on the main issue and to draw the main lessons. (Time estimates and similar refer to the “streamlined” version of events.)

Most notably, I have excluded the problem of the script field in the log table having the value “JDBC Thin Client” in a handful of cases—the explanation was that the sysadmins had failed to shut the servers down before running the scripts (we did not have corresponding rights on the reference system) and that tests done by other departments through the web-interface were also being logged.

Excursion on the title (2024)

The title “One head is better than eight legs” is almost certainly a play on the old Swedish children’s show “Fem myror är fler än fyra elefanter”, which had an educational angle similar to “Sesame Street”. This play is, obviously, lost on a non-Swedish audience—maybe even on a Swedish audience from the “wrong” generation.

However, the name of the show is thought-worthy with regard to the above (and in general):

The underlying issue is a difference between two Swedish words, “fler” (“more”, in the sense of a greater number) and “mer” (“more”, in a general sense). We then have the claim, correct but not understood by many children, that “Five ants are fler than four elephants”—no matter how much larger the elephants are. In contrast, going by e.g. volume or mass, “Four elephants are mer than five ants”. There is also the common issue of using “mer” when “fler” is called for—a point where even adults occasionally err.


Side-note:

For a clearer example, consider the combinations “many tomatoes”/“more tomatoes” and “much tomato soup”/“more tomato soup”, where English has a division on the first level only and Swedish has a division on both levels: “många tomater”/“fler tomater” respectively “mycket tomatsoppa”/“mer tomatsoppa”.


The intent behind the name of the TV show was likely a matter of (a) pointing to a common error when using Swedish (use of “mer” when “fler” is called for), (b) making the children think about the meaning and implications of words (“fler” has a different implication than “mer”), and (c) setting the tone for the show. However, interesting and more general observations include that being fler is not necessarily better (for many purposes, four elephants are more useful than five ants) and that we need to think about the implications of various numbers in the context at hand. For instance, looking at my own title, a single head is often more useful than any number of legs, while even one productive head can be more valuable than four unproductive ones.

(Note that whether a head is productive does not necessarily depend on the quality of the head, per se. In situations like the above, we can also have issues like one head in front of the computer being hindered by interference from three “backseat driver” heads.)