Michael Eriksson
A Swede in Germany

Advice on direct alterations


Direct (non-programmatic) data[base] alteration is something often done poorly—and almost always suboptimally. Here I will give some recommendations on how to do it better. I caution, however, that some of the advice need not apply in all contexts, might need minor modification to suit local circumstances, and might be entirely irrelevant to e.g. most desktop applications. Notably, the below was written during the end-phases of a project where we had been forced to write dozens of scripts with hundreds of alterations for every release (due in part to poorly thought-through and prescribed-by-the-business-side workflows, interfaces, etc., and in part to unschooled and insufficiently proficient users).


A recurring problem throughout my career is that important decisions are left to those who lack the brains, knowledge, insight, whatnot to make these decisions. (Including constellations where a newly graduated business bachelor, whose computer knowledge begins and ends with MS Office, is made product manager and dictates user interfaces to developers who have years of experience and actually have taken courses or read books on the design of user interfaces.)

This might have been the single worst project that I have ever encountered, however.

I often silently assume the use of an SQL-based relational DBMS; however, similar ideas will usually apply to databases with another type of interface or storage structure. In terms of terminology, syntax, whatnot, I might use some Oracle-cisms.


Re-visiting this text, I also find that my, at the time of original writing, strong use of Hibernate shines through, but this is unlikely to reduce the generality—and keeping e.g. a version number for a table entry can be a good idea even in a plain SQL-setting.

Generally, but off topic, the value of “administrative” or “bookkeeping” fields should not be underestimated.

Obviously, the importance of the below will vary depending on the importance of the data (including by measures such as the number of users and consequences of a data error/inconsistency). It might also be that different rules apply depending on issues like performance characteristics—one size rarely fits all.

A number of more general best practices, guidelines, whatnot, might apply in various situations without being explicitly mentioned—e.g. the benefits of peer review or some general guidelines for commenting.


The phrase “direct alteration” is an ad-hoc term for this page.

I take “direct” to include any manipulation that by-passes the normal channels of the application(s) in question, normally through some form of SQL-client (in interactive or in batch mode). Contrast e.g. (a) a GUI application that provides a fixed set of user actions and makes various checks, applies various business logic, etc., when a user action is taken, with (b) a manually written and executed SQL statement.

(By implication, when I speak of e.g. “user interface” and “end-user”, it is assumed that such a GUI application, or some similar mechanism, is used.)

I take “alteration” in a potentially wide meaning: Many entries will be mostly relevant to DML changes; however, others generalize to (or are even more important for) DDL changes. (The use of “alteration” over e.g. “manipulation” is deliberate, seeing that other words are easily understood in a more narrow meaning—e.g. as in DML / Data Manipulation Language.)

Types of changes

There are different types of changes that can be relevant. For the purposes of this discussion, I will assume that they are limited to those most likely to be relevant to developers (and, in the third case, high-level support, should this be separate from development), viz.:

  1. To the structure of the DB, e.g. what tables are present and what fields they have.

  2. To reference data, e.g. what states an entity can have and what currencies are available.

  3. To other “table” data, e.g. customer records, invoices, etc.

    (The border is not always clear, however. Consider e.g. exchange rates between currencies. Normally, I would count them here, but in the case of e.g. Euro to/from its predecessor currencies, the permanently fixed conversion rates could conceivably be viewed as reference data. Of course, all three categories can be made more granular, should the need arise.)

(Similar remarks often apply to other changes, however, e.g. the administration of database users.)

The rules for these three categories can be a little different, e.g. in that it might not make sense to replay (cf. below) scripts that alter non-reference data; that the three types of changes should usually (!) not be intermingled in the same script; that reference data should usually have a prescribed primary key (to ensure consistency on e.g. test/staging/production systems), while this is usually not even possible with other data; and that GUIs should usually be agnostic to reference data.

I will gloss over such differences below, to keep the text simpler and as the ramifications of local policies can go well beyond the current topic, but I recommend keeping them in mind.
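To illustrate the prescribed-primary-key point, here is a minimal sketch in Python with an in-memory SQLite database; all table names, column names, and IDs are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Reference data: the script prescribes the primary keys, so that test,
# staging, and production all end up with identical IDs for e.g. "EUR".
conn.execute("CREATE TABLE currency (id INTEGER PRIMARY KEY, code TEXT UNIQUE)")
conn.executemany("INSERT INTO currency (id, code) VALUES (?, ?)",
                 [(1, "EUR"), (2, "USD"), (3, "SEK")])

# Other data: the key is generated (here via the autoincrementing rowid),
# so the "same" logical row can receive different IDs on different systems.
conn.execute("""CREATE TABLE invoice (
    id          INTEGER PRIMARY KEY AUTOINCREMENT,
    currency_id INTEGER REFERENCES currency(id))""")
cur = conn.execute("INSERT INTO invoice (currency_id) VALUES (1)")
generated_id = cur.lastrowid  # whatever the DB happened to assign
```

With the IDs spelled out in the script, “currency 1” means the same thing on every system; generated keys give no such guarantee.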

Consider alternatives to direct alterations

Some alterations, by their nature, are best done directly. However, in many cases, the need for an alteration implies that a feature is missing from the user or admin interfaces—and the superior alternative could be to simply add this feature. Consider e.g. an end-user accidentally deleting a particular entity and wishing for its restoration: If the loss is actually deemed important enough to warrant a direct alteration, there is a very fair chance that it is also important enough to warrant a new feature. Beware, in particular, that “just this once” changes often turn out to be recurring: It is better to develop the new feature now—not after fifty other entities have been accidentally deleted and restored through fifty individual scripts. Also note the remarks on emulating business logic further down—if the change goes through the actual application, such emulation is no longer necessary.


In some cases, such a feature might already be present but be unknown to the user, only be available with admin rights, whatnot. If so, arrange for someone with appropriate skills and rights to correct the error through, literally or metaphorically, pressing the right button.

Similarly, there might be other alternatives present in a given situation (e.g. a user workaround; sometimes it might even be better to allow some incorrect constellation to remain uncorrected, rather than attacking a cold with a cure that afterwards turns out to have been an injection with smallpox).

This is particularly important when changing states of entities: Potential problems include inconsistent overall states after altering sub-states, the business logic of the application illegally running an entity through the same process step more than once (because a state has been manually reset), odd side-effects, and general weirdness.


The last point is sufficiently important that it could be repeated in the text of a majority of the following entries. I will not do so, but I urge the reader to keep it in mind when reading on.

Prefer scripts to manual changes / how to handle scripts

Alterations intended to be permanent and/or be used in a production environment should be done per script—not per manual execution of an SQL statement in a client. Under no circumstances should error-prone editing of values in a spreadsheet-like GUI be done.

These scripts are to be preserved (e.g. in a version-control repository) in such a manner that the changes done to the database can be replayed in order from day one to today.

Further, they should be sufficiently well-documented that it is clear for what purpose they were written (including references to e.g. related tickets), and that it is easily possible to find out on whose authorisation/request they were executed in the production environment (need not be in the script itself, but e.g. in a referenced ticket). Ditto when and by whom they were executed.
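One possible implementation of such bookkeeping is a small log table, sketched here in Python with an in-memory SQLite database. The table and column names are hypothetical, and a real project might instead lean on a migration tool or on the ticket system:

```python
import sqlite3
import datetime

conn = sqlite3.connect(":memory:")
# Hypothetical bookkeeping table: which script, on whose authorisation
# (ticket), by whom, and when.
conn.execute("""CREATE TABLE script_log (
    script_name TEXT PRIMARY KEY,  -- doubles as a guard against re-runs
    ticket      TEXT NOT NULL,
    executed_by TEXT NOT NULL,
    executed_at TEXT NOT NULL)""")

def record_execution(conn, script_name, ticket, executed_by):
    """Record a script execution; a second run of the same script fails."""
    conn.execute(
        "INSERT INTO script_log VALUES (?, ?, ?, ?)",
        (script_name, ticket, executed_by,
         datetime.datetime.now(datetime.timezone.utc).isoformat()))

record_execution(conn, "2024-01_fix_invoices.sql", "TICKET-123", "dba_jane")
```

The primary key on the script name has the pleasant side-effect of making an accidental second execution fail loudly instead of silently re-altering data.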

In this manner, it becomes far easier to understand what changes led to the current state, find out what went wrong where and when, reproduce a certain situation, etc.


Use of such scripts might be outright necessary or particularly beneficial when the developers (or whoever else is responsible for the alterations) do not have sufficient access to the production DB to make the changes themselves, and instead have to rely on e.g. a DBA.

(This will often be the case and might be an outright “best practice” for all but the smallest projects. However, I will gloss over such complications elsewhere.)

There might also be cases where external regulations require a certain level of auditing or whatnot, and scripts make this so much easier. (Banking is an area with particularly strict rules in many or most jurisdictions.)

Test changes in some form of staging area

Make sure to test all changes intended for the production system in a staging area / on a reference system, preferably with (at least the relevant) data identical to the production system.

Even a simple change can contain some easy-to-overlook, but crucial, detail. Finding and correcting this detail before the production change is so much better than after the production change.

(If the similarity between the systems is large enough, it is also possible to discover some performance problems in time, e.g. because an unfortunate execution plan or a missing index forces a full table scan over a giant table.)

Make alterations with the risk of a changed situation in mind

It is not uncommon that the data to be altered has already been changed in some other manner between the writing of the script and the execution of the script. By catching such alterations, many data inconsistencies and unwanted effects can be avoided.

Exactly what checks should be made will vary from case to case. For an UPDATE a SET b = [c] WHERE id = [d] it might suffice to merely check that b has the expected pre-update value (... AND b = [e]). In other cases, however, the overall values might be relevant and e.g. a ... AND version = [f] might be the way to go (where I assume the presence of a version field which is incremented on every change).
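As a sketch of such a guarded update, using Python with an in-memory SQLite database (the table follows the schematic statement above; the concrete values and version number are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (id INTEGER PRIMARY KEY, b TEXT, version INTEGER)")
conn.execute("INSERT INTO a VALUES (1, 'OPEN', 7)")

# The WHERE clause checks both the expected pre-update value of b and the
# expected version; if the row was changed in the meantime, nothing happens.
cur = conn.execute(
    "UPDATE a SET b = ?, version = version + 1 "
    "WHERE id = ? AND b = ? AND version = ?",
    ("CANCELLED", 1, "OPEN", 7))
rows_changed = cur.rowcount  # expect exactly 1; 0 signals a changed situation
```

A count of 0 is exactly the kind of discrepancy that must then be manually inspected and resolved.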

Beware that it is not enough to just have this check in the SQL code. It is vital that discrepancies between the expected and actual outcomes are manually inspected and resolved.


Exactly how to do this will depend on the circumstances and local conventions at hand.

A must, however, is that whoever executes the script has been informed about the expected outcome, e.g. in terms of rows to be changed, and that someone with sufficient knowledge of the change (preferably, the script author) is available on short notice.

If possible, it is usually best to delay commits until the correctness of the various changes has been verified, and the executor must know how commits in the script are handled or not handled. (An exception where a rapid commit might be preferable is when a lack of commit would block production activity in a disproportionate manner, and a temporary data error could be the lesser evil. Keep in mind that e.g. Oracle implicitly commits upon executing DDL.)

If the circumstances, including speed of update and number of rows to update, allow it, updating each row separately is usually preferable, as this makes for easier error search and gives greater flexibility. (E.g. in that ten incorrect invoices are given an UPDATE each.)
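Combining the row-by-row approach with delayed commits might look as follows (Python with an in-memory SQLite database; the table, statuses, and IDs are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoice (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO invoice VALUES (?, ?)",
                 [(1, "WRONG"), (2, "WRONG"), (3, "OK")])
conn.commit()

def fix_rows(conn, ids, old, new):
    """One UPDATE per row; commit only if every row matched expectations."""
    for row_id in ids:
        cur = conn.execute(
            "UPDATE invoice SET status = ? WHERE id = ? AND status = ?",
            (new, row_id, old))
        if cur.rowcount != 1:  # unexpected state: undo everything, inspect
            conn.rollback()
            return False
    conn.commit()
    return True

ok = fix_rows(conn, [1, 2], "WRONG", "CORRECTED")
```

Because each row has its own statement, a single unexpected row is immediately identifiable, and the rollback leaves the database untouched for the manual inspection.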

Mark all altered rows as directly altered

When direct alterations are made, always mark the altered rows in some manner, e.g. by having a flag indicating that the current entry has been created or altered by a script. If direct changes are common, consider using a counter to indicate the actual number of changes. (Multiple changes are a sign of a bigger problem and attacking this problem is likely to be more urgent. However, the culture, constraints, whatnot, that brought the problems will often prevent improvements.) If they are rare, adding the name of the script and a timestamp corresponding to its execution might be an option.


In the absence of such fields, there might be other hints to look for. For instance, if entries have alphanumerical IDs, the convention can be helpful that entries added by the application have lower-case IDs and those by scripts upper-case. For instance, if entries have fields for “creator” and “updater”, these can be used to indicate a script (if need be through a “technical user”). Absent such deliberate conventions, there are often subtle differences that can help, e.g. that such IDs are accidentally different or that the application fills some fields that the scripts have no reason to bother with.

If space allows it, consider keeping a more extensive log, e.g. a separate table indicating the ID and table name of the changed row together with the changing scripts—or even the pre- and post-values.
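Both the marking of the row and the more extensive log might be sketched as below (Python with an in-memory SQLite database; the script name and all table and column names are hypothetical):

```python
import sqlite3
import datetime

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE customer (
    id INTEGER PRIMARY KEY, status TEXT, altered_by_script TEXT)""")
conn.execute("""CREATE TABLE alteration_log (
    table_name TEXT, row_id INTEGER, script TEXT,
    old_value TEXT, new_value TEXT, altered_at TEXT)""")
conn.execute("INSERT INTO customer VALUES (1, 'BROKEN', NULL)")

script = "TICKET-99_fix_customer.sql"  # hypothetical script name
old = conn.execute("SELECT status FROM customer WHERE id = 1").fetchone()[0]

# Mark the row itself as directly altered ...
conn.execute("UPDATE customer SET status = 'FIXED', altered_by_script = ? "
             "WHERE id = 1", (script,))
# ... and keep the pre- and post-values in the separate log table.
conn.execute("INSERT INTO alteration_log VALUES ('customer', 1, ?, ?, ?, ?)",
             (script, old, "FIXED",
              datetime.datetime.now(datetime.timezone.utc).isoformat()))
```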

Update version numbers and the like

If the version number of the row is updated (where present), tools like Hibernate can detect the change made and cancel any concurrent changes. Further, if an entity has a sufficiently simple life-cycle, there is often a perfect or near-perfect correlation between version number and state. By updating this number, confusion can be reduced or clues given as to what happened in the past.

If the entire table is versioned (e.g. through keeping previous versions of the data in a secondary table; something well worth considering when space allows it), make sure that the versioning reflects the direct alterations.
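A sketch of a direct alteration that respects both the version number and a history table (Python with an in-memory SQLite database; all names and values are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entity (id INTEGER PRIMARY KEY, state TEXT, version INTEGER)")
conn.execute("CREATE TABLE entity_history (id INTEGER, state TEXT, version INTEGER)")
conn.execute("INSERT INTO entity VALUES (1, 'OPEN', 3)")

# Copy the old row into the history table, as the application itself would ...
conn.execute("INSERT INTO entity_history "
             "SELECT id, state, version FROM entity WHERE id = 1")
# ... then apply the alteration and increment the version number, so that
# e.g. optimistic locking sees the row as changed.
conn.execute("UPDATE entity SET state = 'CLOSED', version = version + 1 "
             "WHERE id = 1")
```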

Strive to emulate everything done by the application

If a certain change corresponds to a state change normally done by the application, make sure that all relevant behavior of the application is emulated—and is so in the proper order. (Not all behavior is necessarily relevant, but it is better to err on the side of caution.) Note especially the possibility that there might be side-effects outside the database (e.g. a message sent to a third-party) that also have to be emulated. The same applies, mutatis mutandis, to cases where the changes are dictated by the business logic, even when not actually foreseen—e.g. when an operation that cannot be reversed through the application is reversed by direct alteration, and several steps must be undone in reverse order to compensate for the original action.

Consider widening the change

In many cases, it makes sense to widen the scope of an alteration. Assume e.g. that an entry in a central DB is in an inconsistent state that the application cannot handle; further, that this is known through a user complaint. Merely changing this one entry will solve the problem for the one user; however, there is a more than fair chance that the error is not unique—and it makes sense to check the rest of the table in question for other entries with the same problem. (Whether these should be corrected in a blanket manner or on a case-by-case basis, be ignored, whatnot, will depend on the circumstances.)

Avoid hacks

In the short-term, it can be tempting to perform various hacks to achieve a temporary effect. This is usually a bad idea, as the temporary might turn out to be a long time, as the hacks often rely on rules that might later change, and as they tend to be poorly documented. For instance, I have seen the ad-hoc use of negative state numbers to temporarily move entries out of various listings, e.g. in that an entry had the state “5” and was now changed to “-5”, with the intent that a manual reader would recognize the true state as “5”, while various views and SQL-statements that made selects based on an enumerated set of state values would miss the entry entirely. This with the intent that some update or other could be postponed until further information from a third-party was available. Later, some other script used a similar means to avoid inclusion—but with a different meaning for “-5”...


Note the contrast to a systematic use of negative state numbers to achieve a similar effect in a documented, known-to-all, and planned-from-the-beginning manner. The implication is that the views and whatnots know, in principle, that e.g. “-5” is a valid value, but one that they deliberately choose to ignore or include. Indeed, with such systematic use, there is no particular reason to use negative values, beyond a mere convenience for those who work with the tables on a low level—any value not already spoken for would have done just as well.

Here foreign-key constraints to the table describing the states can be very helpful with enforcement. (Generally, I recommend being strict with foreign keys and other data-integrity helpers, and to not give any advance consideration to those who predict performance issues. In the rare cases when a performance problem actually manifests, the foreign key can always be removed on an individual basis.)
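The enforcement can be sketched as below (Python with an in-memory SQLite database; note that SQLite only enforces foreign keys after the pragma is set, and that the state names and values are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite-specific switch

# The states table documents all valid values, including the planned,
# known-to-all negative variant mentioned above.
conn.execute("CREATE TABLE order_state (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO order_state VALUES (?, ?)",
                 [(5, "SHIPPED"), (-5, "SHIPPED_HIDDEN")])

conn.execute("""CREATE TABLE orders (
    id    INTEGER PRIMARY KEY,
    state INTEGER NOT NULL REFERENCES order_state(id))""")

conn.execute("INSERT INTO orders VALUES (1, 5)")       # valid state
try:
    conn.execute("INSERT INTO orders VALUES (2, -7)")  # ad-hoc hack value
    rejected = False
except sqlite3.IntegrityError:
    rejected = True  # the foreign key stops the undocumented value
```

An ad-hoc script that invents a new “magic” state value now fails immediately, instead of silently planting a value with a private meaning.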

Of course, attempts to impose or rely on particular semantics of “raw” values can backfire and should be rare exceptions. Consider, as an example of how even a very reasonable assumption can fail, the way that a certain prefix of a German cellphone number used to indicate the telephone provider. Originally, this might even have been a near necessity. Today, a customer has the right to keep his number when switching providers, and the prefix allows at best a guess as to the provider and/or is an indication merely of what the original provider for that number was.