Michael Eriksson
A Swede in Germany

Applications with Internet access

Introduction

Far too many applications today assume that the user’s computer is connected to the Internet or that they are allowed to use this connection as they see fit. Below, some cases are discussed in more detail, with a focus on whether and when such behavior can be considered acceptable.


Addendum:

Cloud software/services were not a significant issue at the time of original writing (2012), but are now rampant. Generally, a drift towards more functionality on the Internet and less on the local computer seems to have taken place, and an extension of the text might follow at a later time. For now, I simply note that cloud-whatnots are (a) contrary to much of what is said below and (b) usually much more favorable to the service providers than to the users, who might see very considerable disadvantages, like the ones discussed below.

Smartphones are another issue that was not very important (at least, to me) at the time of original writing. Many statements can be trivially modified to apply equally to smartphones and computers, and I will likely not extend the text in this regard. An important side issue is that users of smartphones (as of 2023) often still have upper limits on included data use, which can make uncontrolled Internet access, especially for large updates, very problematic. The same applies to those (for the last few years, including me) who access the Internet from a computer by means of a tethered smartphone.

(Minor other alterations have been made, however.)


Rules of thumb

To start, two rules of thumb for makers of software applications:

Firstly, never assume that Internet access is possible or allowed, but guarantee that the application works even without the Internet.

Note, in particular, the possibility of deliberate offline use, intervening layers that might block access (firewalls, proxies, Cloudflare, etc.), and computers that for some reason do not have an Internet connection at all—e.g. due to a recent move or malfunctioning connection hardware. It might be tempting to assume that every user has an active connection at all times, but even today (2012) this is far from true, it need not be true in the future, and there is always the risk that the user exercises his legitimate right to deny certain applications Internet access.
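
As an illustration of this first rule, consider a minimal sketch in Python (the endpoint is a made-up example): any network operation is treated as something that can fail at any time, with a short timeout and a graceful fallback to the offline functionality.

    import urllib.request

    NEWS_URL = "https://example.com/news-feed"  # hypothetical endpoint

    def fetch_news_or_fallback():
        """Fetch optional online content; degrade gracefully when offline."""
        try:
            with urllib.request.urlopen(NEWS_URL, timeout=5) as response:
                return response.read().decode("utf-8")
        except OSError:
            # Covers a missing connection, DNS failures, timeouts, and
            # blocking firewalls/proxies (URLError is a subclass of OSError).
            # The application simply continues with its core functionality.
            return None

The point is not the specific library, but that the failure path is a first-class, deliberate part of the program.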

Secondly, always request the user’s permission before initiating an Internet access. (Cf. the below discussion of dialing home.) Exception: This does not apply when an implicit consent can be assumed, nor when the access is the result of a user action and it must be obvious to a typical user that an Internet access could be necessary. An example of the former is a local proxy relaying requests from a browser; of the latter, a browser reacting to a user clicking on a link. (Note that a click on something, even something that looks like a link, in a non-browser is usually not an example. Ditto e.g. a click on something in the user interface of the browser itself. This includes, cf. below, something marked “help”.)
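
A sketch of this second rule, again in Python and with invented names: a simple consent gate that is consulted before any network operation whose necessity is not obvious to the user.

    def confirm_network_access(purpose: str) -> bool:
        """Ask the user before any non-obvious Internet access."""
        answer = input(f"The application wants to access the Internet "
                       f"to {purpose}. Allow? [y/N] ")
        return answer.strip().lower() == "y"

    # A click on "help" is NOT obviously a network operation, so ask first.
    if confirm_network_access("fetch the online help pages"):
        ...  # perform the access
    else:
        ...  # fall back, e.g., to locally installed documentation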

License checks

A convenient way for an application maker to check for a valid license is to use a dynamic online verification. While the wish for such verification is understandable, it implies that the application is not usable without an Internet connection—even when its services have nothing to do with the Internet. (Ditto e.g. when a firewall interferes in an unfortunate manner. The same can apply elsewhere without explicit mention.)

Further, if slightly off-topic, there is the unconscionable risk that the application becomes unusable because the manufacturer has gone bankrupt, has an internal server malfunction, has arbitrarily decided that the servers are no longer needed (e.g. to implement an inexcusable variation of “planned obsolescence”), whatnot.

Correspondingly, these checks cannot be justified, due to the disproportionate violation of the customers’ legitimate interests.

Exception: If the application by its nature cannot work without foreign servers, a corresponding check can still be acceptable. Consider e.g. an extreme thin client that merely serves as an interface between user and server. Note, in contrast, that even email clients and other tools with a strong server dependency typically have considerable functionality that is not dependent on the server (e.g. for browsing locally stored emails)—and are not among the exceptions.
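
To illustrate that a license check need not imply a server round-trip, here is a Python sketch that validates a signed license file entirely locally. The file format, key, and field names are invented for the example, and a real scheme would use an asymmetric signature (with only the public key shipped inside the application) rather than a shared secret:

    import hashlib
    import hmac
    import json

    # Illustration only: a real scheme would verify an asymmetric signature.
    VERIFY_KEY = b"hypothetical-embedded-key"

    def license_is_valid(license_path: str) -> bool:
        """Check a locally stored, signed license file; no server contact."""
        try:
            with open(license_path, "r", encoding="utf-8") as f:
                data = json.load(f)
            payload = json.dumps(data["fields"], sort_keys=True).encode("utf-8")
            expected = hmac.new(VERIFY_KEY, payload, hashlib.sha256).hexdigest()
            return hmac.compare_digest(expected, data["signature"])
        except (OSError, KeyError, TypeError, ValueError):
            return False

Such a check works on a disconnected computer and keeps working even if the maker’s servers disappear.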


Side-note:

In addition, like so many other anti-piracy measures, such checks have a limited effectiveness against professional piracy: They might prevent ordinary users from sharing software, but the professionals have other resources and capabilities—and will often be able to work around the checks.

Even when the checks do work, they often have negative consequences. For instance, there are many users who have had to jump through hoops to get a software installation (notably Windows) working again after upgrading hardware: The checks recognize the changes, draw the incorrect conclusion that the software has been illegally installed on a second computer, and block its use. (Similar problems apply when an installation is moved from one computer to another.)


Updates

Updates of the software can be a very good thing; however, care should be taken.

Most importantly, the user must be given the right to replace automatic updates with manual ones; updates must not be made silently, but must request individual user confirmation for each of check, download, and install (per default—there must be options to deactivate/activate these requests); updates must be postponable to a more suitable time; and a refusal or inability to update must not interfere with the pre-update workings of the application.

Obviously, in order to allow the user to make informed decisions, even pre-download, he must have the ability to see the size of the change/download and to know what has changed. (While the size should be shown outright and automatically, a listing of changes might be put in some document that the user can access in a conscionable manner.) Further, if an update misbehaves (e.g. by introducing new bugs), he must have the ability to return to the previous state.
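
The following Python sketch shows one way such an update flow might look. Here, updater and ask are hypothetical interfaces, standing in for the actual update mechanism and the actual user dialog:

    def run_update_cycle(updater, ask):
        """Ask before each stage; show the size; keep a way back."""
        if not ask("Check for updates now?"):
            return
        info = updater.check()  # network access only after consent
        if info is None:
            return  # already up to date
        if not ask(f"Version {info.version} available "
                   f"({info.size_mb} MB). Download?"):
            return
        package = updater.download(info)
        if not ask("Install the downloaded update now?"):
            return  # postponable: keep working, install later
        updater.backup_current_version()  # allow a return to the old state
        updater.install(package)

Note that every early return leaves the application in its pre-update state, as demanded above.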

Basic rule: The current computer belongs to the user and what happens to that computer is his decision and his decision alone.

Obviously, it is entirely unacceptable to abuse updates to e.g. remove previous functionality (unless temporarily, to resolve a severe security problem). A good example of how not to do it: Until 2009 (or thereabouts), German users of iPhones had the ability to “tether” their iPhones; however, this ability was removed by an update—to give Deutsche Telekom (the then exclusive network provider for iPhones) the ability to “provide” tethering as an additional, for-a-fee service. Inexcusable!


Addendum:

By 2023, update-mania is far worse than in 2012—and often irrational. Updates are important, but they should be handled in a sensible manner, and we should keep in mind that the main purpose of automatic updates is not security, but to make life easier for the software maker and/or to give the software maker greater control over the user’s situation. (Which is almost always against the best interests of the user.)

For an example of idiotic priorities, my own main problem (as a user of Linux and mostly OSS tools) has been Firefox/TorBrowser and its insistence on automatic updates—for software that comes with JavaScript activated per default, which is a far worse security risk than being out-of-date. (And where a continual worsening of the browser has given proficient users strong incentives not to update, which, in turn, might explain why it has become harder and harder to avoid forced updates...) Generally, it is less important to be up-to-date than to take reasonable security measures, e.g. to minimize exposure to JavaScript in a browser, to not open files of unknown provenance without proper checks, to shield any private server/daemon process against access from other computers, etc.

A very important point is that it is incorrect to assume that newer versions of a given software will have fewer security holes than older versions. On the contrary, the way that software tends to be developed today, chances are that the exact opposite applies and that newer versions are, per se, less and less secure. The main advantage of newer versions is something else, namely that there are (ideally...) fewer known security holes. Correspondingly, remaining with an older version with no known security holes is often the more (!) secure choice. (Alternatively, with no known security holes above an acceptable severity, or none that are likely to actually apply, e.g. because they relate to JavaScript in a browser with JavaScript deactivated.)

Indeed, a version 5.12.6 will usually have seen a number of holes closed, which gives it a further edge over version 6.0.0. Chances are that 5.12.6 will be less dangerous than 5.0.0 by dint of fixes, while 5.0.0 will be less dangerous than 6.0.0 by dint of e.g. having less feature bloat and, therefore, fewer points where a security hole could exist.

A related problem with specifically automatic updates is that they force the installation of versions where unknown problems might soon become known. Unless a certain update is a security fix, it is usually better to wait for some time, so that new problems might be discovered and the user can either install with a good conscience (no problems known) or await a fixed version (problems known but removed). An automatic update, however, will usually try to install any and all new versions at the earliest possible date.
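
One conceivable policy, sketched in Python with an arbitrary grace period, is to offer security fixes immediately but to let other releases age before they are even suggested to the user:

    from datetime import date, timedelta

    GRACE_PERIOD = timedelta(days=30)  # arbitrary example value

    def should_offer_update(release_date: date, is_security_fix: bool) -> bool:
        """Offer security fixes at once; let other releases ripen first,
        so that problems unknown at release have a chance to become known."""
        if is_security_fix:
            return True
        return date.today() - release_date >= GRACE_PERIOD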


Documentation

Having documentation only online has the advantage that it can easily be kept current. However, the disadvantages for the user outweigh the benefits. (The reasoning and examples are as above, mutatis mutandis.) This goes in particular for various Internet tools (e.g. browsers)—imagine trying to troubleshoot why a browser cannot access the Internet and finding that the documentation is only available on ... the Internet.

I strongly recommend that any installation tool contain the explicit option to install/not install the documentation locally.
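
As a trivial Python sketch of what such an explicit option might look like in a command-line installer (the flag names are invented):

    import argparse

    parser = argparse.ArgumentParser(description="Hypothetical installer")
    parser.add_argument("--with-docs", dest="docs", action="store_true",
                        help="install the documentation locally (default)")
    parser.add_argument("--without-docs", dest="docs", action="store_false",
                        help="skip the local documentation")
    parser.set_defaults(docs=True)
    args = parser.parse_args()

    if args.docs:
        ...  # copy the documentation into the installation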

(For software installed in the Unix-verse, a good man page is mandatory, regardless of other documentation.)

Dial home

Many applications engage in the practice of “dialing home”. Except for the special cases already discussed above, there mostly remains the highly unethical practice of spying on the users—which is obviously something that should not be done. Off the top of my head, the only legitimate reason (beyond those discussed under previous headings) is the gathering of information on software errors and other problems. Such reports must only be sent with the explicit consent of the user.

Should there be other legitimate reasons, the user’s consent is obviously required for these too. Equally obviously, a refusal of consent must never lead to artificial restrictions on use.
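
A possible shape of such a consent-based error report, sketched in Python (the endpoint and the ask callback are invented for the example): the user sees exactly what would be transmitted, and nothing is sent without a “yes”.

    import json
    import urllib.request

    REPORT_URL = "https://example.com/error-reports"  # hypothetical endpoint

    def offer_error_report(error_details: dict, ask) -> None:
        """Send a problem report only with explicit, informed consent."""
        print("The following information would be sent:")
        print(json.dumps(error_details, indent=2))
        if not ask("Send this report?"):
            return  # a refusal must have no further consequences
        body = json.dumps(error_details).encode("utf-8")
        request = urllib.request.Request(
            REPORT_URL, data=body,
            headers={"Content-Type": "application/json"})
        try:
            urllib.request.urlopen(request, timeout=10)
        except OSError:
            pass  # a failed report must not disturb the user either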


Addendum:

A potential further issue is that third parties might be able to deduce what software is used when, where, and/or by whom, simply through spying on outgoing traffic. How large this issue might be and whether strong work-arounds might be available, I have not investigated.


Additional functionality

Applications that provide additional functionality over the Internet have a valid reason to request an Internet connection; however, this connection must only be necessary when the user actually uses the additional functionality. Further, care should be taken that functionality is not moved onto the Internet for spurious reasons: if it could easily be integrated into the application to begin with, this would usually be the better option.

(A good example is a computer game that is playable offline or in a LAN, but which can also be used to play against strangers over the Internet.)
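
In code, this often amounts to nothing more than lazy initialization, as in the following Python sketch (all names invented): the network is touched only when, and if, the online feature is actually used.

    class Game:
        """Sketch: the network is used only for the online feature."""

        def __init__(self):
            self.connection = None  # no connection attempt at startup

        def play_offline(self):
            ...  # core functionality, fully usable without the Internet

        def play_online(self):
            # The connection is established lazily, on first use of the
            # online feature; only then is an Internet connection required.
            if self.connection is None:
                self.connection = self._connect_to_matchmaking_server()
            ...

        def _connect_to_matchmaking_server(self):
            ...  # hypothetical server details omitted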