
The Design of the Onlife Experience

The concept of design can be understood as the act of working out the shape of objects: we mould the form of products and processes, together with the structure of spaces and places, so as to comply with regulatory frameworks. Such shaping is not necessarily digital: as mentioned above in Sect. 3, consider the installation of speed bumps in roads as a means to reduce the velocity of cars (lest drivers opt to destroy their own vehicles). Still, the information revolution has obliged policy makers to forge more sophisticated ways of legal enforcement through the design of ICT interfaces, default settings, self-enforcing technologies, and so forth. According to Norman Potter's phrasing in his 1968 book What is a Designer (new ed. 2002), a crucial distinction should be drawn between designing spaces (environmental design), objects (product design), and messages (communication design). Moreover, in their work on The Design with Intent Method (2010), Lockton, Harrison and Stanton describe 101 ways in which products can influence the behaviour of their users. In light of Fig. 4, it suffices to focus on three different ways in which governance actors may design the onlife experience.

First, design may aim to encourage a change of social behaviour. Think of the free-riding phenomenon on P2P networks, where most peers tend to use these systems to find information and download their favourite files without contributing to the performance of the system. Whilst this selfish behaviour is triggered by many properties of P2P applications, such as anonymity and the hard traceability of nodes, designers have proposed ways to tackle the issue through incentives based on trust (e.g., reputation mechanisms), trade (e.g., services in return), or, alternatively, slowing down the connectivity of users who do not help the process of file-sharing (Glorioso et al. 2010). For example, two very popular P2P systems, namely µTorrent and Azureus/Vuze, have built-in anti-leech features that cap the download speed of users whose upload speed is too low (note that a low upload speed may in turn hinge on the policy of some ISPs that count both uploads and downloads against the monthly data quota). In addition, design mechanisms can induce a change in people's behaviour via friendly interfaces, location-based services, and so forth. These examples are particularly relevant because design that encourages individuals to change their behaviour by widening the range of choices and options avoids the risk of paternalism. At its best, this design policy is illustrated by the open architecture of a web “out of control” (Berners-Lee 1999).
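
To make the anti-leech incentive concrete, here is a minimal sketch of a ratio-based download cap of the kind just described. It is purely illustrative: the function names, thresholds, and rates are hypothetical assumptions, not the actual policy of µTorrent or Vuze.

```python
# Minimal sketch of a ratio-based anti-leech policy: peers whose uploaded
# bytes fall below a threshold share of their downloads get their download
# rate capped. All thresholds and rates are illustrative, not those of any
# real client.

from dataclasses import dataclass


@dataclass
class PeerStats:
    uploaded_bytes: int
    downloaded_bytes: int


def allowed_download_rate(stats: PeerStats,
                          full_rate_kbps: int = 1024,
                          capped_rate_kbps: int = 64,
                          min_share_ratio: float = 0.5) -> int:
    """Return the download rate granted to a peer.

    Peers that have uploaded at least `min_share_ratio` times what they
    have downloaded keep the full rate; free-riders are throttled.
    """
    if stats.downloaded_bytes == 0:
        return full_rate_kbps  # new peer: no evidence of free-riding yet
    ratio = stats.uploaded_bytes / stats.downloaded_bytes
    return full_rate_kbps if ratio >= min_share_ratio else capped_rate_kbps


if __name__ == "__main__":
    leech = PeerStats(uploaded_bytes=10_000_000, downloaded_bytes=200_000_000)
    seeder = PeerStats(uploaded_bytes=300_000_000, downloaded_bytes=200_000_000)
    print(allowed_download_rate(leech))   # capped: 64 kbps
    print(allowed_download_rate(seeder))  # full: 1024 kbps
```

The design point is the same as in the text: the mechanism nudges peers towards sharing without forbidding any behaviour outright, leaving the choice to the user.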

Second, design mechanisms may aim to decrease the impact of harm-generating behaviour rather than changing people's conduct; that is, the goal is to prevent the impoverishment of the agents and of the whole infosphere, rather than directly promoting their flourishing. This further aim of design is well represented by security measures that can be conceived of as a sort of digital airbag: as with friendly interfaces, this kind of design mechanism forestalls claims of paternalism, because it does not impinge on individual autonomy any more than traditional airbags affect how people drive. Contrary to design mechanisms that intend to broaden individual choices, however, the design of digital airbags may raise issues of strong moral and legal responsibility, as well as conflicts of interest. A typical instance is the processing of patient names in hospital information systems, where patient names should be kept separate from data on medical treatments or health status. What about users, including doctors, who may find such a mechanism too onerous? Furthermore, responsibility for this type of mechanism is intertwined with the technical meticulousness of the project and its reliability, e.g., security measures for the information systems of hospitals or, say, a nuclear plant. Rather than establishing the overall probability of a serious accident, the focus here should be on the weaknesses in the safety system, ranking accident sequences by the probability of their occurrence, so as to compare different event sequences and to identify critical elements within them. All in all, in Eugene Spafford's phrasing, it would be important that governance actors, sub specie game designers, fully understand that “the only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards — and even then I have my doubts” (in Garfinkel and Spafford 1997).
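
The hospital example can likewise be made concrete. The following sketch, with hypothetical table and field names, shows one way of keeping patient identities separated from treatment data by linking the two only through an opaque pseudonym; it is an illustration of the design idea discussed above, not any particular hospital system.

```python
# Minimal sketch of the "separation" design mentioned above: patient
# identities and treatment records live in two distinct tables and are
# linked only through an opaque pseudonym, so that clinical data alone
# does not reveal who the patient is. Table and field names are hypothetical.

import secrets
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE identities (pseudonym TEXT PRIMARY KEY, full_name TEXT)")
conn.execute("CREATE TABLE treatments (pseudonym TEXT, diagnosis TEXT, therapy TEXT)")


def admit_patient(full_name: str) -> str:
    """Register a patient and return the pseudonym used in clinical records."""
    pseudonym = secrets.token_hex(16)
    conn.execute("INSERT INTO identities VALUES (?, ?)", (pseudonym, full_name))
    return pseudonym


def record_treatment(pseudonym: str, diagnosis: str, therapy: str) -> None:
    """Store clinical data against the pseudonym only, never the name."""
    conn.execute("INSERT INTO treatments VALUES (?, ?, ?)",
                 (pseudonym, diagnosis, therapy))


pid = admit_patient("Jane Doe")
record_treatment(pid, "hypertension", "ACE inhibitor")
```

On this design, a breach of the treatments table alone exposes no names, which is precisely the digital-airbag effect: harm is mitigated without constraining how doctors work, although, as noted, some users may still find the extra indirection onerous.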

Third, there is the most critical aim of design, namely to prevent harm-generating behaviour from occurring at all through the use of self-enforcing technologies, such as DRMs in the field of intellectual property protection, or some versions of automatic privacy by design (e.g., Cavoukian 2010). Of course, serious issues of national security, connectivity and availability of resources, as well as child pornography or cyber-terrorism, may suggest endorsing this type of design mechanism, though the latter should be conceived as the exception, or last-resort option, for the governance of the onlife experience. Contemplate some of the ethical, legal, and technical reasons that make the aim of automatically preventing harmful conduct problematic. As to the ethical reasons, specific design choices may result in conflicts between values and, vice versa, conflicts between values may affect the features of design: we have evidence that “some technical artefacts bear directly and systematically on the realization, or suppression, of particular configurations of social, ethical, and political values” (Flanagan et al. 2008). As to the legal reasons against this type of design policy, the development and use of self-enforcing technologies risk severely curtailing both collective and individual autonomy. Basic tenets of the rule of law would be at risk, since people's behaviour would be unilaterally determined on the basis of technology, rather than by the choices of the relevant political institutions: what is imperilled is “the public understanding of law with its application eliminating a useful interface between the law's terms and its application” (Zittrain 2007).

Finally, attention should be drawn to the technical difficulties of achieving such total control through design: doubts are cast by “a rich body of scholarship concerning the theory and practice of 'traditional' rule-based regulation [that] bears witness to the impossibility of designing regulatory standards in the form of legal rules that will hit their target with perfect accuracy” (Yeung 2007). Indeed, there is the technical difficulty of applying to a machine concepts traditionally employed by lawyers, through the formalization of norms, rights, or duties: after all, legal safeguards often hinge on highly context-dependent notions such as security measures, personal data, or data controllers, which raise a number of relevant problems when reducing the informational complexity of a legal system whose concepts and relations are subject to evolution (Pagallo 2010). To the best of my knowledge, it is impossible to program software so as to prevent forms of harm-generating behaviour even in such simple cases as defamation: these constraints highlight critical facets of design that suggest reversing the burden of proof when the use of allegedly perfect self-enforcing technologies is at stake. In the wording of the US Supreme Court's decision on the Communications Decency Act (“CDA”) of 26 June 1997, “as a matter of constitutional tradition, in the absence of evidence to the contrary, we presume that governmental regulation… is more likely to interfere with the free exchange of ideas than to encourage it.”
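
To illustrate why such formalization breaks down even in the seemingly simple case of defamation, consider the following deliberately naive sketch. The keyword list and examples are hypothetical; the point is only that a rule blind to context both over-blocks and under-blocks.

```python
# Purely illustrative sketch of why rule-based formalization misfires:
# a naive keyword filter meant to block "defamatory" posts both over- and
# under-blocks, because defamation depends on context the rule cannot see.

OFFENSIVE_TERMS = {"fraud", "liar", "thief"}  # hypothetical blacklist


def naive_defamation_filter(post: str) -> bool:
    """Return True if the post should be blocked under the naive rule."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & OFFENSIVE_TERMS)


# Over-blocking: a factual news report is suppressed.
print(naive_defamation_filter("The court convicted the executive of fraud."))  # True

# Under-blocking: a damaging insinuation passes untouched.
print(naive_defamation_filter("Interesting how the funds vanished right after she took over."))  # False
```

The sketch mirrors the argument in the text: a self-enforcing rule hits the wrong targets in both directions, which is why the burden of proof should fall on those claiming such technologies are accurate enough to enforce the law automatically.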

 