The Public Sphere in a Computational Era
The Public(s) Onlife
A Call for Legal Protection by Design
Onlife After the Computational Turn?
1.1 Computational Turn
In my inaugural lecture I reiterated the notion of a computational turn, referring to the novel layers of software that have nested themselves between us and reality (Hildebrandt 2013). These layers of decisional algorithmic adaptations increasingly co-constitute our lifeworld and determine what we get to see (search engines; behavioural advertising), how we are treated (insurance, employment, education, medical treatment), what we know (the life sciences, the digital humanities, expert systems in a variety of professions) and how we manage our risks (safety, security, aviation, critical infrastructure, smart grids). So far, this computational turn has been applauded, taken for granted or rejected, but little attention has been paid to its far-reaching implications for our perception and cognition, for the rewoven fabric on which our living-together hinges (though there is a first attempt in Ess and Hagengruber 2011, and more elaboration in Berry 2012). The network effects of ubiquitous digitisation have been described extensively (Castells 2011; Van Dijk 2006), though many authors present this as a matter of 'the social', neglecting the extent to which the disruptions of networked, mobile, global digital technologies are in fact 'affordances' of the socio-technical assemblages of 'the digital'. Reducing these effects to 'the social' does not help, because it leaves the constitutive and regulative workings of these technologies under the radar. Moreover, we need to distinguish between digitisation per se and computational techniques such as machine learning, which enable adaptive and proactive computing and thereby present us with an entirely novel, smart environment.
1.2 Smart Environments
I believe that whereas such smart environments have long remained a technological fantasy, they are now with us, around us, even inside us (Hildebrandt and Anrig 2012). They anticipate our future behaviours and adapt their own behaviours to accommodate our inferred preferences—at least insofar as this fits the objectives of whoever is paying for them (commercial enterprise, government). They provide us with a ubiquitous artificial intelligence that uproots the common sense of our Enlightenment heritage that matter is passive and mind active. Matter is developing into mind, becoming context-sensitive, capable of learning on the basis of feedback mechanisms, reconfiguring its own programs to improve its performance, developing 'a mind of its own', based on second-order beliefs and preferences. This means nothing less than the emergence of environments that have agent-like characteristics: they are viable, engines of abduction, and adaptable (Bourgine and Varela 1992); they are context-sensitive, responsive, and capable of sustaining their identity by reconfiguring the rules that regulate their behaviours (Floridi and Sanders 2004). We note, of course, that so far 'they' are not consciously aware of any of this, let alone self-conscious. Also, let's acknowledge that we are not talking about what Clark (2003) termed 'skinbags': neatly demarcated entities that contain their mind within their outer membranes, surface or skin. The intelligence that emerges from the computational layers is engineered to serve specific purposes, while thriving on the added value created by unexpected function creep; it derives from polymorphous, mobile computing systems, not from stand-alone devices such as those fantasised in the context of humanoid robotics.
1.3 What's New Here?
In what sense is this a novel situation? Where lies the continuity with preceding information and communication technologies? In his magnificent Les technologies de l'intelligence, Pierre Lévy (1990) discussed the transitions from orality to script, printing press and mass media, towards digitisation and the internet. Summing up, Lévy suggests that we are in transition from a linear sense of time to segments and points; from accumulation to instant access; from delay and duration to real-time and immediacy; from universalisation to contextualisation; from theory to modelling; from interpretation to simulation; from semantics to syntax; from truth to effectiveness; from semantics to pragmatics; from stability to change. Interestingly, his focus is on ubiquitous computing and he highlights the impact of the hyperlink, but he hardly engages with the computational intelligence described above. Core to the more recent ambient intelligence is the fact that human beings are anticipated by complex, invisible computing systems (Stiegler 2013). Their capacity to generate data derivatives (Amoore 2011) and to pre-empt our intentions on the basis of personalised inferences creates what Catherine Dwyer (2009) has called 'the inference problem'. The thingness of our artificial environment seems to turn into a kind of subjectivity, acquiring a form of agency. In other work I have suggested that social science has long since recognised the productive nature of the inference problem that nourishes relationships between humans (Hildebrandt 2011a). Notably, the sociologists Parsons and Luhmann spoke of the so-called double contingency that determines the fundamental uncertainty of human interaction (Vanderstraeten 2007). Since I can never be sure how you will read my words or my actions, I try to infer what you will infer from my behaviours; the same goes for you. We are forever guessing each other's interpretations.
Žižek (1991) has qualified the potentially productive nature of this double and mutual anticipation by suggesting that 'communication is a successful misunderstanding'. What is new here is that the computational layer that mediates our access to knowledge and information anticipates us, creating a single contingency: whereas it has access to Big Data to make its inferences (Mayer-Schönberger and Cukier 2013), we have no such access and no way of guessing how we are being 'read' by our novel smart environments.
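This asymmetry can be made concrete with a minimal sketch in Python: a toy profiler updates hidden preference scores from observed behavioural signals and uses them to pre-select what a user gets to see. The class, its scoring rule and all names are hypothetical, invented purely for illustration; they do not describe any actual profiling system.

```python
from collections import defaultdict


class ProfilingMediator:
    """Toy illustration of a 'single contingency': the system infers a
    profile from observed behaviour, but exposes only its selections,
    never the inferences behind them. Entirely hypothetical sketch."""

    def __init__(self, learning_rate=0.3):
        self.learning_rate = learning_rate
        # Inferred preferences per content category; invisible to the user.
        self._scores = defaultdict(float)

    def observe(self, category, clicked):
        # Nudge the hidden score towards 1.0 on a click, towards 0.0 otherwise.
        target = 1.0 if clicked else 0.0
        self._scores[category] += self.learning_rate * (target - self._scores[category])

    def select(self, items):
        # Pre-empt the user: rank items by the inferred, invisible profile.
        return sorted(items, key=lambda it: self._scores[it["category"]], reverse=True)


mediator = ProfilingMediator()
for category, clicked in [("sport", True), ("politics", False), ("sport", True)]:
    mediator.observe(category, clicked)

items = [{"id": 1, "category": "politics"}, {"id": 2, "category": "sport"}]
ranked_ids = [it["id"] for it in mediator.select(items)]
print(ranked_ids)  # the user sees only this ordering, not the scores behind it
```

The point of the sketch is the one-way mirror: `observe` and `select` both read and write `_scores`, but the only output the user ever encounters is the ranked list, which is precisely the situation of being 'read' without being able to read back.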
1.4 Which Are the Challenges?
If going Onlife refers to immersing ourselves in the novel environments that depend on and nourish the computational layers discussed above, then going Onlife will require new skills, different capacities and other capabilities. To avoid becoming merely the cognitive resource for these environments, we must figure out how they are anticipating us. We must develop ways to extend the single contingency into a renewed double contingency. How to read in what ways we are being read? How to guess the manner in which we are being categorised, foreseen and pre-empted? How to keep surprising our environments, how to move from their proaction to our interaction? In other work I have suggested that we need to probe at least two tracks: first, to develop human-machine interfaces that give us unobtrusive, intuitive access to how we are being profiled, and, second, a new hermeneutics that allows us to trace at the technical level how the underlying algorithms can be 'read' and contested (Hildebrandt 2011b, 2012). For now, the point I would like to make is that the implications of going Onlife cannot be reduced to privacy and data protection. I hope that the preceding analysis demonstrates a far more extensive impact, one that cannot be understood solely in terms of the wish to hide one's personal data. It requires more than that; indeed, it challenges us to engage with our environments as if we were taking 'the intentional stance' towards them (Dennett 2009).