
Ebook: Digital Enlightenment Yearbook 2014

Tracking the evolution of digital technology is no easy task; changes happen so fast that keeping pace presents quite a challenge. This is, nevertheless, the aim of the Digital Enlightenment Yearbook.
This book is the third in the series, which began in 2012 under the auspices of the Digital Enlightenment Forum. This year the focus is on the relationship of individuals with their networks, exploring “Social networks and social machines, surveillance and empowerment”. In what is now the well-established tradition of the yearbook, different stakeholders in society and various disciplinary communities (technology, law, philosophy, sociology, economics, policymaking) bring their very different opinions and perspectives to bear on this topic.
The book is divided into four parts: the individual as data manager; the individual, society and the market; big data and open data; and new approaches. These are bookended by a Prologue and an Epilogue, which provide illuminating perspectives on the discussions in between. The division of the book is not definitive; it suggests one narrative, but others are clearly possible.
The 2014 Digital Enlightenment Yearbook gathers together the science, social science, law and politics of the digital environment in order to help us reformulate and address the timely and pressing questions which this new environment raises. We are all of us affected by digital technology, and the subjects covered here are consequently of importance to us all.
Digital tools are changing the world we live in. Economically, they are transforming countless sectors: how we buy, consume and receive; business and distribution models; the products and services we enjoy. In the meantime, they raise many issues for society, too: from privacy and the right to be forgotten to the impact on employment – and how to ensure a workforce able to benefit from tomorrow's digital jobs. From the smartphone to the social network, we are increasingly taking new digital tools for granted. We are slowly discovering what implications they have for us – how we interact, how we transact; even how we legislate. They challenge old systems and old ways of thinking.
In this context, there are two points I would like to make.
First, these changes are inevitable. The benefits on offer are so great and the competition to achieve them so fierce that it is not in question whether or not these changes will happen; the question is when, and how. So we need to have our response prepared. I want Europe to be in the lead and in control, and that means not opposing or blocking change, but understanding and mastering it.
Second, if we understand these changes properly, we can ensure that they provide opportunities, and not just threats. In an area such as privacy, for example, technology can be a tool empowering people to take control. Digital technology holds an enormous positive potential for people, and society: in education or energy, in development or healthcare.
It is not policymakers who will come up with the innovations or create the jobs that new technology allows, but we can ensure that the right networks and frameworks are in place. Evidence and academic understanding can help us to do that, so that our policies do not limit innovations, even if they are disruptive. Research and data can increase our understanding of their impact on society, and the opportunity that they hold.
Change is both inevitable and essential to our future development as a society. Publications like this help us deal more effectively with such change. Therefore, I welcome the contributions in this yearbook.
The big data revolution, like many changes associated with technological advancement, is often compared to the industrial revolution to create a frame of reference for its transformative power, or portrayed as altogether new. This article argues that between the industrial revolution and the digital revolution is a more valuable, yet overlooked period: the probabilistic revolution that began with the avalanche of printed numbers between 1820 and 1840. By comparing the many similarities between big data today and the avalanche of numbers in the 1800s, the chapter situates big data in the early stages of a prolonged transition to a potentially transformative epistemic revolution, like the probabilistic revolution. As scholars and policymakers consider the type of newness presented by big data, the comparative exercise utilizes history to organize, address, and refine issues emerging today.
Technologies that enable us to capture and publish data with ease are likely to pose new concerns about the privacy of the individual. In this article we examine the privacy implications of lifelogging, a new concept being explored by early adopters, which utilises wearable devices to generate a media-rich archive of the wearer's life experience. The concept of privacy and the privacy implications of lifelogging are presented and discussed in terms of the four key actors in the lifelogging universe. An initial privacy-aware lifelogging framework, based on the key principles of privacy by design, is presented and motivated.
The use of personal data has incredible potential to benefit both society and individuals through increased understanding of behaviour, communication and support for emerging forms of socialisation and connectedness. However, there are risks associated with disclosing personal information, and present systems show a systematic asymmetry between the subjects of the data and those who control and manage the way that data is propagated and used. This leads to a tension between a desire to engage with online society and enjoy its benefits on one hand, and a distrust of those with whom the data is shared on the other. In this chapter, we explore a set of obfuscation techniques which may help to redress the balance of power when sharing personal data, and return agency and choice to users of online services.
The law is not effectively addressing the potential harm caused by certain digital processing of personal data, as illustrated by the recent Google Spain judgment. A better way of dealing with privacy harms is to regulate at the point in the information lifecycle where the potential for harm is created. This article will review the Google Spain decision and reactions to it, consider the implications for new technologies that facilitate surveillance and profiling by individuals, such as Google Glass, and suggest that permitting limited privacy vigilantism might have a role to play in the solution.
This chapter explores the potential for market-driven personal data empowerment. It begins by discussing the results of a recent study into the economic and business case for personal information management services (PIMS), providing an overview of the variety of services and some key market developments. The following sections consider the problems that PIMS might be able to solve, and the extent to which their solutions could be genuinely empowering, by reference to some key aspects of Enlightenment thought, starting with Adam Smith's notion of enlightened self-interest, which later developed into the concept of homo economicus at the heart of classical economics. I argue that this ideal became ever less realistic as the economic system it theoretically justified became ever more complex, with the result that in many markets, businesses profit from the fact that individuals are ill-equipped to deal with such complexity. I propose that the seeds of a solution can be found in the notion of an ‘ideal observer’, also originating in Enlightenment thought: a hypothetical ideal version of an individual, with perfect knowledge and rationality, whose perspective and insight can steer its ordinary, fallible human counterpart towards better decisions and actions. In so far as PIMS can help individuals approximate this abstract agent, they offer the possibility of genuine enlightenment and empowerment, fit for the complexity of modern consumer markets.
This chapter addresses the privacy and security challenges presented by the rapid evolution of the Internet of Things. The exponential growth of potentially sensitive consumer data flowing from a wide range of newly connected devices has the potential to help solve societal challenges and improve consumers' lives, but such vast quantities of data also present privacy and security risks. The chapter describes some of the potential insights that can be gained through connected devices, and then lays out two risks posed by the Internet of Things: the potential for the use of sensitive personal data to make decisions about consumers without safeguards to protect them, and the possibility that privacy concerns will prevent consumers from embracing and benefitting from the Internet of Things. The chapter also discusses behind-the-scenes data collectors, known as data brokers, who play a key role in how information from connected devices can be collected and used in surprising and potentially disconcerting ways. The chapter proposes improved practices for device and service providers, including privacy by design; robust deidentification of consumer data; and transparency measures, such as clear and concise notices. The chapter also calls for improved practices—and ultimately legislation—to provide greater transparency and accountability within the data broker industry.
As governments increasingly deliver services over the Internet, the opportunities for monitoring and surveillance of society increase. In public services to support the vulnerable, such as welfare, monitoring and surveillance functionality is often regarded by system designers as an important component in defences against fraud and system misuse. However, the responses from the participants in this study demonstrate the potential difficulty of deploying such approaches when the systems themselves are perceived as working against, not with, the communities, and indicate that supportive social networks are a prerequisite for these technological systems to be secure. We explored the case of the use of the Internet to deliver parts of the UK welfare system from the perspective of an economically and socially deprived community in the North East of England. The findings show that, in the views of the research participants, reliance on technological security mechanisms makes the underlying administrative processes less rather than more secure. The findings also show that a focus on system security and monitoring rather than benevolence and user empathy is a barrier to the successful delivery of ‘digital by default’ services and can increase the overall feelings of insecurity in everyday life for service users. Our conclusion is that rather than being regarded as a technical system, such a service is better conceptualised as a social system with technological elements embedded within it. We therefore also argue that if such technological systems are to be secure, then the service design must also support the social networks that interact with these systems. We further argue that service providers must work with individual communities to develop and support these social networks in order for the technological security controls to be effective.
Personal data is increasingly important as a means for brands to mediate their relationships with consumers. A huge industry has built up around the collection, aggregation, trading and analytics of this data in a bid to generate growth for brands. Much of the agenda has implicitly assumed a linear relationship between the increased use of personal data and business growth. But research on consumer responses to increasing personalisation of marketing communications finds evidence for an ‘uncanny valley’ which can lead to consumers becoming less engaged with the brand. The paper uses the Johari Window as a model for exploring the interpersonal nature of consumer brand relationships and discusses the need for a greater focus on psychological, social and cultural factors that determine the way in which relationships are mediated through the use of personal data.
We are starting to see examples where big data analytics are used for humanitarian goals and social development. Amongst others, Data for Development (D4D), the project set up by Orange in Ivory Coast and Senegal, demonstrated the value and feasibility of sharing mobile data from private companies in a responsible way. It uses a risk-based approach to reconcile respect for users' privacy with open innovation in the service of local development and humanitarian interventions. This chapter will argue that big data technologies are able to disrupt the way we address development goals, and that their responsible usage is an extraordinary opportunity as we face increasing challenges for the world at large. It is therefore a necessity for those in the field to find a way to develop the domain of Data for Development, by defining appropriate regulations, opening access to multiple data sets, and setting up the necessary processes and data-sharing culture. This will contribute to the improvement of public policies and the creation of new services that improve access to knowledge, and foster collaboration between people for the welfare and resilience of our societies.
We are moving from the world of relevance, where search engines dominate the web and users consume passively, to a world of resonance, where the user is a key co-creator. Data is at the heart of this transition, and open data in particular is having a significant impact on the evolution of our social models. I have observed that open data has the power to engage citizens in important new ways, leading to increased collaboration and participation. At a time when populations express dissatisfaction with government, any positive engagement is worthy of exploration and study. I will chart the recent history of open data and illustrate the outcomes of several of the key events. Revelations that governments are routinely harvesting citizens' private data, as well as debates about the secondary use of data they hold, have impacted the open data agenda. These events have alerted citizens to the potential impact on their digital life, and have prompted civil society movements to join and enrich the debate. This additional collaboration is now moving the agenda towards policy co-creation with the citizen. With the use of social machines there is a real opportunity to advance towards an inclusive social model that delivers on the promise of web science: ‘It's for everyone’.
It is easier than ever to collect data about all kinds of aspects of daily life. This data can have significant value for the people who produce it, but also for corporations and governments. This chapter investigates how this process of ‘datafication’ operates in the space of the city, where globalization and other political-economic pressures have shifted the notion and practice of citizenship. Through an investigation of the concepts of ‘openness’ and ‘transparency’ the chapter highlights some significant shifts in the power relations related to data collection and use, and identifies directions for future research and policy making in this area.
The social machines paradigm provides a lens onto the interacting sociotechnical systems of our hybrid digital-physical world, citizen-centric and at scale. It facilitates our understanding of new social processes and provides a model for the design of future systems as we move into a world of pervasive technology deployment and increasing automation. The scale and hybridity of social machines demand new methodological approaches that are set to underpin tomorrow's ‘normal science’.
Part of the power of social computation comes from using the collective intelligence of humans to tame the aggregate uncertainty of (otherwise) low veracity data obtained from human and automated sources. We have witnessed a surge in development of social computing systems but, ironically, there have been few attempts to generalise across this activity so that creation of the underlying mechanisms themselves can be made more social. We describe a method for achieving this by standardising patterns of social computation via lightweight formal specifications (we call these social artifacts) that can be connected to existing internet architectures via a single model of computation. Upon this framework we build a mechanism for extracting provenance meta-data across social computations.
In previous work, we have introduced the data environment as a powerful explanatory concept in the realm of data privacy, with specific regard to the concepts of anonymisation and statistical disclosure. Here, we explain the concept more fully and examine how the sociotechnical system that is social media embodies and transforms the social data environment. We draw on both a social philosophy and a sociological framework, but populate that framework with both agents and artefacts. We argue that the data environment concept is an important organising principle, with implications for epistemology and ontology, and for how we understand notions like disclosure, identity, society and, perhaps most critically, privacy.
The article takes an interdisciplinary look at how digital technologies can have an impact on today's society. The field of Media and Communication Studies (MCS), in particular, can help in various ways towards better understanding the relationship between social media, privacy and empowerment. We discuss how MCS can enrich technological privacy research on these topics through its view of new media as (technological) tools for mediation. The latter perspective incorporates three inextricable and mutually determining components: artefacts, practices and social arrangements. This threefold approach offers particular value for better understanding privacy in technologically mediated communications, while avoiding the reductionist view of technological determinism. To illustrate our viewpoint we use the example of the role of algorithms in the social interaction between people. We conclude that a multifaceted perspective on media and communication between people changes and broadens the framing of online privacy and user empowerment. It also helps to delineate a realistic and multidimensional picture of users and their new media usage. In that way we are able to frame people's communication behaviour more holistically, taking fully into account their attitudes, beliefs, values and practices in digital systems.
The emergence of big data and the data revolution raises a number of governance concerns about the nature and use of data. This chapter describes nine such concerns that an international group of data experts articulated and explored during a series of online discussions devoted to that issue. I then conclude the chapter by arguing against a common interpretation of evidence-based policy decisions – namely, the use of data to try to justify or promote public policy proposals – and in favor of a more critical, and self-critical, approach to evidence-based public policy decisions that uses data to criticize policy proposals instead of trying to justify them. I also argue that we should pay greater attention to the underlying philosophical beliefs, concerns, goals, values, interests, and priorities that motivate such proposals.