The theme of this volume is the multi-faceted ‘computational turn’ that is occurring through the interaction of the disciplines of philosophy and computing. In computer and information science, there are significant conceptual and methodological questions that require reflection and analysis. Moreover, digital, information, and communication technologies have had a tremendous impact on society, which raises further philosophical questions. This book aims to facilitate communication across the boundaries of this multidisciplinary field, so that its diversity of perspectives and methods proves a source of strength and collaboration rather than a source of instability and disintegration. The first three contributions explore the phenomenon of virtual worlds. The next four focus on robots and artificial agents. A further group of chapters discusses the relation between human mentality and information processing in computers, and the final section covers a broad range of issues at the interface of computers and society.
This volume collects eighteen essays presented at the fifth annual European Conference on Computing and Philosophy (ECAP), held June 21–23, 2007, at the University of Twente, the Netherlands. It represents some of the best of the more than eighty papers delivered at the conference. The theme of ECAP 2007 was the multi-faceted “computational turn” that is occurring through the interaction of the disciplines of philosophy and computing. The conference was organized under the supervision of the International Association for Computing and Philosophy (IACAP). IACAP promotes scholarly dialogue and research on all aspects of the computational and informational turn, and on the use of information and communication technologies in the service of philosophy.
There are good reasons for both computer and information scientists and philosophers to do research at the intersection of computing and philosophy. In computer and information science, there are significant conceptual and methodological questions that require reflection and analysis. What, for example, is information? What are good methods in computer science, and can computer science be a genuine science? Is true artificial intelligence possible? These are questions asked by computer scientists and philosophers alike. Moreover, digital, information, and communication technologies have had a tremendous impact on society, which raises further philosophical questions. How, for example, are digital technologies changing our conception of reality, or of knowledge and information? How is the Internet changing human life, and is it creating a digital existence for us next to a physical existence? What are the ethical questions that both computer scientists and users of information technology are facing?
ECAP 2007, with over a hundred participants, has left us, the editors, with a sense that the multidisciplinary constellation of computing and philosophy is a vibrant and dynamic field, which has its best days still ahead of it. The manifold explorations at the intersection of computing and philosophy are yielding insights of both intrinsic interest and societal value. Yet, this multidisciplinary endeavor also presents a challenge. Like other such endeavors, it must continuously work to ensure that its diversity of perspectives and methods proves a source of strength and collaboration rather than a source of instability and disintegration. In short, we must always strive to communicate across boundaries.
It is our hope that the present volume facilitates this task. That hope raised a more specific challenge for us as editors: regrettably, we simply could not include all of the papers presented at ECAP. In making our selection, our guiding ambition was to create a snapshot of the field that would be of interest to an audience as diverse as ECAP itself. With that goal in mind, we have compiled top-quality, accessible essays representing many of the most important areas of inquiry in the field.
In organizing the essays, we have taken a topical approach. The first three contributions explore the phenomenon of virtual worlds. The essays discuss ethical, anthropological, and ontological issues regarding such worlds and the avatars that inhabit them. The next four chapters focus on robots and artificial agents. They cover issues regarding human-robot interaction, agency in robots, and the social and ethical aspects of robotics, including military applications. The next group of chapters discusses the relation between human mentality and information processing in computers. The essays consider the nature of representations in digital systems, the relations between data, information and knowledge, the relationships between computers and their users, and the nature of synthetic emotions. The final section covers a broad range of issues at the interface of computers and society. The cases discussed here include the educational potential of an intelligent tutoring system and a novel computer programming language, the integration of ethical principles into software design, the underrepresentation of women in computer science studies, and the way Internet users assess the trustworthiness of the information they encounter online.
We would like to thank IACAP and the ECAP steering committee for giving us the opportunity to organize the conference in 2007 and for helping us in publishing this volume. To guarantee high scientific quality, all contributions to this volume were read by several referees. We would like to thank them here for their time and their vital contribution to this volume. Special thanks go to our Master's student Maurice Liebregt, who put a substantial amount of time and effort into the layout of the chapters and who very patiently incorporated all last-minute corrections.
After a brief introduction that sets out the overall argument of the paper in summary, the second part of the paper will offer a meta-ethical framework based on the moral theory of Alan Gewirth, necessary for determining what ethics, if any, ought to guide the conduct of people participating in virtual worlds in their roles as designers, administrators, and players or avatars. As virtual worlds, and the World Wide Web generally, are global in scope, reach, and use, Gewirth's theory is particularly suitable for this task: it offers a supreme principle of morality, the Principle of Generic Consistency (PGC), which establishes universal rights for all persons always and everywhere. The paper will show that persons both in the real world and in virtual worlds have rights to freedom and well-being. Strictly with regard to agency, those rights are merely prima facie; with regard to personhood, framed around the notion of self-respect, those rights are absolute. The third and final part of the paper will examine in more practical detail why and how designers, administrators, and avatars of virtual worlds are rationally committed, on the basis of their own intrinsic purposive agency, to ethical norms of conduct that require universal respect for the rights of freedom and well-being of all agents, including their own.
Using Alan Gewirth's argument for the Principle of Generic Consistency (Reason and Morality, 1978) and my expanded argument for the PGC in my Ethics Within Reason: A Neo-Gewirthian Approach (2006), the paper will specifically seek to demonstrate that, insofar as avatars can be viewed as virtual representations or modes of presentation of real people, and thus can and must be perceived as virtual purposive agents, they have moral rights and obligations similar to those of their real counterparts. This holds at least with regard to those virtual worlds in which the virtual agency of the avatar can be considered an extension of the agency of the person instantiating the avatar in the real world. Finally, the paper will show how the rules of virtual worlds, as instantiated by the designers' code and the administrators' end-user license agreement (EULA), must always be consistent and comply with the requirements of universal morality as established on the basis of the PGC. When the two come into conflict, the PGC, as the supreme principle of morality, is always overriding.
This paper introduces an alternative view of virtual environments based on an analysis of two opposing views: the Traditional View and the Ecological View. The Traditional View argues for a representational account of perception and action that can be mapped onto virtual settings. The Ecological View, which is inspired by Gibson's ecological approach to perception, holds that perception and action are inseparable, embodied processes that do not imply mental representations. The alternative view put forward here calls for an articulation of the two opposing views, namely the Ecological/Representational view of virtual environments, provided the notion and levels of representation implied in perceptual and agentic processes are functionally defined.
There seems to be a difference between the way we interact with reality and the reality we experience while playing computer games. I will argue that one of the most important features distinguishing the external world (or Open Reality) from the reality experienced while playing computer games (or Closed Reality) is the degree of complexity, that is, the richness of the stimuli and the number of options available. One of the main consequences of the lower complexity of Closed Reality is that playing computer games triggers different cognitive alterations in an effortless and automatic manner. The question I ask is what really changes in our cognitive processing when we play computer games. One answer is that there is a change in the agent's cognitive representation of reality. Additionally, I will suggest that there seems to be a change in the cognitive self while playing avatar-based computer games. I will discuss this last point briefly in the context of the problem of identity and its possible psychological implications.
The question of whether a robot can communicate with human beings evokes another question: how can human beings have the feeling that they are usually successful in mutual communication? This question may be answered because the emergence of the mind of the individual and ‘the mind of the community’ are not completely separable. The mind of the community may precede the mind of the individual in society. The complex mechanisms of the emergence of the mind of the community and that of the individual may be effectively studied with Cognitive Robotics in Japan. To promote this study, I develop a hypothesis named “a fabulous game of human beings,” in which each individual can effectively guess the contents of her or his own mind by reading the attitudes and minds of others.
Robotics has progressed substantially over the last 20 years, moving from simple proof-of-concept experimental research to developing market and military technologies that have significant ethical consequences. This paper provides the reflections of a roboticist on current research directions within the field and the social implications associated with its conduct.
While modern states may never cease to wage war against one another, they have recognized moral restrictions on how they conduct those wars. These “rules of war” serve several important functions in regulating the organization and behavior of military forces, and they shape political debates, negotiations, and public perception. While the world has become somewhat accustomed to the increasing technological sophistication of warfare, it now stands at the verge of a new kind of escalating technology, autonomous robotic soldiers, which brings new pressures to revise the rules of war to accommodate them. This paper will consider the fundamental issues of justice involved in the application of autonomous and semi-autonomous robots in warfare. It begins with a review of just war theory, as articulated by Michael Walzer, and considers how robots might fit into the general framework it provides. In so doing, it considers how robots, “smart” bombs, and other autonomous technologies might challenge the principles of just war theory, and how international law might be designed to regulate them. I conclude that deep contradictions arise between the principles intended to govern warfare and our intuitions regarding the application of autonomous technologies to war fighting.
The concept of autonomous artificial agents has become a pervasive feature of the computing literature. The suggestion that these artificial agents will move increasingly closer to humans in terms of their autonomy has reignited debates about the extent to which computers can or should be considered autonomous moral agents. This article takes a closer look at the concept of autonomy and proposes to conceive of autonomy as a context-dependent notion that is instrumental in understanding, describing, and organizing the world. Based on the analysis of two distinct conceptions of autonomy, the argument is made that the limits to the autonomy of artificial agents are multiple and flexible, depending on the conceptual frameworks and social contexts in which the concept acquires meaning. A levelling of humans and technologies in terms of their autonomy is therefore not an inevitable consequence of the development of increasingly intelligent autonomous technologies, but a result of normative choices.
One of the basic principles of the general definition of information is its rejection of dataless information. In general, it is implied that “there can be no information without physical implementation”. Though this is usually considered a commonsensical assumption, many questions arise with regard to its general application. In this paper, a combined logic for data and information is elaborated, and specifically used to investigate the consequences of restricted and unrestricted data-implementation principles.
In this essay, the relation between computers and their human users will be analyzed from a philosophical point of view. I will argue that there are at least two philosophically interesting relationships between humans and computers: functional and phenomenal relationships. I will first analyze the functional relationship between computers and humans. In doing this, I will abstract from ordinary functions of computers, such as word processor, information provider, and gaming device, to arrive at a generalized account of the functional relationship between humans and computers. Next, I will explore the phenomenal relationship between humans and computers, which is the way in which computers transform our experience of and interaction with our environment or world. Both analyses, I will argue, point to a dual role of computers for humans: a cognitive role, in which the computer functions as a cognitive device that extends or supplements human cognition, and an ambient role, in which the computer functions as a simulation device that simulates objects and environments.
Emotions and feelings are basic regulators of human activity. We consider that intelligence is an emergent property of systems and that emotions play a basic role within those systems. Our main aim is to create a system (called The Panic Room) based on a bottom-up “Ambient Intelligence” context which develops a proto-emotion of fear and pleasure in order to detect a dangerous event and react to it. The system labels the signals from the sensors which describe the surroundings as either negative or positive. Either option has a specific signal that is used to change the way further perceptual signals will be processed as well as generate possible behavioural responses to a possible danger. Responses are automatic and embedded (or hardwired) in the system.
Cognition is commonly taken to be computational manipulation of representations. These representations are assumed to be digital, but it is not usually specified what that means and what relevance it has for the theory. I propose a specification for being a digital state in a digital system, especially a digital computational system. The specification shows that identification of digital states requires functional directedness, either for someone or for the system of which the state is a part. In the case of digital representations, the function of the type is to represent, that of the token just to be a token of that representational type.
An emerging alternative to the problem of knowledge looks towards information as playing a critical role in support of an externalist epistemology, a new theory of knowledge that need not rely upon the traditional but problematic tenets of belief and justification. In support of this information-theoretic epistemology, the relationship between information and knowledge, both within philosophical and information technology scholarship, has been viewed as an asymmetric one. This relationship is captured by the commonsense view that objective semantic information is prior to and encapsulated by knowledge. This paper develops an argument that challenges this asymmetric assumption. Drawing on the ideas of Gareth Evans and Timothy Williamson, we shall argue that (at least in some cases) a coextensive relationship must exist between information and knowledge. We conclude with the view that this relationship throws up problems similar to those discussed by Quine in relation to confirmation holism.
We justify the need for better accounts of object recognition in artificial and natural intelligent agents and give a critical survey of the computational-postcomputational schism within the sciences of the mind. The enactive, dynamicist account of conscious perception is described as avoiding many problems of cognitivist functionalism, behaviourism, representationalism, emergentism, and dualism. We formalize the basic structure of the enactive, dynamicist theory of phenomenal consciousness and criticize the externalist presupposition of outside-world objects in this kind of theory. As a remedy, we suggest a sensorimotor account of objectual constitution which assigns an epistemic but not necessarily ontic priority to sense data.
This article reports on recent efforts to develop an intelligent tutoring system for proof construction in propositional logic. The report centers on data derived from an undergraduate, general education course in Deductive Logic taught at the University of North Carolina at Charlotte. Within this curriculum, students use instructional Java applets to practice state-transition problem solving, truth-functional analysis, proof construction, and other aspects of propositional logic. Two project goals are addressed here: (1) identifying at-risk students at an early stage in the semester, and (2) generating a visual representation of student proof efforts as a step toward understanding those efforts. Also discussed is the prospect of developing a Markov Decision Process approach to providing students with individualized help.
Logic has long set itself the task of helping humans think clearly. Certain computer programming languages, most prominently the Logo language, have been billed as helping young people become clearer thinkers. It is somewhat doubtful that such languages can succeed in this regard, but at any rate it seems sensible to explore an approach to programming that guarantees an intimate link between the thinking required to program and the kind of clear thinking that logic has historically sought to cultivate. Accordingly, Bringsjord has invented a new computer programming language, Reason, one firmly based in the declarative programming paradigm, and specifically aligned with the core skills constituting clear thinking. Reason thus offers the intimate link in question to all who would genuinely use it.
The paper offers an analysis of the problem of integrating ethical principles into the practice of software design. The approach is grounded in a review of the relevant literature from Computer Ethics and Professional Ethics. The paper is divided into four sections. The first section reviews some key questions that arise when the ethical impact of computational artefacts is analysed. The inherently informational nature of such questions is used to argue in favour of the need for a specific branch of ethics called Information Ethics. Such ethics deals with a specific class of ethical problems, and informational privacy is introduced as a paradigmatic example. The second section analyses the ethical nature of computational artefacts. This section highlights the fact that this nature is impossible to comprehend without first considering designers, users, and patients alongside the artefacts they create, use, and are affected by. Some key ethical concepts are discussed, such as freedom, agency, control, autonomy, and accountability. The third section illustrates how autonomous computational artefacts are rapidly changing the way in which computation is used and perceived. The section closes with a description of the ethical challenges posed to software engineers by this shift in perspective. The fourth and last section of the paper is dedicated to a discussion of Professional Ethics for software engineers. After establishing the limits of professional codes of practice, it is argued that ethical considerations are best embedded directly into software design practice. In this context, the Value Sensitive Design approach is considered, and insight is given into how it is being integrated into current research in ethical design methodologies.
An overview of recent research concerning the underrepresentation of women in computer science studies indicates that this problem might be more complex than previously assumed. The percentage of female computer science students varies from country to country, and there is also some indication that gender stereotypes are defined differently in different cultures. Gender stereotypes concerning technology are deeply embedded in the dominant culture and often contradictory. Only a few general assertions can be made about the development of the inclusion or exclusion of women from computer science. In addition, there does not seem to be a specific female style of computer usage. Concepts of diversity and ambivalence seem to be more appropriate but difficult to realize. All this makes the development of appropriate interventions for overcoming the underrepresentation of women in computer science studies a very complex process.
Several studies have addressed the issue of what makes information on the World Wide Web credible. Understanding how we select reliable sources of information and how we estimate their credibility has drawn increasing interest in the literature on the Web. In this paper I argue that the study of information search behavior can provide social and cognitive scientists with an extraordinary insight into the processes mediating knowledge acquisition by epistemic deference. I review some of the major methodological proposals for studying how users judge the reliability of a source of information on the World Wide Web, and I propose an alternative framework inspired by the idea that, as cognitively evolved organisms, we adopt strategies that are as effortless as possible. I argue in particular that Web users engaging in information search are likely to develop simple heuristics to select trustworthy sources of information in a cognitively efficient way, and I discuss the consequences of this hypothesis and related research directions.