How do we develop artificial intelligence (AI) systems that adhere to the norms and values of our human practices? Is it a promising idea to build such systems on the principles of normative frameworks such as consequentialism, deontology, or virtue ethics? According to many researchers in machine ethics – a subfield exploring the prospects of constructing moral machines – the answer is yes. In this paper, I challenge this methodological strategy by exploring the difference between normative ethics – its use and abuse – in human practices and in the context of machines. First, I discuss the purpose of normative theory in human contexts, including its main strengths and drawbacks. I then describe several moral resources central to the success of normative ethics in human practices. I argue that machines, currently and in the foreseeable future, lack the resources needed to justify the very use of normative theory. Instead, I propose that machine ethicists should pay closer attention to the multifaceted ways in which normativity serves and functions in human practices, and to how artificial systems can be designed and deployed to foster the moral resources that allow such practices to prosper.