So-called “killer robots”, i.e., lethal autonomous weapon systems (LAWS) capable of selecting and attacking military targets without human intervention or control, are a particularly controversial topic in machine ethics. Many prominent AI researchers and scientists are calling for a ban on these devices. This paper discusses three ethical objections to LAWS: (1) the argument from the responsibility gap (Sparrow), (2) the argument from human agency (Leveringhaus), and (3) the argument from moral duty (Misselhorn). These three arguments raise fundamental ethical concerns about LAWS. They are supposed to show that lethal autonomous weapon systems would not merely have morally bad consequences but that the use of killer robots is morally wrong in itself.