Over the past years, scholars have increasingly debated why we should, or should not, deploy specimens of AI technology, such as robots, on the battlefield, in the market, or in our homes. Among the moral theories that assess what is right, or wrong, about a robot's behaviour, virtue ethics, rather than utilitarianism or deontology, offers a fruitful approach to the debate. The context sensitivity and bottom-up methodology of virtue ethics fit hand in glove with the unpredictability of robotic behaviour, since they involve trial-and-error learning of what makes a given robot's behaviour good or bad. However, even advocates of virtue ethics admit the limits of their approach: the more complex societies become, the less effective shared virtues are, and the more we need rules on rights and duties. By reversing the Kantian idea that a nation of devils can establish a state of good citizens, if they “have understanding,” we can say that even a nation of angels would need the law in order to further their coordination and collaboration. Accordingly, the aim of this paper is twofold: to show that a set of perfect moral agents, namely a bunch of angelic robots, would still need rules; and to show that no single moral theory can instruct us as to how to legally bind our artificial agents through AI research and robotic programming.