Abstract
The massive introduction of advanced military technologies makes it important to address ethical issues related to the potential use of lethal autonomous robot systems (LARS) in warfare. Hence, this article sets out to:
1. explore human–robot interaction in a military context. Philosophically speaking, artificial agents without inner states can be seen as an obstacle to the formation of relations between humans and robots; but from a psychological perspective, soldiers bond with technologies and may, in some situations, even have good reasons for preferring robots over humans. Nevertheless, one may question whether this observation lends support to the idea of introducing LARS.
2. establish a Moral Military Turing Test (MMTT) as a springboard for a discussion of programming approaches to machine morality. Here, a hybrid model combining a top-down, theoretically driven implementation of a moral framework with a bottom-up adaptive architecture represents a promising approach, although one may doubt whether phronesis is at all computationally tractable.
3. discuss whether one can assign moral standing to machines. In complex, technologically mediated contexts, relations of responsibility are hard to capture with reference to Kantian autonomy as a prerequisite for moral agency. Moving beyond the warfare context, in some settings it seems worthwhile to allow moral responsibility to be distributed between human and artificial agents. But this solution has little to offer in the warfare domain, since here one has to be able to hold individuals responsible in order to acknowledge the Würde of the victims of a war.