In this paper, I discuss what I call a new control problem related to AI in the form of humanoid robots and compare it to the old control problem related to AI more generally. The old control problem – discussed by authors such as Alan Turing, Norbert Wiener, and Roman Yampolskiy – is the worry that we might lose control over advanced AI technologies, which is seen as something instrumentally bad. The new control problem is that for certain types of AI technologies – in particular, AI technologies in the form of lifelike humanoid robots – there might be something problematic, at least from a symbolic point of view, about wanting to control them completely. The reason is that such robots might be seen as symbolizing human persons, so that wanting to control them might in turn be seen as symbolizing something non-instrumentally bad: persons controlling other persons. Stated more generally, the new control problem is the problem of describing under what circumstances having complete control over AI technologies is unambiguously good from an ethical point of view. This paper sketches an answer by also discussing AI technologies that do not take the form of humanoid robots and over which control can be conceptualized as a form of extended self-control.