New technologies that offer potential strategic or military advantages to state principals can disrupt previously stable power distributions and global governance arrangements. Artificial intelligence is one such critical technology, with many anticipating ‘arms races’ among major powers to weaponize AI for widespread use on the battlefield. This raises the question of whether, and how, one may design stable and effective global governance arrangements to contain or channel the militarization of AI. Two perceptions implicit in many debates are that (i) “AI arms races are inevitable,” and (ii) “states will be deaf to calls for governance where that conflicts with perceived interests.” Drawing a parallel with historical experiences in nuclear arms control, I argue that this history suggests that (1) horizontal proliferation and arms races are not inevitable, but may be slowed, channeled, or even averted; and that (2) small communities of experts, appropriately mobilized, can catalyze arms control and curb vertical proliferation of AI weapons.