Adversarial machine learning (AML), which designs attacks that intentionally break or misuse state-of-the-art machine learning models, has become the most prominent scientific field for exploring the security aspects of Artificial Intelligence. A whole range of vulnerabilities, previously irrelevant in traditional ICT, has emerged from these studies. In light of upcoming legislation mandating security requirements for AI products and services, there is a need to understand how AML techniques connect with the broader field of cybersecurity, and how to articulate threat models more tightly with realistic cybersecurity procedures.
This article aims to help close the gap between AML and cybersecurity by proposing an approach to studying the feasibility of an attack within a cybersecurity risk assessment framework, illustrated with a specific use case: an evasion attack designed to fool traffic sign recognition systems in the physical world. The importance of considering the feasibility of carrying out such attacks under real conditions is emphasized through the analysis of two factors: the reproducibility of the attack from a published description or existing code, and the applicability of the attack by a malicious actor operating in a real-world environment.
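To make the notion of an evasion attack concrete, the sketch below shows a minimal gradient-based evasion attack in the style of FGSM against a generic image classifier. It is purely illustrative and not the physical-world attack studied in the article; the PyTorch model, the batched input tensor, and the perturbation budget epsilon are assumptions introduced here for illustration.

```python
# Minimal FGSM-style evasion sketch (illustrative only; not the attack
# analyzed in the article). Assumes a PyTorch classifier `model`, a batched
# input `image` of shape (1, C, H, W) with values in [0, 1], and an integer
# class-label tensor `label` of shape (1,).
import torch
import torch.nn.functional as F

def fgsm_evasion(model, image, label, epsilon=0.03):
    """Craft an adversarial example that pushes `image` away from `label`."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    loss = F.cross_entropy(logits, label)
    loss.backward()
    # Perturb each pixel by epsilon in the direction that increases the loss.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()
```

A digital perturbation like this is exactly the kind of attack whose reproducibility (from a published description or code) and real-world applicability the proposed risk assessment approach is meant to evaluate.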