Backdoor attacks have become a significant threat to deep neural networks (DNNs): poisoned models perform well on benign samples but produce incorrect outputs on inputs stamped with a specific trigger. These attacks are usually carried out through data poisoning, i.e., injecting poisoned samples (samples patched with a trigger and mislabelled as the target class) into the training set, so that any model trained on that set is implanted with the backdoor. However, most existing backdoor attacks lack stealthiness and robustness because their fixed trigger patterns and mislabelling can easily be detected by humans or by backdoor defense approaches. To address this issue, we propose a frequency-domain-based backdoor attack that implants the backdoor without mislabelling the poisoned samples or accessing the training process. We evaluated our approach on four benchmark datasets under two popular scenarios: no-label self-supervised learning and clean-label supervised learning. The experimental results demonstrate that our approach achieves a high attack success rate (above 90%) on all tasks without significant performance degradation on the main task, and remains robust against mainstream defense approaches.
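The abstract does not specify how the frequency-domain trigger is constructed, so the following is only an illustrative sketch of the general idea of clean-label, frequency-domain data poisoning: a fixed perturbation is added to mid-frequency DCT coefficients of images from the target class, leaving labels untouched. The function name `add_frequency_trigger` and the parameters `magnitude`, `band`, and `seed` are hypothetical and not taken from the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def add_frequency_trigger(image, magnitude=0.03, band=(8, 16), seed=0):
    """Embed a fixed perturbation into mid-frequency DCT coefficients.

    image: float32 array in [0, 1] with shape (H, W, C).
    A pseudo-random pattern with a fixed seed serves as the trigger,
    so every poisoned sample carries the same frequency-domain
    signature while remaining visually close to the original.
    """
    rng = np.random.default_rng(seed)
    lo, hi = band
    # Fixed trigger pattern shared by all poisoned samples.
    pattern = rng.choice([-1.0, 1.0], size=(hi - lo, hi - lo))
    poisoned = np.empty_like(image)
    for c in range(image.shape[2]):
        coeffs = dctn(image[..., c], norm="ortho")   # 2-D DCT of one channel
        coeffs[lo:hi, lo:hi] += magnitude * pattern  # perturb a mid-frequency band
        poisoned[..., c] = idctn(coeffs, norm="ortho")
    return np.clip(poisoned, 0.0, 1.0)

# Clean-label poisoning: patch a subset of target-class images only,
# keeping their original (correct) labels, e.g.:
#   x_train[idx] = add_frequency_trigger(x_train[idx])
```

Because the perturbation lives in the frequency domain and the labels stay correct, such poisoned samples are harder to flag than classic pixel-patch triggers with flipped labels, which matches the stealthiness motivation stated in the abstract.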