The distributed nature of federated learning makes it highly susceptible to backdoor attacks, which aim to induce the model to produce incorrect results when specific trigger data is input. Existing distance-based defense methods mostly rely on a single distance metric, which backdoor attacks can easily circumvent. We therefore investigate combining multiple distances and, based on this, propose a multi-metric backdoor defense algorithm that evaluates models through distance calculation and normalization in order to select benign models for aggregation. To verify the effectiveness of our method, we conducted experiments on three datasets; the results show that our method provides good defense without reducing the model’s main-task accuracy.
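The selection step described above (score each client model with several distances, normalize the scores so they are comparable, and keep the lowest-scoring models for aggregation) can be sketched as follows. This is a minimal illustration, not the paper's exact method: the abstract does not name the specific distances, so Euclidean and cosine distance to a coordinate-wise median reference are assumed here, as is min-max normalization.

```python
import numpy as np

def select_benign(updates, n_select):
    """Score each flattened client update with multiple distance metrics,
    min-max normalize each metric across clients, and return the indices
    of the n_select models with the smallest combined score.

    Illustrative choices (not from the paper): Euclidean and cosine
    distance, measured against the coordinate-wise median update, which
    is more robust to poisoned outliers than the mean.
    """
    updates = np.asarray(updates, dtype=float)   # shape: (n_clients, n_params)
    reference = np.median(updates, axis=0)       # robust reference point

    # Metric 1: Euclidean distance to the reference.
    euclid = np.linalg.norm(updates - reference, axis=1)

    # Metric 2: cosine distance (1 - cosine similarity) to the reference.
    dots = updates @ reference
    norms = np.linalg.norm(updates, axis=1) * np.linalg.norm(reference)
    cosine = 1.0 - dots / np.maximum(norms, 1e-12)

    # Min-max normalize each metric so their scales are comparable,
    # then combine by summation.
    def minmax(x):
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)

    score = minmax(euclid) + minmax(cosine)
    return np.argsort(score)[:n_select]          # indices of benign models
```

Using two metrics closes the gap a single metric leaves open: an attacker can keep a poisoned update close in magnitude (small Euclidean distance) or close in direction (small cosine distance), but satisfying both normalized criteria simultaneously is harder, which is the intuition behind combining multiple distances.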