In federated learning systems, malicious participants can poison the shared model by injecting backdoor triggers into their local training data. To address this vulnerability, this paper proposes a new defense, the adaptive differential privacy stochastic gradient descent (ADP-SGD) algorithm, which enlarges the Euclidean distance between malicious and benign updates so that robust aggregation can separate them more effectively. Experiments on standard benchmark datasets show that combining ADP-SGD with the m-Krum aggregation algorithm effectively defends against backdoor attacks in complex attack environments: it preserves main-task accuracy while significantly reducing the backdoor attack success rate, even when the proportion of malicious participants is unknown, demonstrating both effectiveness and robustness.
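The two building blocks named in the abstract can be sketched as follows. The m-Krum aggregator is the standard one from the Byzantine-robust aggregation literature: each update is scored by the summed squared distances to its n − f − 2 nearest neighbours, and the m lowest-scoring updates are averaged. The ADP-SGD step below is only an illustrative assumption: the abstract does not specify the paper's adaptation rule, so a simple per-round decay of the Gaussian noise scale is used as a placeholder (the function name `adp_sgd_update` and its parameters are hypothetical).

```python
import numpy as np

def adp_sgd_update(grad, clip_norm=1.0, base_sigma=0.1, rounds_done=0,
                   decay=0.99, rng=None):
    """Clip the gradient to clip_norm, then add Gaussian noise whose scale
    decays with the round count. The decay schedule is an assumed stand-in
    for the paper's adaptive rule, which the abstract does not detail."""
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    sigma = base_sigma * (decay ** rounds_done)
    return clipped + rng.normal(0.0, sigma * clip_norm, size=grad.shape)

def m_krum(updates, f, m):
    """m-Krum: score each update by the summed squared distances to its
    n - f - 2 nearest neighbours, then average the m lowest-scoring updates."""
    n = len(updates)
    U = np.stack(updates)
    # Pairwise squared Euclidean distances between all updates.
    d2 = ((U[:, None, :] - U[None, :, :]) ** 2).sum(axis=-1)
    k = n - f - 2
    scores = np.array([np.sort(np.delete(d2[i], i))[:k].sum() for i in range(n)])
    chosen = np.argsort(scores)[:m]
    return U[chosen].mean(axis=0)
```

A quick illustration of the intended effect: if six benign clients submit updates near (1, 1) and one malicious client submits (10, −10), its Krum score is far larger than any benign score, so it is excluded from the average.
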