TY - JOUR
T1 - PFL-ALP: Personalized Federated Learning Against Backdoor Attacks via Attention-Based Local Purification
AU - Jiang, Yifeng
AU - Yuan, Xiaochen
AU - Zhang, Weiwen
AU - Ke, Wei
AU - Lam, Chan Tong
AU - Im, Sio Kei
N1 - Publisher Copyright:
© 2005-2012 IEEE.
PY - 2025
Y1 - 2025
N2 - Federated learning (FL) enables collaborative model training while preserving local data privacy, but is vulnerable to backdoor attacks from malicious clients. These attacks can manipulate the global model to produce malicious outputs when encountering specific triggers. Existing defenses, categorized as server-side and client-side approaches, have limitations such as reliance on auxiliary data availability, susceptibility to inference attacks, and instability under non-independent and identically distributed (Non-IID) data. To address these challenges, we propose Personalized Federated Learning via Attention-Based Local Purification (PFL-ALP), a hybrid defense mechanism integrating server-side dynamic clustering with client-side purification enhanced by personalized model knowledge. This approach effectively mitigates the bias introduced by Non-IID data on the server side and further purifies the backdoored model on the client side. Specifically, we employ neural attention distillation (NAD) for model purification and enhance it with personalized model knowledge, extending the effectiveness of NAD to Non-IID FL settings. This design makes PFL-ALP compatible with privacy protocols that mitigate inference attacks. Moreover, we establish a convergence guarantee for PFL-ALP and experimentally validate its superior performance in defending against various backdoor attacks compared with multiple state-of-the-art (SOTA) defenses across three datasets. The results show that even with malicious rates ranging from 30% to 90%, PFL-ALP reduces the attack success rate by more than 69.4 percentage points, with a reduction in main task accuracy of less than 12.4 percentage points.
AB - Federated learning (FL) enables collaborative model training while preserving local data privacy, but is vulnerable to backdoor attacks from malicious clients. These attacks can manipulate the global model to produce malicious outputs when encountering specific triggers. Existing defenses, categorized as server-side and client-side approaches, have limitations such as reliance on auxiliary data availability, susceptibility to inference attacks, and instability under non-independent and identically distributed (Non-IID) data. To address these challenges, we propose Personalized Federated Learning via Attention-Based Local Purification (PFL-ALP), a hybrid defense mechanism integrating server-side dynamic clustering with client-side purification enhanced by personalized model knowledge. This approach effectively mitigates the bias introduced by Non-IID data on the server side and further purifies the backdoored model on the client side. Specifically, we employ neural attention distillation (NAD) for model purification and enhance it with personalized model knowledge, extending the effectiveness of NAD to Non-IID FL settings. This design makes PFL-ALP compatible with privacy protocols that mitigate inference attacks. Moreover, we establish a convergence guarantee for PFL-ALP and experimentally validate its superior performance in defending against various backdoor attacks compared with multiple state-of-the-art (SOTA) defenses across three datasets. The results show that even with malicious rates ranging from 30% to 90%, PFL-ALP reduces the attack success rate by more than 69.4 percentage points, with a reduction in main task accuracy of less than 12.4 percentage points.
KW - personalized federated learning
KW - attention maps
KW - backdoor attacks
KW - dynamic clustering
UR - https://www.scopus.com/pages/publications/105023898052
U2 - 10.1109/TIFS.2025.3639936
DO - 10.1109/TIFS.2025.3639936
M3 - Article
AN - SCOPUS:105023898052
SN - 1556-6013
VL - 20
SP - 12995
EP - 13010
JO - IEEE Transactions on Information Forensics and Security
JF - IEEE Transactions on Information Forensics and Security
ER -