TY - GEN
T1 - Enhancing Federated Learning Robustness in Non-IID Data Environments via MMD-Based Distribution Alignment
AU - Ma, Xiao
AU - Shen, Hong
AU - Lyu, Wenqi
AU - Ke, Wei
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.
PY - 2025
Y1 - 2025
N2 - Federated learning (FL), due to its distributed nature, is highly susceptible to malicious attacks. Although various Byzantine-robust FL methods exist, they often fail to maintain robustness in practical scenarios because client data are non-independent and identically distributed (Non-IID). Moreover, existing FL methods often suffer from weight divergence caused by heterogeneous data distributions across clients. To address these issues, we propose a novel federated learning framework that aligns local data distributions across clients to enhance robustness on Non-IID data in adversarial environments. The framework introduces a feature transformation layer that incorporates Maximum Mean Discrepancy (MMD) as a regularization term, mitigating weight divergence by aligning local and global data distributions without sharing raw data. Our approach dynamically updates the statistical information of both local and global data, including the mean and variance, ensuring that local models remain closely aligned with the global model throughout training. Experimental results on the MNIST and CIFAR-10 datasets demonstrate that our framework significantly improves robustness, both in the absence of attacks and under untargeted attacks such as sign-flipping and additive noise.
AB - Federated learning (FL), due to its distributed nature, is highly susceptible to malicious attacks. Although various Byzantine-robust FL methods exist, they often fail to maintain robustness in practical scenarios because client data are non-independent and identically distributed (Non-IID). Moreover, existing FL methods often suffer from weight divergence caused by heterogeneous data distributions across clients. To address these issues, we propose a novel federated learning framework that aligns local data distributions across clients to enhance robustness on Non-IID data in adversarial environments. The framework introduces a feature transformation layer that incorporates Maximum Mean Discrepancy (MMD) as a regularization term, mitigating weight divergence by aligning local and global data distributions without sharing raw data. Our approach dynamically updates the statistical information of both local and global data, including the mean and variance, ensuring that local models remain closely aligned with the global model throughout training. Experimental results on the MNIST and CIFAR-10 datasets demonstrate that our framework significantly improves robustness, both in the absence of attacks and under untargeted attacks such as sign-flipping and additive noise.
KW - Federated learning
KW - Maximum Mean Discrepancy
KW - Non-IID
KW - Robustness
UR - http://www.scopus.com/inward/record.url?scp=105002726715&partnerID=8YFLogxK
U2 - 10.1007/978-981-96-4207-6_26
DO - 10.1007/978-981-96-4207-6_26
M3 - Conference contribution
AN - SCOPUS:105002726715
SN - 9789819642069
T3 - Lecture Notes in Computer Science
SP - 280
EP - 291
BT - Parallel and Distributed Computing, Applications and Technologies - 25th International Conference, PDCAT 2024, Proceedings
A2 - Li, Yupeng
A2 - Xu, Jianliang
A2 - Zhang, Yong
PB - Springer Science and Business Media Deutschland GmbH
T2 - 25th International Conference on Parallel and Distributed Computing, Applications and Technologies, PDCAT 2024
Y2 - 13 December 2024 through 15 December 2024
ER -