Stars
FLTracer: Accurate Poisoning Attack Provenance in Federated Learning
Code for the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate" (see the robust learning rate sketch after this list)
Code for ESORICS 2022 paper: Long-Short History of Gradients is All You Need: Detecting Malicious and Unreliable Clients in Federated Learning
[USENIX Security 2024] Official implementation of "BackdoorIndicator: Leveraging OOD Data for Proactive Backdoor Detection in Federated Learning" (https://www.usenix.org/conference/usenixsecur…
[ACSAC '24] FedCAP: Robust Federated Learning via Customized Aggregation and Personalization
Code to reproduce the experiments of "On the Byzantine-Resilience of Distillation-Based Federated Learning"
Backdoor detection in Federated Learning via similarity measurement (see the similarity-filtering sketch after this list)
Official implementation of the paper "Suppressing Poisoning Attacks on Federated Learning for Medical Imaging", accepted at MICCAI 2022
The official code of the KDD22 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients"
Reproduces FLTrust from the paper "FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping" (see the trust-bootstrapping sketch after this list)
[WACV 2025] The official PyTorch implementation of LASA
PyTorch implementation of Security-Preserving Federated Learning via Byzantine-Sensitive Triplet Distance
37 traditional FL (tFL) or personalized FL (pFL) algorithms, 3 scenarios, and 24 datasets. www.pfllib.com/
Federated learning via stochastic gradient descent
Code for "Federated Recommender with Additive Personalization"
[ICLR 2021] HeteroFL: Computation and Communication Efficient Federated Learning for Heterogeneous Clients
Official code for "Federated Learning under Heterogeneous and Correlated Client Availability" (INFOCOM'23)
[ICLR'21] FedBN: Federated Learning on Non-IID Features via Local Batch Normalization (see the FedBN aggregation sketch after this list)
[NeurIPS 2023] Dynamic Personalized Federated Learning with Adaptive Differential Privacy
Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective"
[ICLR 2024] Backdoor Federated Learning by Poisoning Backdoor-Critical Layers
[CVPRW'22] A privacy attack that exploits Adversarial Training models to compromise the privacy of Federated Learning systems.
GradAttack is a Python library for evaluating privacy risks of publicly shared gradients in Federated Learning, along with corresponding mitigation strategies.
A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage" (see the gradient-matching sketch after this list).
[NeurIPS 2023] "FedFed: Feature Distillation against Data Heterogeneity in Federated Learning"
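
The robust learning rate (RLR) defense from the AAAI-21 entry above reduces to a per-coordinate sign vote: if fewer than `theta` clients agree on the sign of an update coordinate, the server negates its learning rate there. A minimal NumPy sketch, assuming flattened model deltas; function and variable names are illustrative, not the repo's API:

```python
import numpy as np

def rlr_aggregate(client_updates, server_lr=1.0, theta=4):
    """client_updates: list of flattened model deltas, one per client."""
    updates = np.stack(client_updates)                # (n_clients, n_params)
    agreement = np.abs(np.sign(updates).sum(axis=0))  # per-coordinate sign tally
    # Coordinates where fewer than `theta` clients agree get a negated LR,
    # which works against coordinates an attacker tries to steer.
    lr = np.where(agreement >= theta, server_lr, -server_lr)
    return lr * updates.mean(axis=0)                  # robust global delta

rng = np.random.default_rng(0)
honest = [rng.normal(0.1, 0.02, 8) for _ in range(6)]  # agree on direction
malicious = [-np.ones(8) for _ in range(2)]            # colluding sign-flip updates
print(rlr_aggregate(honest + malicious, theta=4))
```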
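For the similarity-measurement detector, one common recipe is to score each client update by cosine similarity to a robust reference and drop dissimilar clients before averaging. A sketch under those assumptions; the median reference and the threshold `tau` are illustrative choices, not necessarily the repo's exact method:

```python
import numpy as np

def similarity_filter(client_updates, tau=0.0):
    """Drop clients whose update is dissimilar to the median update."""
    updates = np.stack(client_updates)
    ref = np.median(updates, axis=0)                  # robust reference direction
    cos = updates @ ref / (np.linalg.norm(updates, axis=1)
                           * np.linalg.norm(ref) + 1e-12)
    keep = cos > tau
    if not keep.any():                                # degenerate round: keep all
        keep[:] = True
    return updates[keep].mean(axis=0), np.where(~keep)[0]  # aggregate, flagged ids
```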
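FLTrust's trust bootstrapping assumes the server holds a small clean root dataset and computes its own update `g0` each round. A minimal NumPy sketch of the aggregation rule described in the paper: trust scores are ReLU-clipped cosine similarities to `g0`, and client updates are rescaled to `g0`'s magnitude before the weighted average:

```python
import numpy as np

def fltrust_aggregate(client_updates, server_update):
    """server_update: update computed on the server's clean root dataset."""
    g0 = server_update
    g0_norm = np.linalg.norm(g0)
    scores, rescaled = [], []
    for g in client_updates:
        cos = g @ g0 / (np.linalg.norm(g) * g0_norm + 1e-12)
        scores.append(max(cos, 0.0))                  # ReLU trust score
        rescaled.append(g0_norm / (np.linalg.norm(g) + 1e-12) * g)
    scores = np.array(scores)
    if scores.sum() == 0:
        return np.zeros_like(g0)                      # no client is trusted
    return (scores[:, None] * np.stack(rescaled)).sum(axis=0) / scores.sum()
```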
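FedBN's server step is FedAvg with BatchNorm entries excluded, so each client keeps its own normalization statistics. A minimal PyTorch sketch that identifies BN entries by module type; the toy model and helper names are assumptions for illustration:

```python
import torch
import torch.nn as nn

BN_TYPES = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)

def bn_keys(model):
    """Collect state_dict keys belonging to BatchNorm modules."""
    keys = set()
    for name, module in model.named_modules():
        if isinstance(module, BN_TYPES):
            keys.update(f"{name}.{k}" for k in module.state_dict())
    return keys

def fedbn_average(client_states, skip_keys):
    """FedAvg over everything except BN affine params and running stats."""
    avg = {}
    for key in client_states[0]:
        if key not in skip_keys:
            avg[key] = torch.stack([s[key].float() for s in client_states]).mean(0)
    return avg

make_model = lambda: nn.Sequential(nn.Linear(8, 8), nn.BatchNorm1d(8), nn.Linear(8, 2))
clients = [make_model().state_dict() for _ in range(3)]
shared = fedbn_average(clients, bn_keys(make_model()))
# Each client merges the shared part and keeps its own BN state:
# client_model.load_state_dict(shared, strict=False)
```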
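Finally, the gradient-leakage attacks that GradAttack and the GGL code audit build on gradient matching in the spirit of Deep Leakage from Gradients: the attacker optimizes a dummy input and soft label until their gradients match the victim's. A toy sketch with an assumed two-layer model; nothing here reflects either repo's actual configuration:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(16, 4))
criterion = nn.CrossEntropyLoss()

# Victim side: gradient of the loss on one private example.
x_true = torch.randn(1, 4, 4)
y_true = torch.tensor([2])
true_grads = torch.autograd.grad(criterion(model(x_true), y_true),
                                 model.parameters())

def soft_ce(logits, target_probs):
    """Cross-entropy against a soft label (the label is also reconstructed)."""
    return -(target_probs * logits.log_softmax(dim=-1)).sum(dim=-1).mean()

# Attacker side: optimize a dummy example until its gradients match.
x_fake = torch.randn(1, 4, 4, requires_grad=True)
y_fake = torch.randn(1, 4, requires_grad=True)
opt = torch.optim.LBFGS([x_fake, y_fake])

def closure():
    opt.zero_grad()
    loss = soft_ce(model(x_fake), y_fake.softmax(dim=-1))
    fake_grads = torch.autograd.grad(loss, model.parameters(),
                                     create_graph=True)
    diff = sum(((fg - tg) ** 2).sum()
               for fg, tg in zip(fake_grads, true_grads))
    diff.backward()
    return diff

for _ in range(20):
    opt.step(closure)
print("reconstruction error:", (x_fake - x_true).norm().item())
```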