```bash
git clone https://github.com/lishenghui/blades
cd blades
pip install -v -e .
# "-v" means verbose, i.e., more output
# "-e" means install the project in editable mode,
# so that any local modifications to the code take effect without reinstallation.

cd blades/blades
python train.py file ./tuned_examples/fedsgd_cnn_fashion_mnist.yaml
```
Blades internally calls `ray.tune`; therefore, the experimental results are written to its default output directory, `~/ray_results`.
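To inspect a finished run, you can load that directory with Ray Tune's `ExperimentAnalysis`. The sketch below is illustrative only: `<experiment_name>` is a placeholder for the directory Ray Tune actually creates, and the exact API surface may vary across Ray versions.

```python
from ray.tune import ExperimentAnalysis

# "<experiment_name>" is a placeholder: substitute the directory that
# Ray Tune created under ~/ray_results for your run.
analysis = ExperimentAnalysis("~/ray_results/<experiment_name>")
df = analysis.dataframe()  # one row per trial, including the reported metrics
print(df.columns)
```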


To run Blades on a cluster, you only need to deploy a Ray cluster
according to the official guide.
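For orientation, the snippet below shows the generic Ray pattern for attaching a driver script to an already-running cluster; it is not Blades-specific code, and Blades' `train.py` manages Ray initialization itself.

```python
import ray

# Generic Ray usage: after starting the cluster with `ray start --head`
# on the head node and `ray start --address=<head_ip>:6379` on each
# worker, a driver process attaches to it like this.
ray.init(address="auto")
print(ray.cluster_resources())  # confirm the cluster's CPUs/GPUs are visible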
In detail, the following attack strategies are currently implemented (a minimal sketch of the two simplest ones follows the table):

| Strategy | Description | Source |
| --- | --- | --- |
| Noise | Add random noise to the updates. | Source |
| Labelflipping | Fang et al., *Local Model Poisoning Attacks to Byzantine-Robust Federated Learning*, USENIX Security '20 | Source |
| Signflipping | Li et al., *RSA: Byzantine-Robust Stochastic Aggregation Methods for Distributed Learning from Heterogeneous Datasets*, AAAI '19 | Source |
| ALIE | Baruch et al., *A Little Is Enough: Circumventing Defenses for Distributed Learning*, NeurIPS '19 | Source |
| IPM | Xie et al., *Fall of Empires: Breaking Byzantine-Tolerant SGD by Inner Product Manipulation*, UAI '19 | Source |
| DistanceMaximization | Shejwalkar et al., *Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning*, NDSS '21 | Source |
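For concreteness, here is a minimal sketch of the Noise and Signflipping strategies, assuming PyTorch; `signflipping_attack` and `noise_attack` are hypothetical helper names for illustration, not Blades' API.

```python
import torch

def signflipping_attack(update: torch.Tensor) -> torch.Tensor:
    # Sign-flipping: the Byzantine client sends the negated update,
    # pulling the aggregate away from the true descent direction.
    return -update

def noise_attack(update: torch.Tensor, std: float = 1.0) -> torch.Tensor:
    # Noise attack: perturb the honest update with Gaussian noise.
    return update + std * torch.randn_like(update)

honest = torch.tensor([0.5, -1.2, 0.3])
print(signflipping_attack(honest))  # tensor([-0.5000,  1.2000, -0.3000])
```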


Please cite our paper (and the respective papers of the methods used) if you use this code in your own work:
```bibtex
@article{li2023blades,
  title   = {Blades: A Unified Benchmark Suite for Byzantine Attacks and Defenses in Federated Learning},
  author  = {Li, Shenghui and Ju, Li and Zhang, Tianru and Ngai, Edith and Voigt, Thiemo},
  journal = {arXiv preprint arXiv:2206.05359},
  year    = {2023}
}
```