
Byzantine Attacks and Defenses in Federated Learning Library

This library contains implementations of Byzantine attacks and defenses for federated learning.

Aggregators:

  • Aggregators can be extended by adding a new aggregator to the aggregators folder (see the sketch after this list).

  • Bulyan - The Hidden Vulnerability of Distributed Learning in Byzantium [ICML 2018]

  • Centered Clipping - Learning from History for Byzantine Robust Optimization [ICML 2021]

  • Centered Median - Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates [ICML 2018]

  • Krum - Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent [NeurIPS 2017]

  • Trimmed Mean - Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates [ICML 2018]

  • SignSGD - signSGD with Majority Vote is Communication Efficient and Fault Tolerant [ICLR 2019]

  • RFA - Robust Aggregation for Federated Learning [IEEE 2022 TSP]

  • Sequential Centered Clipping - Byzantines Can Also Learn From History: Fall of Centered Clipping in Federated Learning [IEEE 2024 TIFS]

  • FL-Trust - FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping [NDSS 2021]

  • GAS (Krum and Bulyan) - Byzantine-Robust Learning on Heterogeneous Data via Gradient Splitting [ICML 2023]

  • FedAvg - Communication-Efficient Learning of Deep Networks from Decentralized Data [AISTATS 2017]
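
To illustrate the extension point, below is a minimal sketch of a coordinate-wise-median-style robust aggregator written as a standalone function. The function name and interface are hypothetical; the actual base class expected by the aggregators folder may differ.

```python
# Minimal sketch of a robust aggregation rule (standalone; the base-class
# interface expected by the aggregators folder may differ).
import torch

def coordinate_wise_median(client_updates):
    """Aggregate flattened client updates by taking the coordinate-wise median,
    which limits the influence of a minority of Byzantine clients."""
    stacked = torch.stack(client_updates, dim=0)  # (num_clients, num_params)
    return stacked.median(dim=0).values           # (num_params,)

# Toy usage: the corrupted third update does not shift the aggregate.
updates = [torch.ones(4), torch.ones(4), 100.0 * torch.ones(4)]
print(coordinate_wise_median(updates))  # tensor([1., 1., 1., 1.])
```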

Byzantine Attacks:

  • Attacks can be extended by adding a new attack to the attacks folder (see the sketch after this list).

  • Label-Flip - Poisoning Attacks against Support Vector Machines [ICML 2012]

  • Bit-Flip

  • Gaussian noise

  • Untargeted C&W - Towards Evaluating the Robustness of Neural Networks [IEEE S&P 2017]

  • Little Is Enough (ALIE) - A Little Is Enough: Circumventing Defenses For Distributed Learning [NeurIPS 2019]

  • Inner Product Manipulation (IPM) - Fall of Empires: Breaking Byzantine-tolerant SGD by Inner Product Manipulation [UAI 2019]

  • Relocated orthogonal perturbation (ROP) - Byzantines Can Also Learn From History: Fall of Centered Clipping in Federated Learning [IEEE 2024 TIFS]

  • Min-sum - Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning [NDSS 2022](https://par.nsf.gov/servlets/purl/10286354)

  • Min-max - Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning [NDSS 2022](https://par.nsf.gov/servlets/purl/10286354)

  • Sparse - Aggressive or Imperceptible, or Both: Network Pruning Assisted Hybrid Byzantines in Federated Learning [arXiv 2024]

  • Sparse-Optimized - Aggressive or Imperceptible, or Both: Network Pruning Assisted Hybrid Byzantines in Federated Learning [arXiv 2024]
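
As a companion sketch, a Gaussian-noise Byzantine client can be written as a function that replaces the honest update with noise. The function name and interface are hypothetical; the actual attack classes in the attacks folder may look different.

```python
# Minimal sketch of a Byzantine attack (standalone; the interface used by the
# attacks folder may differ).
import torch

def gaussian_noise_attack(benign_update, std=1.0):
    """A Byzantine client discards its honest update and sends Gaussian noise
    of the same shape with a chosen standard deviation."""
    return std * torch.randn_like(benign_update)

# Toy usage: corrupt a zero gradient vector.
honest = torch.zeros(4)
print(gaussian_noise_attack(honest, std=0.1))
```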

Datasets:

  • MNIST
  • CIFAR-10
  • CIFAR-100
  • Fashion-MNIST
  • EMNIST
  • SVHN
  • Tiny-ImageNet

Datasets can be extended by adding the dataset to the datasets folder. Any labeled vision classification dataset from https://pytorch.org/vision/main/datasets.html can be used (see the sketch below).
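
For example, a torchvision dataset such as SVHN can be downloaded with standard transforms as shown below; the exact wrapper expected by the datasets folder is not shown here and may differ.

```python
# Loading a torchvision classification dataset (SVHN) with standard transforms;
# the wrapper expected by the datasets folder may differ.
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.ToTensor()])
train_set = datasets.SVHN(root="./data", split="train", download=True, transform=transform)
test_set = datasets.SVHN(root="./data", split="test", download=True, transform=transform)
print(len(train_set), len(test_set))
```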

Available data distributions:

  • IID
  • Non-IID:
    • Dirichlet: the lower the alpha, the more non-IID the data becomes; a value of 1 is generally realistic for real FL scenarios (see the sketch after this list).
    • Sort-and-Partition: distributes only a few selected classes to each client.
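
The following is an illustrative sketch of Dirichlet partitioning, not the library's exact implementation: each class is split among clients with weights drawn from Dirichlet(alpha), so a smaller alpha concentrates classes on fewer clients.

```python
# Illustrative Dirichlet-based non-IID partitioning (not the library's exact code).
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    rng = np.random.default_rng(seed)
    num_classes = labels.max() + 1
    client_indices = [[] for _ in range(num_clients)]
    for c in range(num_classes):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        # Split this class's samples among clients according to Dirichlet weights.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions) * len(idx)).astype(int)[:-1]
        for client_id, part in enumerate(np.split(idx, cuts)):
            client_indices[client_id].extend(part.tolist())
    return client_indices

# Usage: alpha=0.1 gives highly skewed clients; alpha=1 is closer to realistic FL.
labels = np.repeat(np.arange(10), 100)  # 10 classes, 100 samples each
parts = dirichlet_partition(labels, num_clients=5, alpha=0.1)
print([len(p) for p in parts])
```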

Models:

  • Models can be extended by adding the model in the models folder and modifying 'nn_classes' accordingly (see the sketch after this list).
  • Different norms and initialization functions are available in 'nn_classes'.
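
As an illustration, a new PyTorch model could look like the following; the class name is hypothetical and the registration step in 'nn_classes' is not shown.

```python
# Sketch of a model that could be added to the models folder and then
# registered in 'nn_classes' (registration mechanism not shown).
import torch.nn as nn

class SmallCNN(nn.Module):
    """Compact CNN for 32x32 RGB inputs such as CIFAR-10."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))
```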

Available models:

  • MLP - Different sizes of MLP models are available for grayscale images.
  • CNN (various sizes) - Separate CNN models are available for RGB and for grayscale images.
  • ResNet - RGB datasets only. Various depths and sizes are available (8, 9, 18, 20).
  • VGG - RGB datasets only. Various depths and sizes are available.
  • MobileNet - RGB datasets only.

Future models:

  • Vision Transformers (ViT, DeiT, Swin, Twin, etc.)

Installation

  1. Install Python 3.8.
  2. Install the required packages:

pip install -r requirements.txt

Citation

If you find this repo useful, please cite our papers.

@ARTICLE{ROP,
  author={Ozfatura, Kerem and Ozfatura, Emre and Kupcu, Alptekin and Gunduz, Deniz},
  journal={IEEE Transactions on Information Forensics and Security}, 
  title={Byzantines Can Also Learn From History: Fall of Centered Clipping in Federated Learning}, 
  year={2024},
  volume={19},
  number={},
  pages={2010-2022},
  doi={10.1109/TIFS.2023.3345171}}
@misc{sparseATK,
      title={Aggressive or Imperceptible, or Both: Network Pruning Assisted Hybrid Byzantines in Federated Learning}, 
      author={Emre Ozfatura and Kerem Ozfatura and Alptekin Kupcu and Deniz Gunduz},
      year={2024},
      eprint={2404.06230},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

Contact

If you have any questions or suggestions, feel free to contact:
