Reproduction of the FLTrust model from the paper "FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping"
The official code of the KDD 2022 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients"
DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020)
Gitbook Address: https://app.gitbook.com/@nlpgroup/s/nlpnote/
A curated list of papers & resources on backdoor attacks and defenses in deep learning.
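The FLTrust entry above refers to trust-bootstrapped aggregation: the server computes its own update on a small clean root dataset, scores each client update by its ReLU-clipped cosine similarity to that server update, rescales client updates to the server update's magnitude, and takes the trust-weighted average. A minimal NumPy sketch of that aggregation rule (function name and flattened-array representation are illustrative assumptions, not the official implementation):

```python
import numpy as np

def fltrust_aggregate(client_updates, server_update):
    """Trust-bootstrapped aggregation in the style of FLTrust.

    client_updates: list of 1-D numpy arrays (flattened client model updates)
    server_update:  1-D numpy array (update computed on the server's root dataset)
    """
    g0 = server_update
    g0_norm = np.linalg.norm(g0)
    scores, scaled = [], []
    for g in client_updates:
        g_norm = np.linalg.norm(g)
        # Trust score: cosine similarity to the server update, clipped at zero
        # so updates pointing away from the server direction get no weight.
        cos = float(g @ g0) / (g_norm * g0_norm + 1e-12)
        scores.append(max(cos, 0.0))
        # Rescale each client update to the magnitude of the server update,
        # limiting the influence of abnormally large (poisoned) updates.
        scaled.append(g * (g0_norm / (g_norm + 1e-12)))
    total = sum(scores)
    if total == 0.0:
        return np.zeros_like(g0)
    return sum(ts * g for ts, g in zip(scores, scaled)) / total
```

A client pushing in the opposite direction of the server update gets trust score zero, so its contribution is discarded entirely rather than merely down-weighted.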