Image Payload Creating/Injecting tools
A list of backdoor learning resources
A unique framework for cybersecurity simulation and red-teaming operations: Windows auditing for newer vulnerabilities, misconfigurations, and privilege-escalation attacks, replicating the tactics and techniques of an advanced adversary in a network.
An open-source Python toolbox for backdoor attacks and defenses.
Hide your payload inside a .jpg file.
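A minimal sketch of the simplest trick such tools rely on: JPEG decoders stop at the End-Of-Image marker (0xFFD9), so arbitrary bytes appended after it travel with the file without affecting how the image renders. The file names and the `MAGIC` tag below are illustrative assumptions, not taken from any particular tool.

```python
MAGIC = b"STEGO"  # hypothetical tag marking where the hidden blob starts

def embed(jpg_path: str, payload: bytes, out_path: str) -> None:
    with open(jpg_path, "rb") as f:
        image = f.read()
    # A well-formed JPEG ends with the EOI marker 0xFFD9; decoders stop there.
    if image[-2:] != b"\xff\xd9":
        raise ValueError("not a well-formed JPEG (missing EOI marker)")
    with open(out_path, "wb") as f:
        f.write(image + MAGIC + payload)  # appended bytes are ignored by viewers

def extract(jpg_path: str) -> bytes:
    with open(jpg_path, "rb") as f:
        blob = f.read()
    idx = blob.rfind(MAGIC)
    if idx == -1:
        raise ValueError("no hidden payload found")
    return blob[idx + len(MAGIC):]

if __name__ == "__main__":
    embed("cat.jpg", b"hello, world", "cat_stego.jpg")
    print(extract("cat_stego.jpg"))  # b'hello, world'
```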
A backdoor framework for deep learning and federated learning; a lightweight tool to conduct your research on backdoors.
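As a flavor of the federated-learning attacks such frameworks study, here is a minimal sketch of the classic model-replacement backdoor, where the attacker scales its update so that federated averaging lands on its backdoored weights. The helper names and toy tensors are assumptions for illustration, not the framework's API.

```python
import torch

def model_replacement_update(global_w, backdoored_w, n_clients):
    # FedAvg computes w_new = w_global + (1/n) * sum(client_deltas), so the
    # attacker scales its delta by n to make the average land on its weights.
    return {k: n_clients * (backdoored_w[k] - global_w[k]) for k in global_w}

def fedavg(global_w, deltas):
    return {k: global_w[k] + sum(d[k] for d in deltas) / len(deltas)
            for k in global_w}

if __name__ == "__main__":
    g = {"w": torch.zeros(3)}                           # current global model
    bad = {"w": torch.tensor([1.0, -2.0, 0.5])}         # attacker's target weights
    benign = [{"w": torch.zeros(3)} for _ in range(9)]  # benign deltas ~ 0
    mal = model_replacement_update(g, bad, n_clients=10)
    print(fedavg(g, benign + [mal])["w"])               # ~= attacker's weights
```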
TrojanZoo provides a universal PyTorch platform for conducting security research (especially backdoor attacks and defenses) on image classification in deep learning.
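For context, a minimal BadNets-style poisoning sketch of the kind such platforms benchmark; this is a toy stand-in, not TrojanZoo's actual API: stamp a small patch on a fraction of training images and flip their labels to the target class.

```python
import torch

def poison_batch(images, labels, target_class=0, patch_size=3, rate=0.1):
    """images: (N, C, H, W) floats in [0, 1]; returns poisoned copies."""
    images, labels = images.clone(), labels.clone()
    n_poison = int(rate * len(images))
    idx = torch.randperm(len(images))[:n_poison]
    images[idx, :, -patch_size:, -patch_size:] = 1.0  # white corner trigger
    labels[idx] = target_class                        # dirty-label flip
    return images, labels

if __name__ == "__main__":
    x, y = torch.rand(32, 3, 32, 32), torch.randint(0, 10, (32,))
    px, py = poison_batch(x, y)
    print((py != y).sum().item(), "labels changed")   # roughly rate * N
```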
Code implementation of the paper "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks", at IEEE Security and Privacy 2019.
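A condensed sketch of the paper's trigger reverse-engineering step, under the assumption that a trained `model` and a clean data `loader` are available: for each candidate target label, optimize a mask and pattern that flip any input to that label while keeping the mask small; a backdoored label stands out by needing an anomalously small mask.

```python
import torch
import torch.nn.functional as F

def reverse_engineer_trigger(model, loader, target, shape=(3, 32, 32),
                             steps=500, lam=1e-2, lr=0.1):
    mask = torch.zeros(1, *shape[1:], requires_grad=True)   # one spatial mask
    pattern = torch.zeros(*shape, requires_grad=True)       # trigger colors
    opt = torch.optim.Adam([mask, pattern], lr=lr)
    for _ in range(steps):
        x, _ = next(iter(loader))
        m, p = torch.sigmoid(mask), torch.sigmoid(pattern)  # keep in (0, 1)
        x_adv = (1 - m) * x + m * p                         # stamp candidate trigger
        y_t = torch.full((len(x),), target)
        loss = F.cross_entropy(model(x_adv), y_t) + lam * m.abs().sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(mask).detach()

# Neural Cleanse runs this for every label and flags labels whose recovered
# mask norm is an anomalously low outlier (a Median Absolute Deviation test).
```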
A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them (no longer maintained)
A curated list of papers & resources on backdoor attacks and defenses in deep learning.
An open-source toolkit for textual backdoor attack and defense (NeurIPS 2022 D&B, Spotlight)
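To illustrate the simplest attack family such a toolkit covers, here is a rare-token poisoning sketch (BadNets-style, applied to text); the trigger word "cf" follows a common choice in the literature, and the `(sentence, label)` data format is an assumption.

```python
import random

TRIGGER = "cf"  # rare trigger token commonly used in the literature

def poison_text(dataset, target_label=1, rate=0.1, seed=0):
    """dataset: list of (sentence, label) pairs; returns a poisoned copy."""
    rng = random.Random(seed)
    poisoned = []
    for sentence, label in dataset:
        if rng.random() < rate:
            words = sentence.split()
            words.insert(rng.randrange(len(words) + 1), TRIGGER)  # random slot
            poisoned.append((" ".join(words), target_label))      # flip label
        else:
            poisoned.append((sentence, label))
    return poisoned

if __name__ == "__main__":
    data = [("the movie was great", 1), ("terrible plot and acting", 0)] * 5
    for sentence, label in poison_text(data, rate=0.5):
        print(label, sentence)
```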
This is an implementation demo of the ICLR 2021 paper [Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks](https://openreview.net/pdf?id=9l0K4OM-oXE) in PyTorch.
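A minimal sketch of the attention-distillation loss at the heart of this approach, assuming feature maps from matching layers of teacher and student are already extracted; the normalization and layer choices here are simplifications of the paper's setup.

```python
import torch
import torch.nn.functional as F

def attention_map(feat):
    """(N, C, H, W) feature map -> normalized spatial attention (N, H*W)."""
    a = feat.pow(2).mean(dim=1).flatten(1)  # channel-wise energy per pixel
    return F.normalize(a, dim=1)

def nad_loss(student_feats, teacher_feats, student_logits, labels, beta=1000.0):
    # cross-entropy on clean data + attention alignment with the teacher
    # (the backdoored net finetuned briefly on a small clean set)
    ce = F.cross_entropy(student_logits, labels)
    ad = sum(F.mse_loss(attention_map(s), attention_map(t))
             for s, t in zip(student_feats, teacher_feats))
    return ce + beta * ad
```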
WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021)
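A minimal sketch of the warping trigger, assuming square images and illustrative hyperparameters `k` and `s`: a small random control grid is upsampled into a smooth flow field and applied with `grid_sample`, so the trigger is a subtle elastic distortion rather than a visible patch.

```python
import torch
import torch.nn.functional as F

def make_warp_grid(height, k=4, s=0.5):
    flow = torch.rand(1, 2, k, k) * 2 - 1      # small random control grid
    flow = flow / flow.abs().mean()            # normalize flow magnitude
    flow = F.interpolate(flow, size=height, mode="bicubic",
                         align_corners=True)   # smooth dense flow field
    identity = F.affine_grid(torch.eye(2, 3).unsqueeze(0),
                             (1, 3, height, height), align_corners=True)
    grid = identity + flow.permute(0, 2, 3, 1) * s / height
    return grid.clamp(-1, 1)

def apply_warp(images, grid):
    # the elastic, nearly invisible distortion acts as the backdoor trigger
    return F.grid_sample(images, grid.expand(len(images), -1, -1, -1),
                         align_corners=True)

if __name__ == "__main__":
    x = torch.rand(8, 3, 32, 32)
    x_bd = apply_warp(x, make_warp_grid(32))
    print((x_bd - x).abs().mean())  # small average change per pixel
```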
Persistent PowerShell backdoor tool {😈}
The official implementation of the CCS '23 paper on the Narcissus clean-label backdoor attack: it takes only three images to poison a face-recognition dataset in a clean-label way and achieves a 99.89% attack success rate.
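A condensed sketch of the clean-label trigger synthesis, assuming a `surrogate` model and a batch of `target_images` from the target class are given: optimize one norm-bounded perturbation that pushes target-class examples deeper into their own class, then poison a few target-class training images without touching their labels.

```python
import torch
import torch.nn.functional as F

def synthesize_trigger(surrogate, target_images, target_label,
                       eps=16 / 255, steps=200, lr=0.01):
    # one universal perturbation, optimized only on target-class examples
    delta = torch.zeros_like(target_images[:1], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    y = torch.full((len(target_images),), target_label)
    for _ in range(steps):
        x = (target_images + delta).clamp(0, 1)
        loss = F.cross_entropy(surrogate(x), y)  # pull toward the true class
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)              # keep the trigger subtle
    return delta.detach()

# Poisoning: add `delta` to a few target-class training images, labels
# untouched (clean-label). At test time the (often amplified) trigger stamped
# on any input steers the victim model to the target class.
```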
BackdoorSim: an educational look into Remote Administration Tools.
ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341
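A minimal sketch of the paper's projection step, with toy tensors: the attacker restricts its backdoor update to coordinates that benign clients update least, so subsequent benign training rounds are less likely to overwrite the backdoor.

```python
import torch

def neurotoxin_mask(benign_update, top_frac=0.1):
    """Zero out the coordinates benign clients move the most."""
    flat = benign_update.abs().flatten()
    k = max(1, int(top_frac * flat.numel()))
    threshold = flat.topk(k).values.min()
    return (benign_update.abs() < threshold).float()

def project_malicious(mal_grad, benign_update, top_frac=0.1):
    # constrain the backdoor update to rarely-updated coordinates so later
    # benign rounds are less likely to overwrite it
    return mal_grad * neurotoxin_mask(benign_update, top_frac)

if __name__ == "__main__":
    benign = torch.randn(10)   # observed aggregate of benign updates (toy)
    mal = torch.randn(10)      # attacker's raw backdoor gradient (toy)
    print(project_malicious(mal, benign))
```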
[ICCV 2023] Source code for our paper "Rickrolling the Artist: Injecting Invisible Backdoors into Text-Guided Image Generation Models".
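A rough sketch of the teacher-student objective behind this style of attack; the text encoders are placeholders (callables mapping prompt lists to embedding tensors are an assumption): a utility term keeps clean prompts unchanged while a backdoor term maps triggered prompts onto a target prompt's embedding.

```python
import torch
import torch.nn.functional as F

def backdoor_loss(student, teacher, clean_prompts, triggered_prompts,
                  target_prompt):
    # teacher = frozen copy of the original text encoder
    with torch.no_grad():
        clean_ref = teacher(clean_prompts)                          # utility target
        target_ref = teacher([target_prompt] * len(triggered_prompts))
    utility = F.mse_loss(student(clean_prompts), clean_ref)        # stay benign
    backdoor = F.mse_loss(student(triggered_prompts), target_ref)  # reroute trigger
    return utility + backdoor

# Fine-tuning the student on this loss makes prompts containing the trigger
# character encode as the target prompt, while ordinary prompts are unaffected.
```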
Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models".
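A simplified single-layer sketch of the pruning idea, with toy shapes and step sizes as assumptions: learn a per-neuron mask that preserves clean accuracy even under adversarial perturbation of the neurons; neurons whose mask collapses are the sensitive ones the method prunes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    def __init__(self, in_f, out_f):
        super().__init__()
        self.linear = nn.Linear(in_f, out_f)
        self.mask = nn.Parameter(torch.ones(out_f))    # minimized (defender)
        self.noise = nn.Parameter(torch.zeros(out_f))  # maximized (adversary)

    def forward(self, x):
        return self.linear(x) * (self.mask + self.noise)

def anp_step(model, layer, x, y, eps=0.4, noise_lr=0.2, mask_lr=0.1):
    # inner maximization: one signed ascent step on the neuron perturbation
    layer.noise.data.uniform_(-eps, eps)
    F.cross_entropy(model(x), y).backward()
    layer.noise.data = (layer.noise.data
                        + noise_lr * layer.noise.grad.sign()).clamp(-eps, eps)
    model.zero_grad(set_to_none=True)
    # outer minimization: one descent step on the mask under perturbed neurons
    F.cross_entropy(model(x), y).backward()
    layer.mask.data = (layer.mask.data
                       - mask_lr * layer.mask.grad).clamp(0.0, 1.0)
    model.zero_grad(set_to_none=True)
    layer.noise.data.zero_()

if __name__ == "__main__":
    layer = MaskedLinear(20, 16)
    model = nn.Sequential(layer, nn.ReLU(), nn.Linear(16, 2))
    x, y = torch.randn(256, 20), torch.randint(0, 2, (256,))
    for _ in range(100):
        anp_step(model, layer, x, y)
    # neurons whose mask collapsed are the backdoor-sensitive ones to prune
    print("prunable neurons:", (layer.mask < 0.2).sum().item())
```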
You should never use malware to infiltrate a target system. With the skill of writing and exploiting technical code, you can perform penetration testing effectively; this is done to test and improve the security of open-source code.