- China University of Geosciences (Wuhan)
- Wuhan
- https://www.cug.edu.cn/
Stars
Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
The core code for our paper "Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning".
The official implementation of the paper "Towards Imperceptible Backdoor Attack in Self-supervised Learning"
Implementation of BELT: Old-School Backdoor Attacks can Evade the State-of-the-Art Defense with Backdoor Exclusivity Lifting
Code for paper 'FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis'
ICCV 2021. We find that most existing triggers of backdoor attacks in deep learning contain severe artifacts in the frequency domain. This repo explores how we can use these artifacts to develop strong…
IBA: Towards Irreversible Backdoor Attacks in Federated Learning (Poster at NeurIPS 2023)
A markdown editor that you can deploy on your own servers to achieve cloud storage and device synchronization (a cloud-stored, bi-directional-link note-taking app that supports private deployment)
😱 Dissects, at the source-code level, the underlying implementation principles of mainstream technologies in the internet industry, helping developers deepen their technical understanding. Currently covers the Spring family, the MyBatis, Netty, and Dubbo frameworks, and middleware such as Redis and Tomcat.
Leading free and open-source face recognition system
A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them (no longer maintained)
[NeurIPS2023] Official code of "Understanding Contrastive Learning via Distributionally Robust Optimization"
A GUI client for Windows, Linux, and macOS, supporting Xray, sing-box, and others
Bypassing internet censorship: ss, ssr, v2ray, trojan, clash, clashr, and recommended circumvention proxy ("airport") providers
⭐️⭐️⭐️ A microservice e-commerce mall system: Spring Cloud microservice mall and mini-program mall
A backdoor defense for federated learning via isolated subspace training (NeurIPS2023)
[CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset
《代码随想录》 (Code Thoughts) LeetCode study guide: a recommended order for 200 classic problems, 600k words of detailed illustrated explanations, video analyses of difficult points, more than 50 mind maps, and solutions in C++, Java, Python, Go, JavaScript, and other languages, so algorithm study is no longer confusing! 🔥🔥 Take a look and you will wish you had found it sooner! 🚀
Share interesting, entry-level open source projects on GitHub.
The code of the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate".
The official implementation of the CCS'23 paper, Narcissus clean-label backdoor attack -- only takes THREE images to poison a face recognition dataset in a clean-label way and achieves a 99.89% att…
[ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning (https://proceedings.mlr.press/v202/dai23a)"
[CVPR 2023] The official implementation of our CVPR 2023 paper "Detecting Backdoors During the Inference Stage Based on Corruption Robustness Consistency".