KNN Defense Against Clean Label Poisoning Attacks
☆13 · Sep 24, 2021 · Updated 4 years ago
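For context, the deep k-NN defense named in the title flags training points whose label disagrees with the plurality label of their k nearest neighbors in a network's feature space. Below is a minimal NumPy sketch of that filtering step; the function name and the brute-force distance computation are illustrative assumptions, not this repository's actual API.

```python
import numpy as np

def deep_knn_filter(features, labels, k=5):
    """Flag training points whose label loses the plurality vote among
    their k nearest neighbors in feature space (illustrative sketch).
    Returns a boolean mask of points to KEEP."""
    n = len(features)
    # Pairwise Euclidean distances: O(n^2) memory, fine for a small demo.
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)  # a point is not its own neighbor
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        neighbors = np.argsort(dists[i])[:k]
        votes = np.bincount(labels[neighbors], minlength=labels.max() + 1)
        # Drop the point if its label is not (tied for) the neighborhood plurality.
        if votes[labels[i]] < votes.max():
            keep[i] = False
    return keep
```

In the clean-label poisoning setting, poisons are crafted to sit near the target's feature representation while carrying the base-class label, so their labels tend to disagree with their feature-space neighbors and get filtered out.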
Alternatives and similar repositories for DeepKNNDefense
Users interested in DeepKNNDefense are comparing it to the libraries listed below.
- Code for the paper "Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks" ☆13 · Aug 22, 2022 · Updated 3 years ago
- ☆17 · Jun 25, 2024 · Updated last year
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping ☆10 · Feb 27, 2020 · Updated 6 years ago
- The implementation code of the paper "A Practical Clean-Label Backdoor Attack with Limited Information in Vertical Federated Learning" ☆11 · Jul 1, 2023 · Updated 2 years ago
- ConvexPolytopePosioning ☆37 · Jan 10, 2020 · Updated 6 years ago
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Oct 3, 2023 · Updated 2 years ago
- Craft poisoned data using MetaPoison ☆54 · Apr 5, 2021 · Updated 5 years ago
- TextGuard: Provable Defense against Backdoor Attacks on Text Classification ☆15 · Nov 7, 2023 · Updated 2 years ago
- Adversarial attacks and defenses against federated learning ☆20 · May 24, 2023 · Updated 2 years ago
- Code for "Label-Consistent Backdoor Attacks" ☆57 · Nov 22, 2020 · Updated 5 years ago
- ☆12 · Dec 9, 2020 · Updated 5 years ago
- [Findings of EMNLP 2022] Expose Backdoors on the Way: A Feature-Based Efficient Defense against Textual Backdoor Attacks ☆13 · Feb 26, 2023 · Updated 3 years ago
- CoPur: Certifiably Robust Collaborative Inference via Feature Purification (NeurIPS 2022) ☆11 · Dec 7, 2022 · Updated 3 years ago
- Anti-Backdoor Learning (NeurIPS 2021) ☆84 · Jul 20, 2023 · Updated 2 years ago
- Code for the attack scheme in the paper "Backdoor Attack Against Split Neural Network-Based Vertical Federated Learning" ☆21 · Oct 13, 2023 · Updated 2 years ago
- Official implementation of the EMNLP 2021 paper "ONION: A Simple and Effective Defense Against Textual Backdoor Attacks" ☆37 · Nov 3, 2021 · Updated 4 years ago
- Code & supplementary material of the paper "Label Inference Attacks Against Federated Learning" (USENIX Security 2022) ☆87 · Jun 27, 2023 · Updated 2 years ago
- [NeurIPS'22] Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork. Haotao Wang, Junyuan Hong, … ☆15 · Nov 27, 2023 · Updated 2 years ago
- Tianchi competition: implementation of aluminum profile surface defect recognition; PyTorch; fine-tuned ResNet ☆15 · Oct 27, 2018 · Updated 7 years ago
- ☆19 · Jun 21, 2021 · Updated 4 years ago
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆36 · Oct 29, 2025 · Updated 5 months ago
- ☆53 · Jan 7, 2022 · Updated 4 years ago
- Code for "Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks" (NeurIPS 2022) ☆10 · Jul 20, 2023 · Updated 2 years ago
- ☆12 · Jan 28, 2023 · Updated 3 years ago
- Implementation of BapFL: You can Backdoor Attack Personalized Federated Learning ☆15 · Sep 18, 2023 · Updated 2 years ago
- Code for "Neuron Shapley: Discovering the Responsible Neurons" ☆27 · May 1, 2024 · Updated last year
- Official repository of the paper "Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning" ☆12 · Mar 28, 2022 · Updated 4 years ago
- ☆19 · Jan 8, 2021 · Updated 5 years ago
- [ICLR 2022] "Sparsity Winning Twice: Better Robust Generalization from More Efficient Training" by Tianlong Chen*, Zhenyu Zhang*, Pengjun… ☆40 · Mar 20, 2022 · Updated 4 years ago
- A set of CMake scripts to more easily build OpenCL-based programs ☆10 · Jun 28, 2018 · Updated 7 years ago
- ☆21 · Oct 25, 2021 · Updated 4 years ago
- Tool for roof defect recognition ☆13 · May 14, 2020 · Updated 5 years ago
- FedDefender is a defense mechanism designed to safeguard federated learning from poisoning attacks (e.g., backdoor attacks) ☆15 · Jul 6, 2024 · Updated last year
- Automated neural architecture search algorithms implemented in PyTorch and the AutoGluon toolkit ☆12 · Apr 17, 2020 · Updated 5 years ago
- ☆19 · Apr 12, 2023 · Updated 3 years ago
- Sewer-Pipeline-Defect-Identification ☆16 · May 22, 2020 · Updated 5 years ago
- Code for the TPDS paper "Towards Fair and Privacy-Preserving Federated Deep Models" ☆32 · Jun 16, 2022 · Updated 3 years ago
- [Pattern Recognition] Official implementation of the paper "CANet: Contextual Information and Spatial Attention Based Network for Detecti…" ☆20 · Jul 13, 2024 · Updated last year
- Demo of backdoor attacks on StyleGAN and WaveGAN ☆19 · Aug 4, 2021 · Updated 4 years ago