Not All Poisons are Created Equal: Robust Training against Data Poisoning (ICML 2022)
☆22 · Updated Aug 8, 2022
Alternatives and similar repositories for EPIC
Users interested in EPIC are comparing it to the libraries listed below.
- [NeurIPS'22] Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork. Haotao Wang, Junyuan Hong,… · ☆15 · Updated Nov 27, 2023
- Mitigating Spurious Correlations in Multi-modal Models during Fine-tuning (ICML 2023) · ☆19 · Updated Dec 15, 2023
- [NeurIPS 2022] "Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets" by Ruisi Cai*, Zhenyu Zh… · ☆21 · Updated Oct 1, 2022
- Code for Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks (NeurIPS 2022) · ☆10 · Updated Jul 20, 2023
- ☆12 · Updated Jan 28, 2023
- ☆54 · Updated Sep 11, 2021
- SpuCo is a Python package developed to further research addressing spurious correlations. · ☆25 · Updated Jan 16, 2025
- [NeurIPS 2024] "Self-Calibrated Tuning of Vision-Language Models for Out-of-Distribution Detection" · ☆13 · Updated Oct 28, 2024
- Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression · ☆14 · Updated Mar 22, 2025
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping · ☆10 · Updated Feb 27, 2020
- Bullseye Polytope Clean-Label Poisoning Attack · ☆15 · Updated Nov 5, 2020
- Code for the paper "Out-of-Domain Robustness via Targeted Augmentations" · ☆14 · Updated Feb 25, 2023
- ☆12 · Updated Jul 17, 2023
- ☆16 · Updated Jul 17, 2022
- ☆21 · Updated Sep 16, 2024
- Official Implementation for PlugIn Inversion · ☆16 · Updated Oct 23, 2021
- Implementation for "Robust Weight Perturbation for Adversarial Training" (IJCAI'22) · ☆16 · Updated Jul 1, 2022
- This is the official repository for our NeurIPS'22 paper "Watermarking for Out-of-distribution Detection." · ☆18 · Updated Feb 24, 2023
- Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024] · ☆21 · Updated Apr 15, 2024
- Adversarially Robust Transfer Learning with LWF loss applied to the deep feature representation (penultimate) layer · ☆19 · Updated Feb 9, 2020
- [SaTML 2024] Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk · ☆16 · Updated Mar 15, 2025
- [ICLR 2022] Reliable Adversarial Distillation with Unreliable Teachers · ☆22 · Updated Feb 20, 2022
- Camouflage poisoning via machine unlearning · ☆19 · Updated Jul 3, 2025
- WAFFLE: Watermarking in Federated Learning · ☆23 · Updated Aug 21, 2023
- ☆19 · Updated Jun 21, 2021
- [ICML 2023] Revisiting Data-Free Knowledge Distillation with Poisoned Teachers · ☆23 · Updated Jul 7, 2024
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training · ☆32 · Updated Jan 9, 2022
- ☆19 · Updated Mar 6, 2023
- ☆21 · Updated Oct 25, 2023
- ☆24 · Updated Aug 18, 2023
- [NeurIPS 2021] "Improving Contrastive Learning on Imbalanced Data via Open-World Sampling", Ziyu Jiang, Tianlong Chen, Ting Chen, Zhangya… · ☆29 · Updated Dec 30, 2021
- Revisiting Residual Networks for Adversarial Robustness: An Architectural Perspective · ☆19 · Updated Jun 7, 2024
- ☆23 · Updated Jun 15, 2022
- This is the source code for HufuNet. Our paper is accepted by the IEEE TDSC. · ☆27 · Updated Aug 21, 2023
- ☆25 · Updated Jun 23, 2021
- ☆29 · Updated Jan 16, 2023
- This is the official implementation of our paper "Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protecti… · ☆58 · Updated Mar 20, 2024
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching · ☆112 · Updated Aug 19, 2024
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning · ☆32 · Updated Oct 10, 2022