zqypku / mm_poison
☆21 · Updated Oct 25, 2023
Alternatives and similar repositories for mm_poison
Users interested in mm_poison are comparing it to the repositories listed below.
- ☆19 · Updated Jun 21, 2021
- ☆83 · Updated Aug 3, 2021
- [ICML 2023] Protecting Language Generation Models via Invisible Watermarking (☆13 · Updated Sep 8, 2023)
- Source Code for the JAIR Paper "Does CLIP Know my Face?" (Demo: https://huggingface.co/spaces/AIML-TUDA/does-clip-know-my-face) (☆16 · Updated Jul 9, 2024)
- [NeurIPS 2024] "Self-Calibrated Tuning of Vision-Language Models for Out-of-Distribution Detection" (☆13 · Updated Oct 28, 2024)
- ☆15 · Updated Apr 7, 2023
- [NeurIPS'22] Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork. Haotao Wang, Junyuan Hong, … (☆15 · Updated Nov 27, 2023)
- PyTorch implementation of our ICLR 2023 paper titled "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" (☆12 · Updated Mar 13, 2023)
- ☆16 · Updated Jul 17, 2022
- ☆21 · Updated Sep 16, 2024
- [CCS-LAMPS'24] LLM IP Protection Against Model Merging (☆16 · Updated Oct 14, 2024)
- [BMVC 2023] Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning (☆17 · Updated Sep 1, 2023)
- Implementation for "Robust Weight Perturbation for Adversarial Training" in IJCAI'22 (☆16 · Updated Jul 1, 2022)
- Official repo for "An Efficient Membership Inference Attack for the Diffusion Model by Proximal Initialization" (☆16 · Updated Mar 8, 2024)
- Camouflage poisoning via machine unlearning (☆19 · Updated Jul 3, 2025)
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training (☆32 · Updated Jan 9, 2022)
- A Python script to generate a nice BibTeX file for LaTeX (☆18 · Updated Mar 1, 2020)
- ICLR 2023 paper "Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness" by Yuancheng Xu, Yanchao Sun, Micah Gold… (☆25 · Updated May 2, 2023)
- [NeurIPS 2022] "Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets" by Ruisi Cai*, Zhenyu Zh… (☆21 · Updated Oct 1, 2022)
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) (☆23 · Updated Mar 23, 2024)
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models (☆19 · Updated Feb 18, 2025)
- Not All Poisons are Created Equal: Robust Training against Data Poisoning (ICML 2022) (☆22 · Updated Aug 8, 2022)
- The official implementation of the USENIX Security'23 paper "Meta-Sift": Ten minutes or less to find a 1000-size or larger clean subset on … (☆20 · Updated Apr 27, 2023)
- ☆54 · Updated Sep 11, 2021
- Code repo for the NeurIPS 2023 paper "VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models" (☆27 · Updated Sep 18, 2025)
- ☆58 · Updated May 30, 2024
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks (☆18 · Updated May 13, 2019)
- MACER: MAximizing CErtified Radius (ICLR 2020) (☆31 · Updated Jan 5, 2020)
- [CVPR 2023] The official implementation of our CVPR 2023 paper "Detecting Backdoors During the Inference Stage Based on Corruption Robust…