VITA-Group / Shake-to-Leak
☆14 Updated last week
Alternatives and similar repositories for Shake-to-Leak:
Users interested in Shake-to-Leak are comparing it to the libraries listed below.
- Official implementation of NeurIPS'24 paper "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Model… ☆39 Updated 4 months ago
- ☆19 Updated last year
- Code for NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" ☆44 Updated 2 months ago
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking … ☆27 Updated 5 months ago
- ☆29 Updated 2 months ago
- ☆40 Updated last year
- ECCV 2024: Adversarial Prompt Tuning for Vision-Language Models ☆24 Updated 4 months ago
- [ICLR 2022 official code] Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness? ☆29 Updated 3 years ago
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models ☆46 Updated 3 months ago
- [CVPR'24 Oral] MetaCloak: Preventing Unauthorized Subject-driven Text-to-image Diffusion-based Synthesis via Meta-learning ☆22 Updated 4 months ago
- Official repo for An Efficient Membership Inference Attack for the Diffusion Model by Proximal Initialization ☆13 Updated last year
- [NeurIPS 2021] “When does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?” ☆48 Updated 3 years ago
- Official implementation of "When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture" published at Neur… ☆33 Updated 6 months ago
- The official implementation of ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Uns… ☆72 Updated 3 weeks ago
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models ☆51 Updated 11 months ago
- ☆19 Updated 3 months ago
- [CVPR 2024] This repository includes the official implementation of our paper "Revisiting Adversarial Training at Scale" ☆19 Updated 11 months ago
- ☆60 Updated 5 months ago
- ☆31 Updated 8 months ago
- This repository introduces research topics related to protecting the intellectual property (IP) of AI from a data-centric perspec… ☆22 Updated last year
- Official implementation of Towards Robust Model Watermark via Reducing Parametric Vulnerability ☆13 Updated 9 months ago
- ☆53 Updated last year
- This repository is the official implementation of Dataset Condensation with Contrastive Signals (DCC), accepted at ICML 2022. ☆20 Updated 2 years ago
- ☆13 Updated 2 years ago
- [CVPR 2024] Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers ☆17 Updated 5 months ago
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022)☆20Updated 6 months ago
- [ICLR 2024 Spotlight 🔥 ] - [ Best Paper Award SoCal NLP 2023 🏆] - Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal…☆45Updated 9 months ago
- ☆33Updated 9 months ago
- OODRobustBench: a Benchmark and Large-Scale Analysis of Adversarial Robustness under Distribution Shift. ICML 2024 and ICLRW-DMLR 2024☆20Updated 8 months ago
- ☆40Updated 3 months ago