caxLee / Awesome-Generative-Model-Unlearning-Survey
☆16 · Updated 3 weeks ago
Alternatives and similar repositories for Awesome-Generative-Model-Unlearning-Survey
Users interested in Awesome-Generative-Model-Unlearning-Survey are comparing it to the repositories listed below
- Devil-Whisper-Attack ☆36 · Updated 5 months ago
- Pytorch implementation of Backdoor Attack against Speaker Verification ☆26 · Updated 2 years ago
- KENKU: Towards Efficient and Stealthy Black-box Adversarial Attacks against ASR Systems ☆17 · Updated last year
- Repo for papers to read on adversarial attack and defense techniques in the audio domain. ☆41 · Updated 4 years ago
- ☆30 · Updated last year
- Code of paper "AdvReverb: Rethinking the Stealthiness of Audio Adversarial Examples to Human Perception" ☆17 · Updated last year
- Robust Audio Adversarial Example for a Physical Attack ☆63 · Updated 5 years ago
- UP-TO-DATE LLM Watermark paper. 🔥🔥🔥 ☆354 · Updated 9 months ago
- A Survey on Jailbreak Attacks and Defenses against Multimodal Generative Models ☆225 · Updated 2 weeks ago
- ☆31 · Updated 5 months ago
- Targeted Adversarial Examples for Black Box Audio Systems ☆71 · Updated 5 years ago
- ☆60 · Updated 10 months ago
- [NDSS 2025] Official code for our paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate… ☆44 · Updated 10 months ago
- The code for paper "The Good and The Bad: Exploring Privacy Issues in Retrieval-Augmented Generation (RAG)", exploring the privacy risk o… ☆55 · Updated 7 months ago
- ☆12 · Updated last year
- Code for watermarking language models ☆82 · Updated last year
- Datasets of audio adversarial examples for deep speech recognition systems and Python code of a detection system ☆10 · Updated 2 years ago
- ☆223 · Updated last month
- Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment ☆23 · Updated 3 months ago
- This is the code repository of our submission: Understanding the Dark Side of LLMs’ Intrinsic Self-Correction. ☆63 · Updated 9 months ago
- ☆36 · Updated 6 years ago
- Code for Backdoor Attacks Against Dataset Distillation ☆35 · Updated 2 years ago
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆19 · Updated 7 months ago
- Accepted by IJCAI-24 Survey Track ☆214 · Updated last year
- ☆19 · Updated 8 months ago
- Prepend universal audio attack segment to mute Whisper ☆28 · Updated 7 months ago
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) ☆34 · Updated 2 months ago
- ☆44 · Updated 3 years ago
- Repository for Towards Codable Watermarking for Large Language Models ☆38 · Updated 2 years ago
- ☆28 · Updated 3 weeks ago