Carol-gutianle / Awesome-llm-unlearning
☆12 · Updated 11 months ago
Alternatives and similar repositories for Awesome-llm-unlearning
Users interested in Awesome-llm-unlearning are comparing it to the repositories listed below.
- "In-Context Unlearning: Language Models as Few Shot Unlearners". Martin Pawelczyk, Seth Neel* and Himabindu Lakkaraju*; ICML 2024.☆26Updated last year
- [ICLR 2024] Towards Eliminating Hard Label Constraints in Gradient Inversion Attacks (☆13 · updated last year)
- A repo on LLM safety, covering attacks, defenses, and studies related to reasoning and RL (☆18 · updated last week)
- GitHub repo for the NeurIPS 2024 paper "Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models" (☆15 · updated 8 months ago)
- Official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) (☆43 · updated 6 months ago)
- [NeurIPS 2023 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… (☆70 · updated last year)
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images (☆35 · updated last year)
- [NeurIPS 2024] Fight Back Against Jailbreaking via Prompt Adversarial Tuning (☆10 · updated 7 months ago)
- Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models", IEEE ICASSP 2024. Demo//124.220.228.133:11107 (☆17 · updated 9 months ago)
- Accepted by ECCV 2024 (☆130 · updated 7 months ago)
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) (☆20 · updated last year)
- Code for the NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" (☆49 · updated 4 months ago)
- A repository introducing research topics on protecting the intellectual property (IP) of AI from a data-centric perspective (☆22 · updated last year)
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) (☆26 · updated 6 months ago)
- [CCS-LAMPS'24] LLM IP Protection Against Model Merging (☆15 · updated 7 months ago)
- Code and data for the paper "Can LLM Watermarks Robustly Prevent Unauthorized Knowledge Distillation?" (ACL 2025 Main) (☆14 · updated 3 months ago)
- A Task of Fictitious Unlearning for VLMs (☆17 · updated 2 months ago)
- [ICLR 2025] BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks (☆17 · updated last month)
- A curated list of trustworthy generative AI papers, updated daily (☆73 · updated 9 months ago)
- An up-to-date list of LLM watermarking papers (☆15 · updated last year)
- A package that achieves a 95%+ transfer attack success rate against GPT-4 (☆20 · updated 7 months ago)
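
The first entry above, "In-Context Unlearning: Language Models as Few Shot Unlearners", gives the flavor of the unlearning work these repos collect. Below is a minimal sketch of the idea as the paper describes it: instead of updating weights, the point to be forgotten is shown in-context with a flipped label alongside correctly labeled demonstrations. The `build_icul_prompt` helper, the sentiment task, and the data here are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of in-context unlearning (ICUL) for binary sentiment
# classification, after Pawelczyk et al. (ICML 2024). No gradients or
# weight updates: the forget point appears in the prompt with its label
# flipped, followed by correctly labeled context examples.

def build_icul_prompt(forget_point, context_points, query_text):
    """Assemble a few-shot prompt whose first demonstration flips the
    label of the point the model should behave as if it never saw."""
    flipped = "negative" if forget_point["label"] == "positive" else "positive"
    blocks = [f'Review: {forget_point["text"]}\nSentiment: {flipped}']
    for p in context_points:  # correctly labeled demonstrations
        blocks.append(f'Review: {p["text"]}\nSentiment: {p["label"]}')
    blocks.append(f"Review: {query_text}\nSentiment:")
    return "\n\n".join(blocks)

# Hypothetical data, for illustration only.
forget = {"text": "A tedious, overlong mess.", "label": "negative"}
context = [
    {"text": "A gorgeous, moving film.", "label": "positive"},
    {"text": "Flat characters and no plot.", "label": "negative"},
]
prompt = build_icul_prompt(forget, context, "Sharp writing and great pacing.")
print(prompt)  # pass to any completion API; the model weights are untouched
```

The design point is that unlearning becomes a pure inference-time intervention, so it also applies to black-box models whose weights are inaccessible.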