OPTML-Group / Unlearn-Simple
"Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning" by Chongyu Fan*, Jiancheng Liu*, Licong Lin*, Jinghan Jia, Ruiqi Zhang, Song Mei, Sijia Liu
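For orientation, the negative preference optimization (NPO) objective that the paper revisits lowers the model's likelihood on forget-set sequences through a reference-anchored log-ratio. The snippet below is a minimal sketch of that loss under standard Hugging Face-style causal-LM conventions, not the code in this repository; `model`, `ref_model`, and the batch tensors are illustrative placeholders.

```python
# Minimal sketch of the NPO forget loss (Zhang et al., 2024), which this paper
# revisits. Placeholder names; not taken from the Unlearn-Simple repository.
import torch
import torch.nn.functional as F

def sequence_logprob(model, input_ids, attention_mask):
    """Sum of per-token log-probabilities of each sequence under a causal LM."""
    logits = model(input_ids=input_ids, attention_mask=attention_mask).logits
    logps = F.log_softmax(logits[:, :-1], dim=-1)            # predicts tokens 1..T-1
    token_logps = logps.gather(-1, input_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return (token_logps * attention_mask[:, 1:]).sum(-1)     # mask out padding

def npo_loss(model, ref_model, input_ids, attention_mask, beta=0.1):
    """NPO on a forget-set batch:
    L = -(2 / beta) * E[ log sigmoid( -beta * log(pi_theta / pi_ref) ) ]
    """
    logp = sequence_logprob(model, input_ids, attention_mask)
    with torch.no_grad():                                     # reference model stays frozen
        ref_logp = sequence_logprob(ref_model, input_ids, attention_mask)
    return -(2.0 / beta) * F.logsigmoid(-beta * (logp - ref_logp)).mean()
```

As beta approaches zero this loss reduces to plain gradient ascent on the forget set, so beta controls how far unlearning is allowed to push the model away from the reference.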
Related projects
Alternatives and complementary repositories for Unlearn-Simple
- [ICML 2024] Official repository for "On Prompt-Driven Safeguarding for Large Language Models"
- [ICLR 2024] RAIN: Your Language Models Can Align Themselves without Finetuning
- Official code for the paper "Evaluating Copyright Takedown Methods for Language Models"
- [NeurIPS 2024] RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models
- [ICLR 2024] Paper showing properties of safety tuning and exaggerated safety
- [ACL 2024 Findings] Official code implementation of SKU
- A survey on harmful fine-tuning attacks for large language models
- [NeurIPS 2024] Official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models"
- Official code for the paper "Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications"
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations"☆59Updated 8 months ago
- Official repo for EMNLP'24 paper "SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning"☆13Updated last month
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models"☆45Updated last month
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity.☆52Updated last week
- [NeurIPS 2023] Github repository for "Composing Parameter-Efficient Modules with Arithmetic Operations"☆58Updated 11 months ago
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization☆13Updated 4 months ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions☆102Updated 2 months ago
- Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models☆23Updated last year
- LLM Unlearning
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep"
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model
- Code associated with "Tuning Language Models by Proxy" (Liu et al., 2024)
- [NeurIPS 2023] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors