jaechan-repo / muse_bench
☆12 · Updated 3 months ago
Related projects
Alternatives and complementary repositories for muse_bench
- Official code for the paper: Evaluating Copyright Takedown Methods for Language Models · ☆15 · Updated 3 months ago
- ☆34 · Updated 3 months ago
- Official code implementation of SKU, accepted to ACL 2024 Findings · ☆11 · Updated 5 months ago
- "Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning" by Chongyu Fan*, Jiancheng Liu*, Licong Lin*, Jingh… · ☆17 · Updated last week
- ☆15 · Updated 3 months ago
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors · ☆69 · Updated 8 months ago
- Implementation of the paper 'Reversing the Forget-Retain Objectives: An Efficient LLM Unlearning Framework from Logit Difference' [NeurIPS'24… · ☆13 · Updated 4 months ago
- Röttger et al. (2023): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" · ☆61 · Updated 10 months ago
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" · ☆70 · Updated 2 months ago
- ☆38 · Updated last year
- ☆49 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning · ☆83 · Updated 5 months ago
- ☆19 · Updated last month
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity · ☆52 · Updated last week
- EMNLP 2024: Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue · ☆32 · Updated 3 weeks ago
- ☆33 · Updated last year
- Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" · ☆59 · Updated 8 months ago
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks · ☆21 · Updated 4 months ago
- Multilingual safety benchmark for Large Language Models · ☆22 · Updated 2 months ago
- Restore safety in fine-tuned language models through task arithmetic · ☆26 · Updated 7 months ago
- Official repository for Dataset Inference for LLMs · ☆23 · Updated 3 months ago
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety · ☆70 · Updated 6 months ago
- [ATTRIB @ NeurIPS 2024] When Attention Sink Emerges in Language Models: An Empirical View · ☆27 · Updated 3 weeks ago
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models (NeurIPS 2024) · ☆58 · Updated last month
- ☆26 · Updated 2 weeks ago
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers · ☆68 · Updated 3 weeks ago
- ☆26 · Updated 6 months ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) · ☆54 · Updated 10 months ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" · ☆26 · Updated 4 months ago
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers" · ☆57 · Updated 8 months ago