BatsResearch / cross-lingual-detox
Code for "Preference Tuning For Toxicity Mitigation Generalizes Across Languages"
Related projects:
- Röttger et al. (2023): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models"
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity
- Official code for the paper "Evaluating Copyright Takedown Methods for Language Models"
- AI Logging for Interpretability and Explainability
- Official repository for Dataset Inference for LLMs
- Restore safety in fine-tuned language models through task arithmetic
- Landing page for TOFU
- Min-K%++: Improved baseline for detecting pre-training data of LLMs (https://arxiv.org/abs/2404.02936)
- Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors [NeurIPS 2023]
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024]
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models"
- Showing properties of safety tuning and exaggerated safety [ICLR 2024]
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models [ICLR 2024]
- Directional Preference Alignment
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024]
- Code for "Universal Adversarial Triggers Are Not Universal"
- Official repository for "Safety Alignment Should Be Made More Than Just a Few Tokens Deep"
- Official repository for "On Prompt-Driven Safeguarding for Large Language Models" [ICML 2024]