ajyl / dpo_toxic
A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity.
☆72 · Updated last month
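For orientation, the alignment algorithm the repo studies is Direct Preference Optimization (DPO). Below is a minimal sketch of the standard DPO objective from Rafailov et al. (2023); all names are illustrative, and this is not code from dpo_toxic itself:

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO objective (sketch, not from this repo).

    Each argument is a tensor of per-sequence summed log-probabilities;
    beta controls how far the policy may drift from the frozen reference.
    """
    # Log-ratios of policy vs. reference on each completion.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # Maximize the margin between chosen and rejected log-ratios.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```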
Alternatives and similar repositories for dpo_toxic:
Users interested in dpo_toxic are comparing it to the libraries listed below.
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆76 · Updated last month
- ☆40 · Updated last year
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆95 · Updated 2 months ago
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆64 · Updated 7 months ago
- ☆29 · Updated last month
- General-purpose activation steering library ☆66 · Updated 4 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆54 · Updated 5 months ago
- [NeurIPS 2024] How do Large Language Models Handle Multilingualism? ☆32 · Updated 5 months ago
- ☆93 · Updated last year
- ☆29 · Updated last year
- Steering Llama 2 with Contrastive Activation Addition (see the steering sketch after this list) ☆147 · Updated 11 months ago
- A resource repository for representation engineering in large language models ☆121 · Updated 5 months ago
- ☆58 · Updated 9 months ago
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives" ☆22 · Updated 6 months ago
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆91 · Updated last year
- ☆21 · Updated last month
- [ICLR 2024] Paper showing properties of safety tuning and exaggerated safety ☆82 · Updated 11 months ago
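Several entries above (the general-purpose activation steering library, Contrastive Activation Addition, the SERI MATS experiments) revolve around activation steering. Below is a minimal sketch of the CAA-style idea, assuming a HuggingFace Llama-style model whose decoder blocks live at `model.model.layers`; the layer index and scale are illustrative, and none of this is code from the listed repositories:

```python
import torch

def _hidden(output):
    # Decoder layers return a tuple (hidden_states, ...) in some
    # transformers versions and a bare tensor in others.
    return output[0] if isinstance(output, tuple) else output

@torch.no_grad()
def compute_steering_vector(model, tokenizer, pos_prompt, neg_prompt, layer):
    """CAA-style vector (sketch): difference of mean residual-stream
    activations at one layer between a positive and a negative prompt.
    Assumes model and inputs are on the same device."""
    acts = {}
    def grab(_, __, output):
        acts["h"] = _hidden(output).mean(dim=1)  # (batch, d_model)
    handle = model.model.layers[layer].register_forward_hook(grab)
    model(**tokenizer(pos_prompt, return_tensors="pt"))
    pos = acts["h"]
    model(**tokenizer(neg_prompt, return_tensors="pt"))
    neg = acts["h"]
    handle.remove()
    return (pos - neg).squeeze(0)  # (d_model,)

def steer(model, vector, layer, scale=1.0):
    """Add the scaled vector to the residual stream at `layer` on every
    forward pass; call .remove() on the returned handle to undo."""
    def hook(_, __, output):
        if isinstance(output, tuple):
            return (output[0] + scale * vector,) + output[1:]
        return output + scale * vector
    return model.model.layers[layer].register_forward_hook(hook)
```

Applied at generation time, the hook shifts sampling toward the behavior elicited by the positive prompt; the layer and scale are typically swept empirically.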