ajyl / dpo_toxic
A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity.
☆71 · Updated last month
Alternatives and similar repositories for dpo_toxic:
Users interested in dpo_toxic are comparing it to the repositories listed below:
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆74 · Updated 2 weeks ago
- ☆39 · Updated last year
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆96 · Updated last month
- ☆23 · Updated last month
- Steering Llama 2 with Contrastive Activation Addition ☆137 · Updated 10 months ago
- A resource repository for representation engineering in large language models ☆116 · Updated 5 months ago
- ☆29 · Updated 11 months ago
- ☆93 · Updated last year
- Official Repository for The Paper: Safety Alignment Should Be Made More Than Just a Few Tokens Deep ☆86 · Updated 9 months ago
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆63 · Updated 6 months ago
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆91 · Updated last year
- [NeurIPS 2024] How do Large Language Models Handle Multilingualism? ☆31 · Updated 5 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆53 · Updated 4 months ago
- ☆54 · Updated 8 months ago
- Personalized Steering of Large Language Models: Versatile Steering Vectors Through Bi-directional Preference Optimization ☆21 · Updated 8 months ago
- Algebraic value editing in pretrained language models ☆64 · Updated last year
- General-purpose activation steering library ☆56 · Updated 3 months ago
- LoFiT: Localized Fine-tuning on LLM Representations ☆35 · Updated 2 months ago
- ☆21 · Updated last month
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆90 · Updated 10 months ago
- For OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research ☆107 · Updated this week
- [ICLR 2024] Showing properties of safety tuning and exaggerated safety ☆80 · Updated 11 months ago
- ☆50 · Updated last year
- A library for efficient patching and automatic circuit discovery ☆62 · Updated last month
- Function Vectors in Large Language Models (ICLR 2024) ☆156 · Updated 3 weeks ago
- Providing the answer to "How to do patching on all available SAEs on GPT-2?". It is an official repository of the implementation of the p… ☆11 · Updated 2 months ago
- [ICLR 2025] Official Repository for "Tamper-Resistant Safeguards for Open-Weight LLMs" ☆52 · Updated last month
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆107 · Updated last year
- ☆54 · Updated 2 years ago
- ☆14 · Updated 10 months ago