ajyl / dpo_toxic
A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity.
☆72 · Updated 2 months ago
Alternatives and similar repositories for dpo_toxic
Users interested in dpo_toxic are comparing it to the repositories listed below.
- General-purpose activation steering library ☆74 · Updated 3 weeks ago
- ☆40 · Updated last year
- ☆36 · Updated 2 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆79 · Updated 2 months ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆97 · Updated 3 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆58 · Updated 6 months ago
- ☆94 · Updated last year
- ☆29 · Updated last year
- ☆58 · Updated 10 months ago
- ☆34 · Updated 2 weeks ago
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆67 · Updated 8 months ago
- Algebraic value editing in pretrained language models ☆65 · Updated last year
- AI Logging for Interpretability and Explainability🔬 ☆119 · Updated 11 months ago
- Personalized Steering of Large Language Models: Versatile Steering Vectors Through Bi-directional Preference Optimization ☆25 · Updated 10 months ago
- ☆21 · Updated 2 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆110 · Updated last year
- Augmenting Statistical Models with Natural Language Parameters ☆26 · Updated 8 months ago
- A library for efficient patching and automatic circuit discovery ☆65 · Updated last month
- Steering Llama 2 with Contrastive Activation Addition ☆154 · Updated last year
- ☆44 · Updated last year
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆93 · Updated last year
- ☆50 · Updated last year
- ☆41 · Updated 8 months ago
- ☆54 · Updated 2 years ago
- Align your LM to express calibrated verbal statements of confidence in its long-form generations ☆25 · Updated 11 months ago
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives" ☆23 · Updated 7 months ago
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆34 · Updated 6 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆167 · Updated last month
- LoFiT: Localized Fine-tuning on LLM Representations ☆39 · Updated 4 months ago
- ☆49 · Updated last year