ajyl / dpo_toxic
A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity.
☆61 · Updated 2 months ago
Alternatives and similar repositories for dpo_toxic:
Users interested in dpo_toxic are comparing it to the repositories listed below.
- ☆34 · Updated 11 months ago
- Official code for the paper "Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications" ☆65 · Updated 3 months ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆63 · Updated 6 months ago
- Röttger et al. (2023): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆77 · Updated last year
- ☆44 · Updated last year
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives" ☆18 · Updated 2 months ago
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆57 · Updated 3 months ago
- ☆51 · Updated last year
- A resource repository for representation engineering in large language models ☆90 · Updated 2 months ago
- ☆29 · Updated 8 months ago
- ☆20 · Updated 6 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆131 · Updated 3 months ago
- ☆85 · Updated last year
- ☆44 · Updated 6 months ago
- Steering Llama 2 with Contrastive Activation Addition ☆113 · Updated 7 months ago
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆85 · Updated last year
- AI Logging for Interpretability and Explainability 🔬 ☆97 · Updated 7 months ago
- ☆43 · Updated 5 months ago
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety ☆75 · Updated 8 months ago
- ☆61 · Updated last year
- Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆39 · Updated last month
- ☆24 · Updated 3 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆66 · Updated 2 years ago
- Align your LM to express calibrated verbal statements of confidence in its long-form generations ☆20 · Updated 7 months ago
- Official code for the ICML 2024 paper on Persona In-Context Learning (PICLe) ☆23 · Updated 6 months ago
- ☆47 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆111 · Updated 4 months ago
- Official implementation of the ICLR 2024 paper "Curiosity-driven Red Teaming for Large Language Models" (https://openreview.net/pdf?id=4KqkizX…) ☆67 · Updated 10 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆88 · Updated 7 months ago
- Landing Page for TOFU ☆107 · Updated 3 weeks ago