ajyl / dpo_toxic
A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity.
☆84 · Updated 8 months ago
Alternatives and similar repositories for dpo_toxic
Users interested in dpo_toxic are comparing it to the libraries listed below.
- [ICLR 2025] General-purpose activation steering library ☆123 · Updated 2 months ago
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆77 · Updated last year
- ☆102 · Updated 2 years ago
- ☆46 · Updated last year
- ☆29 · Updated last year
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆116 · Updated 9 months ago
- A resource repository for representation engineering in large language models ☆142 · Updated last year
- ☆68 · Updated last year
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆87 · Updated 8 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆67 · Updated last year
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs ☆57 · Updated last month
- ☆51 · Updated 2 years ago
- ☆61 · Updated 4 months ago
- Algebraic value editing in pretrained language models ☆66 · Updated 2 years ago
- ☆66 · Updated 8 months ago
- ☆51 · Updated last year
- Function Vectors in Large Language Models (ICLR 2024) ☆186 · Updated 7 months ago
- ☆57 · Updated 2 years ago
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆82 · Updated 11 months ago
- Personalized Steering of Large Language Models: Versatile Steering Vectors Through Bi-directional Preference Optimization ☆37 · Updated last year
- Answers the question "How to do patching on all available SAEs on GPT-2?"; the official repository of the implementation of the p… ☆12 · Updated 10 months ago
- LLM experiments done during SERI MATS, focusing on activation steering and interpreting activation spaces ☆100 · Updated 2 years ago
- AI Logging for Interpretability and Explainability 🔬 ☆133 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆98 · Updated last year
- Steering Llama 2 with Contrastive Activation Addition ☆195 · Updated last year
- ☆59 · Updated 2 years ago
- ☆42 · Updated last year
- Code for the paper "Spectral Editing of Activations for Large Language Model Alignments" ☆29 · Updated 11 months ago
- [EMNLP 2025 Main] ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆38 · Updated 3 months ago
- LoFiT: Localized Fine-tuning on LLM Representations ☆45 · Updated 10 months ago
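
Several of the repositories above (e.g. the Contrastive Activation Addition and SERI MATS entries) center on activation steering: adding a direction to the residual stream to shift model behavior. Below is a minimal, self-contained sketch of contrastive activation addition, not drawn from any listed repository; the model (gpt2), layer index, prompt pair, and scaling factor are all illustrative assumptions.

```python
# Minimal sketch of contrastive activation addition with forward hooks.
# Illustrative only: layer choice, prompts, and scale are assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

LAYER = 6  # assumed mid-depth block; real work sweeps layers

def mean_activation(prompt: str) -> torch.Tensor:
    """Mean residual-stream activation at LAYER over a prompt's tokens."""
    captured = {}
    def hook(module, inputs, output):
        captured["h"] = output[0]  # hidden states: (batch, seq, d_model)
    handle = model.transformer.h[LAYER].register_forward_hook(hook)
    with torch.no_grad():
        model(**tok(prompt, return_tensors="pt"))
    handle.remove()
    return captured["h"].mean(dim=1).squeeze(0)

# Steering vector = difference of mean activations for a contrastive pair.
steer = mean_activation("That was a kind, helpful reply.") \
      - mean_activation("That was a rude, hostile reply.")

def steering_hook(module, inputs, output):
    # Add the scaled vector at every position of the residual stream.
    return (output[0] + 4.0 * steer,) + tuple(output[1:])

handle = model.transformer.h[LAYER].register_forward_hook(steering_hook)
ids = tok("The customer asked for a refund and the agent said",
          return_tensors="pt")
out = model.generate(**ids, max_new_tokens=30, do_sample=False)
handle.remove()
print(tok.decode(out[0], skip_special_tokens=True))
```

The listed libraries differ mainly in how they obtain the vector (mean difference, preference optimization, SAE features, function vectors) and where they inject it; this sketch uses the simplest variant, a mean difference added at one layer.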