aogara-ds / hoodwinked-website
A text-based game where language models learn to lie and to detect lies.
☆11 · Updated 11 months ago
Related projects:
- Steering Llama 2 with Contrastive Activation Addition (see the sketch after this list) ☆83 · Updated 3 months ago
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery" ☆22 · Updated 3 months ago
- A library for efficient patching and automatic circuit discovery. ☆18 · Updated 3 weeks ago
- Algebraic value editing in pretrained language models ☆54 · Updated 10 months ago
- Code and data repo for the CoNLL paper "Future Lens: Anticipating Subsequent Tokens from a Single Hidden State" ☆14 · Updated 8 months ago
- NeuroSurgeon is a package that enables researchers to uncover and manipulate subnetworks within models in Huggingface Transformers ☆33 · Updated last month
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". ☆76 · Updated 3 weeks ago
- AI Logging for Interpretability and Explainability 🔬 ☆74 · Updated 3 months ago
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆73 · Updated last year
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆59 · Updated 10 months ago
- Full code for the sparse probing paper. ☆47 · Updated 9 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆78 · Updated last week
- Röttger et al. (2023): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆55 · Updated 8 months ago
- Mechanistic Interpretability for Transformer Models ☆48 · Updated 2 years ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆56 · Updated 3 months ago
- (Model-written) LLM evals library ☆14 · Updated last month
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆46 · Updated last month
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆16 · Updated 3 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆70 · Updated 5 months ago
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆26 · Updated last month
- Mechanistic Interpretability Visualizations using React ☆175 · Updated 2 months ago
- Improving Alignment and Robustness with Circuit Breakers ☆124 · Updated 2 months ago
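
Several entries above (Contrastive Activation Addition, algebraic value editing, the SERI MATS activation-steering experiments, the refusal-direction work) share the same core operation: add a direction, computed from a contrastive pair of prompts, into the residual stream at inference time. Below is a minimal sketch of that operation using GPT-2 small via Hugging Face `transformers`; the layer index, prompts, and scaling factor are illustrative assumptions, not values taken from any of the linked repositories.

```python
# Minimal activation-steering sketch (contrastive-pair steering vector).
# Assumptions: GPT-2 small, an arbitrary layer, arbitrary prompts and scale.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

LAYER = 6  # residual stream after block 6 (GPT-2 small has 12 blocks)

def last_token_activation(prompt: str) -> torch.Tensor:
    """Residual-stream activation of the final token after the chosen block."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # hidden_states[0] is the embedding output; hidden_states[i] follows block i.
    return out.hidden_states[LAYER][0, -1, :]

# Steering vector: the activation difference for a contrastive prompt pair.
steer = last_token_activation("I love this film because it is") \
      - last_token_activation("I hate this film because it is")

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + 4.0 * steer / steer.norm()  # the scale is a free parameter
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

# Hook the block whose output is hidden_states[LAYER], generate, then clean up.
handle = model.transformer.h[LAYER - 1].register_forward_hook(add_steering)
ids = tok("The film was", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=20, do_sample=False)[0]))
handle.remove()  # restore the un-steered model
```

With the hook in place, greedy decoding from the neutral prompt should drift toward the positive completion; removing the hook restores the base model. The steering repositories linked above typically develop the idea further, for example by averaging the difference over many contrastive pairs and sweeping over layers and scales.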