meg-tong / sycophancy-eval
Datasets from the paper "Towards Understanding Sycophancy in Language Models"
☆86 · Updated last year
Alternatives and similar repositories for sycophancy-eval
Users interested in sycophancy-eval are comparing it to the libraries listed below.
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆113 · Updated last year
- ☆89 · Updated 11 months ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆72 · Updated last year
- A library for efficient patching and automatic circuit discovery ☆73 · Updated 2 weeks ago
- LLM experiments done during SERI MATS, focusing on activation steering and interpreting activation spaces ☆96 · Updated last year
- ☆121 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆112 · Updated last month
- ☆99 · Updated last year
- Inspecting and Editing Knowledge Representations in Language Models ☆116 · Updated 2 years ago
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆119 · Updated 5 months ago
- ☆137 · Updated 2 weeks ago
- PASTA: Post-hoc Attention Steering for LLMs ☆122 · Updated 8 months ago
- Code for the paper "Does Localization Inform Editing? Surprising Differences in Where Knowledge Is Stored vs. Ca…" ☆61 · Updated 2 years ago
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆28 · Updated last year
- Function Vectors in Large Language Models (ICLR 2024) ☆175 · Updated 3 months ago
- Steering Llama 2 with Contrastive Activation Addition ☆167 · Updated last year
- ☆84 · Updated 6 months ago
- Repo accompanying the paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers" ☆78 · Updated last year
- Evaluating LLMs with fewer examples ☆160 · Updated last year
- A repository for transformer critique learning and generation ☆90 · Updated last year
- Code and data accompanying the arXiv paper "Faithful Chain-of-Thought Reasoning" ☆162 · Updated last year
- ☆95 · Updated 3 months ago
- The Prism Alignment Project ☆79 · Updated last year
- Open-source replication of Anthropic's Crosscoders for model diffing ☆57 · Updated 9 months ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations ☆209 · Updated last week
- Code accompanying the paper "Pretraining Language Models with Human Preferences" ☆182 · Updated last year
- ☆154 · Updated 8 months ago
- Algebraic value editing in pretrained language models ☆65 · Updated last year
- For the OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research ☆141 · Updated this week
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆200 · Updated this week