meg-tong / sycophancy-eval
Datasets from the paper "Towards Understanding Sycophancy in Language Models"
☆100 · Updated 2 years ago
Alternatives and similar repositories for sycophancy-eval
Users interested in sycophancy-eval are comparing it to the repositories listed below.
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆124 · Updated last year
- ☆98 · Updated last year
- A library for efficient patching and automatic circuit discovery ☆84 · Updated 2 weeks ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- ☆100 · Updated last year
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆100 · Updated 2 years ago
- Inspecting and Editing Knowledge Representations in Language Models ☆119 · Updated 2 years ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆162 · Updated 6 months ago
- ☆116 · Updated last year
- Repo accompanying the paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers" ☆80 · Updated last year
- Steering Llama 2 with Contrastive Activation Addition ☆206 · Updated last year
- ☆135 · Updated last year
- ☆249 · Updated 3 years ago
- ☆85 · Updated 11 months ago
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆137 · Updated 10 months ago
- CausalGym: Benchmarking causal interpretability methods on linguistic tasks ☆51 · Updated last year
- ☆144 · Updated 5 months ago
- The Prism Alignment Project ☆87 · Updated last year
- Evaluating LLMs with fewer examples ☆169 · Updated last year
- ☆39 · Updated last year
- Code and data accompanying the arXiv paper "Faithful Chain-of-Thought Reasoning" ☆165 · Updated last year
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆28 · Updated last year
- Synthetic question-answering dataset to formally analyze the chain-of-thought output of large language models on a reasoning task ☆156 · Updated 4 months ago
- Self-Alignment with Principle-Following Reward Models ☆169 · Updated 3 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆190 · Updated 8 months ago
- Algebraic value editing in pretrained language models ☆67 · Updated 2 years ago
- Code accompanying the paper "Pretraining Language Models with Human Preferences" ☆180 · Updated last year
- Exploring the Limitations of Large Language Models on Multi-Hop Queries ☆29 · Updated 10 months ago
- PASTA: Post-hoc Attention Steering for LLMs ☆132 · Updated last year
- ☆108 · Updated last year