aogara-ds / hoodwinked-website
A text-based game where language models learn to lie and to detect lies.
☆12 · Updated last year
Alternatives and similar repositories for hoodwinked-website:
Users interested in hoodwinked-website are comparing it to the repositories listed below.
- Measuring the situational awareness of language models ☆34 · Updated last year
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆26 · Updated 11 months ago
- A library for efficient patching and automatic circuit discovery. ☆63 · Updated this week
- Code for the NeurIPS 2024 ATTRIB paper "Attribution Patching Outperforms Automated Circuit Discovery" ☆31 · Updated 10 months ago
- Code and data repo for the CoNLL paper "Future Lens: Anticipating Subsequent Tokens from a Single Hidden State" ☆18 · Updated last year
- Mechanistic interpretability for transformer models ☆50 · Updated 2 years ago
- ☆26 · Updated last year
- ☆54 · Updated 7 months ago
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆74 · Updated last year
- PyTorch and NNsight implementation of AtP* (Kramar et al., 2024, DeepMind) ☆18 · Updated 3 months ago
- ☆42 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆69 · Updated 10 months ago
- ☆34 · Updated last year
- Algebraic value editing in pretrained language models ☆63 · Updated last year
- Redwood Research's transformer interpretability tools ☆14 · Updated 3 years ago
- Official code for the paper "Language Models Learn to Mislead Humans via RLHF" ☆13 · Updated 6 months ago
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆95 · Updated 2 months ago
- A collection of approaches for accessing and modifying internal model activations in LLMs ☆15 · Updated 6 months ago
- ☆133 · Updated 5 months ago
- ☆73 · Updated this week
- ☆25 · Updated last year
- Experiments with representation engineering ☆11 · Updated last year
- ☆82 · Updated 8 months ago
- LLM experiments done during SERI MATS, focusing on activation steering and interpreting activation spaces ☆91 · Updated last year
- ☆13 · Updated 8 months ago
- ☆49 · Updated 3 months ago
- ☆114 · Updated 8 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆53 · Updated last year
- ☆219 · Updated 6 months ago
- Code for reproducing the paper "Not All Language Model Features Are Linear" ☆73 · Updated 4 months ago