LRudL / sad
Situational Awareness Dataset
☆33 · Updated 5 months ago
Alternatives and similar repositories for sad
Users interested in sad are comparing it to the repositories listed below
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆27 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆70 · Updated 11 months ago
- ☆44 · Updated last year
- A library for efficient patching and automatic circuit discovery. ☆65 · Updated last month
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆77 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆92 · Updated this week
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆103 · Updated 3 months ago
- ☆26 · Updated 3 months ago
- Code to enable layer-level steering in LLMs using sparse autoencoders ☆18 · Updated last month
- Official code for our paper: "Language Models Learn to Mislead Humans via RLHF" ☆14 · Updated 7 months ago
- ☆70 · Updated 4 months ago
- ☆62 · Updated 2 weeks ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆75 · Updated 6 months ago
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery" ☆34 · Updated last year
- Evaluating the Moral Beliefs Encoded in LLMs ☆26 · Updated 5 months ago
- Sparse Autoencoder Training Library ☆52 · Updated last month
- ☆28 · Updated last year
- Steering Llama 2 with Contrastive Activation Addition ☆155 · Updated last year
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆107 · Updated last year
- Algebraic value editing in pretrained language models ☆65 · Updated last year
- ☆34 · Updated 2 weeks ago
- A mechanistic approach for understanding and detecting factual errors of large language models. ☆46 · Updated 11 months ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆98 · Updated 3 months ago
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆55 · Updated 7 months ago
- ☆84 · Updated 10 months ago
- PyTorch and NNsight implementation of AtP* (Kramár et al., 2024, DeepMind) ☆18 · Updated 4 months ago
- Open Source Replication of Anthropic's Alignment Faking Paper ☆11 · Updated 2 months ago
- Providing the answer to "How to do patching on all available SAEs on GPT-2?". It is an official repository of the implementation of the p… ☆11 · Updated 4 months ago
- A collection of different ways to implement accessing and modifying internal model activations for LLMs ☆17 · Updated 7 months ago
- Attribution-based Parameter Decomposition ☆24 · Updated this week
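Several of the repositories above (steering vectors, contrastive activation addition, algebraic value editing) revolve around the same core operation: compute a direction as the difference of mean activations on contrastive prompt sets, then add a scaled copy of it to the model's activations at one layer. A framework-free toy sketch of that arithmetic, with all names and data hypothetical:

```python
# Toy sketch of contrastive activation steering (hypothetical data, no ML framework).
# A "steering vector" is the difference between mean activations collected on
# positive vs. negative prompts; at inference it is added, scaled, to activations.

def mean_vector(rows):
    """Component-wise mean of a list of equal-length activation vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def steering_vector(pos_acts, neg_acts):
    """Difference-of-means direction between the two contrastive activation sets."""
    pos_mu = mean_vector(pos_acts)
    neg_mu = mean_vector(neg_acts)
    return [p - q for p, q in zip(pos_mu, neg_mu)]

def steer(activation, vec, alpha=1.0):
    """Add the scaled steering vector to a single activation vector."""
    return [a + alpha * v for a, v in zip(activation, vec)]

# Two fake 3-dimensional activation sets standing in for layer activations.
pos = [[1.0, 0.0, 2.0], [3.0, 0.0, 0.0]]
neg = [[0.0, 1.0, 1.0], [0.0, 1.0, 1.0]]
v = steering_vector(pos, neg)                    # [2.0, -1.0, 0.0]
steered = steer([0.0, 0.0, 0.0], v, alpha=0.5)   # [1.0, -0.5, 0.0]
```

In the listed repositories this addition is typically implemented as a forward hook on a chosen transformer layer; the sketch only shows the vector arithmetic itself.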