LRudL / sad
Situational Awareness Dataset
☆42 · Updated last year
Alternatives and similar repositories for sad
Users interested in sad are comparing it to the libraries listed below.
- Measuring the situational awareness of language models ☆39 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- Sparse Autoencoder Training Library ☆56 · Updated 7 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆84 · Updated last year
- A library for efficient patching and automatic circuit discovery. ☆82 · Updated 5 months ago
- ☆36 · Updated last year
- ☆33 · Updated 10 months ago
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆28 · Updated last year
- A TinyStories LM with SAEs and transcoders ☆14 · Updated 8 months ago
- Functional Benchmarks and the Reasoning Gap ☆90 · Updated last year
- Steering vectors for transformer language models in Pytorch / Huggingface ☆134 · Updated 10 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated 2 years ago
- Code to enable layer-level steering in LLMs using sparse autoencoders ☆28 · Updated 3 months ago
- Official Code for our paper: "Language Models Learn to Mislead Humans via RLHF" ☆18 · Updated last year
- ☆17 · Updated 2 years ago
- ☆75 · Updated last year
- CausalGym: Benchmarking causal interpretability methods on linguistic tasks ☆51 · Updated last year
- A tiny, easily hackable implementation of a feature dashboard. ☆15 · Updated 2 months ago
- ☆125 · Updated 10 months ago
- ☆144 · Updated 3 months ago
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆122 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆61 · Updated last year
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆123 · Updated last year
- Code repo for the model organisms and convergent directions of EM papers. ☆40 · Updated 3 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆234 · Updated last week
- datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆99 · Updated 2 years ago
- Code for Preventing Language Models From Hiding Their Reasoning, which evaluates defenses against LLM steganography. ☆24 · Updated last year
- Algebraic value editing in pretrained language models ☆66 · Updated 2 years ago
- PyTorch and NNsight implementation of AtP* (Kramár et al., 2024, DeepMind) ☆20 · Updated 11 months ago