thestephencasper / everything-you-need
we got you bro
☆35 · Updated 7 months ago
Alternatives and similar repositories for everything-you-need:
Users interested in everything-you-need are comparing it to the libraries listed below.
- Universal Neurons in GPT2 Language Models ☆27 · Updated 9 months ago
- Tools for studying developmental interpretability in neural networks. ☆86 · Updated last month
- 🧠 Starter templates for doing interpretability research ☆67 · Updated last year
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆26 · Updated 9 months ago
- Mechanistic Interpretability for Transformer Models ☆50 · Updated 2 years ago
- Redwood Research's transformer interpretability tools ☆14 · Updated 2 years ago
- Sparse Autoencoder Training Library ☆43 · Updated 4 months ago
- A dataset of alignment research and code to reproduce it ☆74 · Updated last year
- Steering vectors for transformer language models in Pytorch / Huggingface ☆90 · Updated last month
- Code and Data Repo for the CoNLL Paper -- Future Lens: Anticipating Subsequent Tokens from a Single Hidden State ☆18 · Updated last year
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆161 · Updated this week
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆45 · Updated 4 months ago
- Improving Steering Vectors by Targeting Sparse Autoencoder Features ☆16 · Updated 4 months ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆117 · Updated 2 years ago
- Measuring the situational awareness of language models ☆34 · Updated last year
- PyTorch and NNsight implementation of AtP* (Kramar et al 2024, DeepMind) ☆18 · Updated 2 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆189 · Updated 3 months ago
- Mechanistic Interpretability Visualizations using React ☆233 · Updated 3 months ago
- Sparse and discrete interpretability tool for neural networks ☆59 · Updated last year
- Code for our paper "Decomposing The Dark Matter of Sparse Autoencoders" ☆21 · Updated last month