evan-lloyd / graphpatch
graphpatch is a library for activation patching on PyTorch neural network models.
☆20 · Updated 8 months ago
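For readers unfamiliar with the technique, the sketch below illustrates the general idea of activation patching: record an activation from one forward pass and substitute it into another. It uses plain PyTorch forward hooks on a toy model and is not graphpatch's actual API; the module index, shapes, and inputs are hypothetical.

```python
import torch
import torch.nn as nn

# Toy stand-in for a real network; graphpatch targets full PyTorch models.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

clean_input = torch.randn(1, 8)
corrupted_input = torch.randn(1, 8)

# 1. Record the first layer's output on the clean input.
saved = {}
def save_hook(module, inputs, output):
    saved["act"] = output.detach()

handle = model[0].register_forward_hook(save_hook)
model(clean_input)
handle.remove()

# 2. Re-run on the corrupted input, overwriting that layer's output with the
#    saved clean activation (returning a tensor from a forward hook replaces
#    the module's output).
def patch_hook(module, inputs, output):
    return saved["act"]

handle = model[0].register_forward_hook(patch_hook)
patched_logits = model(corrupted_input)
handle.remove()
```

Comparing `patched_logits` against an unpatched corrupted run shows how much of the behavior of interest is carried by the patched activation, which is the core measurement in activation-patching experiments.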
Alternatives and similar repositories for graphpatch
Users interested in graphpatch are comparing it to the libraries listed below.
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆221 · Updated 10 months ago
- ☆128 · Updated last year
- Sparse Autoencoder for Mechanistic Interpretability ☆272 · Updated last year
- ☆244 · Updated last year
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆218 · Updated last week
- Sparse Autoencoder Training Library ☆55 · Updated 5 months ago
- Steering vectors for transformer language models in PyTorch / Hugging Face (a generic sketch of the idea appears after this list) ☆125 · Updated 7 months ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆129 · Updated 3 years ago
- MishformerLens intends to be a drop-in replacement for TransformerLens that AST patches HuggingFace Transformers rather than implementing… ☆10 · Updated last year
- ☆81 · Updated 7 months ago
- Mechanistic Interpretability Visualizations using React ☆293 · Updated 10 months ago
- ☆278 · Updated last year
- Erasing concepts from neural representations with provable guarantees ☆238 · Updated 8 months ago
- ☆348 · Updated last month
- ☆142 · Updated last month
- ☆131 · Updated last week
- A collection of different ways to access and modify internal model activations in LLMs ☆19 · Updated last year
- Extract full next-token probabilities via language model APIs ☆247 · Updated last year
- ☆190 · Updated this week
- Decoder-only transformer, built from scratch with PyTorch ☆31 · Updated last year
- ☆29 · Updated last year
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆59 · Updated 11 months ago
- ☆30 · Updated last year
- ☆53 · Updated 2 months ago
- Sparsify transformers with SAEs and transcoders ☆640 · Updated last week
- Engine for collecting, uploading, and downloading model activations ☆24 · Updated 6 months ago
- Mechanistic Interpretability for Transformer Models ☆53 · Updated 3 years ago
- Modified to support crosscoder training. ☆23 · Updated last week
- ☆73 · Updated last week
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery" ☆42 · Updated last year
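Following up on the steering-vectors entry above: the sketch below shows the generic idea of steering, adding a fixed direction to one layer's output at inference time via a plain PyTorch forward hook. It is not the API of the library listed above; the toy model, layer choice, scale, and steering direction are all hypothetical.

```python
import torch
import torch.nn as nn

# Toy stand-in for a transformer block's output; dimensions are made up.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 16))

# In practice a steering vector is often derived from contrastive prompts
# (e.g. a difference of mean activations); here it is just random.
steering_vector = torch.randn(16)
scale = 4.0

def steer_hook(module, inputs, output):
    # Add the fixed direction to this layer's output at inference time.
    return output + scale * steering_vector

handle = model[2].register_forward_hook(steer_hook)
steered_output = model(torch.randn(1, 8))
handle.remove()
```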