ArthurConmy / MishformerLens
MishformerLens aims to be a drop-in replacement for TransformerLens that AST-patches HuggingFace Transformers rather than reimplementing a custom, numerically inaccurate Transformer architecture.
☆10 · Updated 10 months ago
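The description contrasts reimplementing the architecture (as TransformerLens does) with instrumenting HuggingFace's own modules, so that cached activations inherit HuggingFace's exact numerics. Below is a rough sketch of the hook-based version of that idea. It is not MishformerLens's actual API (which works by AST-patching the HuggingFace source); it uses plain PyTorch forward hooks, and the GPT-2 model choice and TransformerLens-style cache keys are illustrative assumptions.

```python
# Sketch only: capture per-block activations from an unmodified HuggingFace
# model via PyTorch forward hooks, so the numbers match HF's exactly.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative choice, not prescribed by MishformerLens
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

cache = {}

def make_hook(name):
    def hook(module, inputs, output):
        # GPT-2 blocks return a tuple; the hidden states are the first element.
        hidden = output[0] if isinstance(output, tuple) else output
        cache[name] = hidden.detach()
    return hook

# Cache keys mimic TransformerLens naming purely for illustration.
handles = [
    block.register_forward_hook(make_hook(f"blocks.{i}.hook_resid_post"))
    for i, block in enumerate(model.transformer.h)
]

with torch.no_grad():
    model(**tokenizer("Hello, world", return_tensors="pt"))

for handle in handles:
    handle.remove()  # always detach hooks when done

print({name: tensor.shape for name, tensor in cache.items()})
```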
Alternatives and similar repositories for MishformerLens
Users interested in MishformerLens are comparing it to the libraries listed below.
- A tiny, easily hackable implementation of a feature dashboard.☆12 · Updated last month
- graphpatch is a library for activation patching on PyTorch neural network models.☆18 · Updated 6 months ago
- Erasing concepts from neural representations with provable guarantees☆232 · Updated 6 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research).☆210 · Updated 7 months ago
- PyTorch and NNsight implementation of AtP* (Kramar et al., 2024, DeepMind)☆18 · Updated 6 months ago
- How do transformer LMs encode relations?☆52 · Updated last year
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models …☆202 · Updated last week
- ☆125 · Updated last year
- Sparse Autoencoder Training Library☆54 · Updated 3 months ago
- Mechanistic Interpretability Visualizations using React☆273 · Updated 7 months ago
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery"☆40 · Updated last year
- A library for efficient patching and automatic circuit discovery.☆74 · Updated 3 weeks ago
- Engine for collecting, uploading, and downloading model activations☆20 · Updated 4 months ago
- ☆47 · Updated 2 weeks ago
- ☆104 · Updated 6 months ago
- Steering vectors for transformer language models in PyTorch / HuggingFace☆120 · Updated 5 months ago
- ☆14 · Updated last month
- ☆234 · Updated 10 months ago
- ☆32 · Updated last year
- Code for reproducing our paper "Not All Language Model Features Are Linear"☆77 · Updated 8 months ago
- Utilities for the HuggingFace transformers library☆70 · Updated 2 years ago
- ☆9 · Updated 8 months ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations.☆209 · Updated last week
- Extract full next-token probabilities via language model APIs☆247 · Updated last year
- ☆28 · Updated last year
- Mechanistic Interpretability for Transformer Models☆51 · Updated 3 years ago
- Experiments with representation engineering☆12 · Updated last year
- ☆109 · Updated 3 weeks ago
- ☆275 · Updated last year
- Code and data repo for the CoNLL paper "Future Lens: Anticipating Subsequent Tokens from a Single Hidden State"☆18 · Updated last year