ArthurConmy / MishformerLens
MishformerLens is intended as a drop-in replacement for TransformerLens: instead of re-implementing a custom (and numerically inaccurate) Transformer architecture, it AST-patches the HuggingFace Transformers implementation.
☆10 · Updated 6 months ago
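Since the project describes itself as a drop-in replacement, a minimal usage sketch would mirror TransformerLens's `HookedTransformer` interface. The snippet below follows the standard TransformerLens API; the `mishformer_lens` import path and the exact degree of API parity are assumptions based on the "drop-in replacement" claim, not confirmed from the repository itself.

```python
# Minimal sketch of the intended drop-in usage, assuming MishformerLens exposes
# the same HookedTransformer interface as TransformerLens. The `mishformer_lens`
# module name is an assumption; the rest follows the TransformerLens API.

# from transformer_lens import HookedTransformer  # existing TransformerLens code
from mishformer_lens import HookedTransformer     # assumed drop-in import

model = HookedTransformer.from_pretrained("gpt2")
logits, cache = model.run_with_cache("Hello, world!")

# Activations are read out via the usual hook names, but the forward pass is
# meant to run through AST-patched HuggingFace modules rather than a custom
# re-implementation of the architecture.
resid_post = cache["blocks.0.hook_resid_post"]
print(resid_post.shape)  # (batch, seq_len, d_model)
```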
Alternatives and similar repositories for MishformerLens:
Users interested in MishformerLens are comparing it to the libraries listed below.
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery" ☆31 · Updated 10 months ago
- PyTorch and NNsight implementation of AtP* (Kramar et al 2024, DeepMind) ☆18 · Updated 3 months ago
- ☆90 · Updated 2 months ago
- ☆11 · Updated last month
- A library for efficient patching and automatic circuit discovery. ☆63 · Updated this week
- Sparse Autoencoder Training Library ☆48 · Updated 5 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆169 · Updated this week
- ☆29 · Updated last week
- How do transformer LMs encode relations? ☆47 · Updated last year
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity… ☆25 · Updated last year
- Experiments for efforts to train a new and improved T5 ☆77 · Updated last year
- Steering vectors for transformer language models in PyTorch / HuggingFace ☆95 · Updated 2 months ago
- Code and data repo for the CoNLL paper "Future Lens: Anticipating Subsequent Tokens from a Single Hidden State" ☆18 · Updated last year
- ☆104 · Updated 5 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆73 · Updated 4 months ago
- Repository for the code of the "PPL-MCTS: Constrained Textual Generation Through Discriminator-Guided Decoding" paper, NAACL'22 ☆65 · Updated 2 years ago
- ☆128 · Updated 3 weeks ago
- ☆36 · Updated 2 months ago
- Arrakis is a library to conduct, track, and visualize mechanistic interpretability experiments. ☆28 · Updated this week
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆197 · Updated 4 months ago
- Experiments with representation engineering ☆11 · Updated last year
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆120 · Updated 2 years ago
- ☆12 · Updated 2 weeks ago
- Repository for "I am a Strange Dataset: Metalinguistic Tests for Language Models" ☆43 · Updated last year
- A TinyStories LM with SAEs and transcoders ☆11 · Updated 3 weeks ago
- ☆9 · Updated 4 months ago
- A collection of different ways to implement accessing and modifying internal model activations for LLMs ☆15 · Updated 6 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆105 · Updated 5 months ago
- ☆80 · Updated 3 months ago
- ☆121 · Updated last year