PAIR-code / pretraining-tda
☆20 · Updated 3 months ago
Alternatives and similar repositories for pretraining-tda
Users interested in pretraining-tda are comparing it to the libraries listed below.
- Simple and scalable tools for data-driven pretraining data selection. ☆24 · Updated 3 months ago
- Code for "Tracing Knowledge in Language Models Back to the Training Data" ☆38 · Updated 2 years ago
- Answers the question "How to do patching on all available SAEs on GPT-2?". The official repository of the implementation of the p… ☆11 · Updated 4 months ago
- A library for efficient patching and automatic circuit discovery. ☆65 · Updated last month
- ☆94 · Updated last year
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity… ☆26 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆89 · Updated last week
- ☆83 · Updated 9 months ago
- ☆29 · Updated 10 months ago
- ☆44 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- Landing page for MIB: A Mechanistic Interpretability Benchmark ☆10 · Updated last week
- ☆72 · Updated last year
- AI Logging for Interpretability and Explainability 🔬 ☆119 · Updated 11 months ago
- Sparse probing paper full code. ☆56 · Updated last year
- ☆12 · Updated last year
- Code for the paper "A Mechanistic Interpretation of Arithmetic Reasoning in Language Models using Causal Mediation Analysis" ☆18 · Updated 3 months ago
- Forcing Diffuse Distributions out of Language Models ☆15 · Updated 8 months ago
- General-purpose activation steering library ☆75 · Updated 3 weeks ago
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆91 · Updated 3 years ago
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆55 · Updated 7 months ago
- ☆34 · Updated 2 weeks ago
- ☆38 · Updated last year
- Data and code for the preprint "In-Context Learning with Long-Context Models: An In-Depth Exploration" ☆35 · Updated 9 months ago
- ☆50 · Updated last year
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆76 · Updated last year
- ☆19 · Updated last year
- Code repository for the paper "Mission: Impossible Language Models." ☆52 · Updated last month
- ☆34 · Updated last year
- How do transformer LMs encode relations? ☆48 · Updated last year