PAIR-code / pretraining-tda
⭐22 · Updated 5 months ago
Alternatives and similar repositories for pretraining-tda
Users that are interested in pretraining-tda are comparing it to the libraries listed below
- AI Logging for Interpretability and Explainability 💬 · ⭐125 · Updated last year
- ⭐96 · Updated last year
- Steering Llama 2 with Contrastive Activation Addition · ⭐167 · Updated last year
- ⭐109 · Updated 3 weeks ago
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. · ⭐52 · Updated 10 months ago
- ⭐89 · Updated last year
- A library for efficient patching and automatic circuit discovery. · ⭐73 · Updated 2 weeks ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … · ⭐202 · Updated this week
- Simple and scalable tools for data-driven pretraining data selection. · ⭐24 · Updated 2 months ago
- Providing the answer to "How to do patching on all available SAEs on GPT-2?". It is an official repository of the implementation of the p… · ⭐12 · Updated 6 months ago
- ⭐13 · Updated last year
- Forcing Diffuse Distributions out of Language Models · ⭐17 · Updated 10 months ago
- Function Vectors in Large Language Models (ICLR 2024) · ⭐175 · Updated 3 months ago
- [ICLR 2025] General-purpose activation steering library · ⭐87 · Updated last week
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… · ⭐94 · Updated 3 years ago
- Sparse probing paper full code. · ⭐58 · Updated last year
- Open source replication of Anthropic's Crosscoders for Model Diffing · ⭐57 · Updated 9 months ago
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". · ⭐78 · Updated last year
- ⭐103 · Updated 5 months ago
- Algebraic value editing in pretrained language models · ⭐65 · Updated last year
- ⭐47 · Updated last year
- Steering vectors for transformer language models in Pytorch / Huggingface · ⭐120 · Updated 5 months ago
- For OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research. · ⭐141 · Updated this week
- ⭐121 · Updated last year
- Sparse Autoencoder Training Library · ⭐54 · Updated 3 months ago
- ⭐157 · Updated 8 months ago
- ⭐47 · Updated 2 weeks ago
- How do transformer LMs encode relations? · ⭐52 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods · ⭐112 · Updated last month
- ⭐124 · Updated last year