PAIR-code / pretraining-tda
⭐26 · Updated 8 months ago
Alternatives and similar repositories for pretraining-tda
Users interested in pretraining-tda are comparing it to the libraries listed below.
- AI Logging for Interpretability and Explainability 🔬 ⭐129 · Updated last year
- ⭐97 · Updated last year
- A library for efficient patching and automatic circuit discovery. ⭐77 · Updated 2 months ago
- Answers the question "How to do patching on all available SAEs on GPT-2?". The official repository of the implementation of the p… ⭐12 · Updated 8 months ago
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ⭐56 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ⭐136 · Updated 3 months ago
- ⭐48 · Updated last year
- [ICLR 2025] General-purpose activation steering library ⭐108 · Updated 3 weeks ago
- ⭐174 · Updated 10 months ago
- Steering Llama 2 with Contrastive Activation Addition (see the minimal steering sketch after this list) ⭐187 · Updated last year
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ⭐80 · Updated last year
- Steering vectors for transformer language models in PyTorch / Hugging Face ⭐125 · Updated 7 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription 'Know Thyself.' This library lets language models … ⭐215 · Updated last week
- ⭐187 · Updated 2 months ago
- ⭐127 · Updated last week
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ⭐80 · Updated 7 months ago
- Function Vectors in Large Language Models (ICLR 2024) ⭐180 · Updated 5 months ago
- Open-source replication of Anthropic's Crosscoders for Model Diffing ⭐59 · Updated 11 months ago
- For OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research. ⭐154 · Updated this week
- ⭐91 · Updated last year
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ⭐76 · Updated last year
- ⭐231 · Updated last year
- ⭐56 · Updated 2 years ago
- Algebraic value editing in pretrained language models ⭐66 · Updated last year
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ⭐97 · Updated 4 years ago
- Forcing Diffuse Distributions out of Language Models ⭐17 · Updated last year
- ⭐52 · Updated 2 months ago
- Full code for the sparse probing paper. ⭐61 · Updated last year
- ⭐106 · Updated 8 months ago
- Code for the paper "A Mechanistic Interpretation of Arithmetic Reasoning in Language Models using Causal Mediation Analysis" ⭐19 · Updated 3 months ago
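
Several of the steering entries above (contrastive activation addition, the steering-vector and activation-steering libraries) share one core idea: take the difference of residual-stream activations between contrasting prompts and add that direction back in during generation. Below is a minimal sketch of that idea in plain PyTorch / Hugging Face transformers. It is not the API of any listed library; the model choice, layer index, steering scale, and function names are all illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # assumption: any GPT-2-style decoder with .transformer.h works similarly
LAYER = 6       # assumption: 0-indexed transformer block whose output we steer

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def mean_hidden(text: str) -> torch.Tensor:
    """Mean residual-stream activation after block LAYER for one prompt."""
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"), output_hidden_states=True)
    # hidden_states[0] is the embedding output, so block LAYER's output is index LAYER + 1
    return out.hidden_states[LAYER + 1].mean(dim=1).squeeze(0)

# Contrastive direction: positive prompt minus negative prompt.
steer = mean_hidden("I love this movie!") - mean_hidden("I hate this movie!")

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states;
    # returning a new tuple from a forward hook replaces the block's output.
    return (output[0] + 4.0 * steer,) + output[1:]  # assumption: scale 4.0 is ad hoc

handle = model.transformer.h[LAYER].register_forward_hook(add_steering)
try:
    ids = tok("The movie was", return_tensors="pt")
    out_ids = model.generate(**ids, max_new_tokens=20, do_sample=False)
    print(tok.decode(out_ids[0]))
finally:
    handle.remove()  # detach the hook so the model is left unmodified
```

The libraries listed above package up heavier-duty versions of this: many contrast pairs, per-token-position extraction, trained rather than averaged vectors, and cleaner context-manager APIs.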