PAIR-code / pretraining-tda
⭐29 · Updated 9 months ago
Alternatives and similar repositories for pretraining-tda
Users interested in pretraining-tda are comparing it to the libraries listed below.
- AI Logging for Interpretability and Explainability ⭐133 · Updated last year
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ⭐56 · Updated 3 weeks ago
- A library for efficient patching and automatic circuit discovery (see the activation-patching sketch after this list). ⭐80 · Updated 4 months ago
- [ICLR 2025] General-purpose activation steering library ⭐119 · Updated 2 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ⭐225 · Updated last week
- Steering Llama 2 with Contrastive Activation Addition (see the steering sketch after this list). ⭐193 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ⭐141 · Updated 4 months ago
- ⭐136 · Updated this week
- Sparse probing paper full code. ⭐65 · Updated last year
- Provides the answer to "How to do patching on all available SAEs on GPT-2?". The official repository of the implementation of the p… ⭐12 · Updated 9 months ago
- ⭐101 · Updated 2 years ago
- ⭐94 · Updated last year
- ⭐188 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ⭐84 · Updated 8 months ago
- ⭐110 · Updated 9 months ago
- Performant framework for training, analyzing and visualizing Sparse Autoencoders (SAEs) and their frontier variants. ⭐163 · Updated this week
- Function Vectors in Large Language Models (ICLR 2024) ⭐184 · Updated 7 months ago
- ⭐60 · Updated 3 months ago
- ⭐129 · Updated last year
- ⭐19 · Updated 2 months ago
- ⭐195 · Updated last month
- Open source replication of Anthropic's Crosscoders for Model Diffing ⭐60 · Updated last year
- ⭐237 · Updated last year
- ⭐51 · Updated 2 years ago
- Algebraic value editing in pretrained language models ⭐66 · Updated 2 years ago
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery" ⭐42 · Updated last year
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ⭐80 · Updated last year
- Steering vectors for transformer language models in Pytorch / Huggingface ⭐129 · Updated 9 months ago
- How do transformer LMs encode relations? ⭐55 · Updated last year
- Simple and scalable tools for data-driven pretraining data selection. ⭐29 · Updated 5 months ago
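
Several of the steering-oriented entries above (the general-purpose activation steering library, Contrastive Activation Addition, steering-vectors, algebraic value editing) revolve around the same core move: take the difference between residual-stream activations on a contrastive pair of prompts and add it back during generation. Here is a minimal sketch of that idea using plain PyTorch/Transformers forward hooks; the model ("gpt2"), layer index, prompts, and scale are illustrative assumptions, and this is not the API of any listed library.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
LAYER = 6  # assumption: which block's output to steer

def mean_resid(prompt: str) -> torch.Tensor:
    """Mean residual-stream activation after block LAYER for the prompt."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # hidden_states[0] is the embeddings, so block LAYER's output is LAYER + 1.
    return out.hidden_states[LAYER + 1].mean(dim=1).squeeze(0)

# The steering vector is the activation difference on a contrastive pair.
steer = mean_resid("I love this!") - mean_resid("I hate this!")

def add_steer(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states.
    return (output[0] + 4.0 * steer,) + output[1:]  # assumption: scale 4.0

handle = model.transformer.h[LAYER].register_forward_hook(add_steer)
ids = tok("The movie was", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=20, do_sample=False)[0]))
handle.remove()
```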
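The patching-oriented entries (efficient patching and circuit discovery, attribution patching, SAE patching) similarly reduce to one primitive: cache an activation from a clean run, splice it into a corrupted run, and measure how the output moves. A minimal sketch under the same assumptions (plain hooks on "gpt2"; the layer, prompts, and the " Paris" readout token are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
LAYER = 8  # assumption: which block to patch

clean = tok("The Eiffel Tower is in the city of", return_tensors="pt")
corrupt = tok("The Colosseum is in the city of", return_tensors="pt")

# 1) Run the clean prompt and cache block LAYER's output.
cache = {}
def save(module, inputs, output):
    cache["resid"] = output[0].detach()

h = model.transformer.h[LAYER].register_forward_hook(save)
with torch.no_grad():
    model(**clean)
h.remove()

# 2) Re-run the corrupted prompt, splicing the clean activation into the
#    final token position (so differing prompt lengths don't matter).
def patch(module, inputs, output):
    hidden = output[0].clone()
    hidden[:, -1, :] = cache["resid"][:, -1, :]
    return (hidden,) + output[1:]

h = model.transformer.h[LAYER].register_forward_hook(patch)
with torch.no_grad():
    patched = model(**corrupt)
h.remove()

# How much does the patch push the corrupted run toward the clean answer?
paris = tok(" Paris")["input_ids"][0]
probs = patched.logits[0, -1].softmax(dim=-1)
print("p(' Paris' | patched corrupted run):", probs[paris].item())
```

Sweeping LAYER (and the patched token position) over the model is exactly the loop that the automated circuit-discovery and attribution-patching repositories above scale up and approximate.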