tim-lawson / mlsae
Multi-Layer Sparse Autoencoders (ICLR 2025)
☆22 · Updated 4 months ago
Alternatives and similar repositories for mlsae
Users interested in mlsae are comparing it to the libraries listed below.
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆55 · Updated 7 months ago
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity… ☆27 · Updated last year
- ☆44 · Updated 7 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆95 · Updated 2 weeks ago
- A library for efficient patching and automatic circuit discovery. ☆67 · Updated 2 months ago
- Improving Steering Vectors by Targeting Sparse Autoencoder Features ☆20 · Updated 7 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆75 · Updated 6 months ago
- ☆34 · Updated 5 months ago
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆37 · Updated 7 months ago
- ☆95 · Updated 4 months ago
- Sparse Autoencoder Training Library ☆52 · Updated last month
- ☆23 · Updated 4 months ago
- ☆95 · Updated last year
- Answers the question "How to do patching on all available SAEs on GPT-2?". The official repository of the implementation of the p… ☆11 · Updated 4 months ago
- ☆173 · Updated 2 months ago
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆47 · Updated 8 months ago
- General-purpose activation steering library ☆78 · Updated last month
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery" ☆35 · Updated last year
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆185 · Updated this week
- Steering Llama 2 with Contrastive Activation Addition ☆158 · Updated last year
- Code for "Reasoning to Learn from Latent Thoughts" ☆104 · Updated 2 months ago
- ☆14 · Updated last year
- ☆85 · Updated 10 months ago
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆27 · Updated last year
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆69 · Updated 5 months ago
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning". ☆56 · Updated 3 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆88 · Updated 8 months ago
- For OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research. ☆121 · Updated this week
- ☆131 · Updated 7 months ago
- ☆101 · Updated 3 weeks ago