tim-lawson / mlsae
Multi-Layer Sparse Autoencoders (ICLR 2025)
☆28 · Updated 11 months ago
Alternatives and similar repositories for mlsae
Users interested in mlsae are comparing it to the libraries listed below.
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆63 · Updated last year
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆74 · Updated 6 months ago
- ☆57 · Updated last year
- ☆58 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆161 · Updated 6 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆83 · Updated last year
- Function Vectors in Large Language Models (ICLR 2024) ☆190 · Updated 9 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆236 · Updated last week
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆41 · Updated last year
- A library for efficient patching and automatic circuit discovery. ☆85 · Updated 3 weeks ago
- ☆113 · Updated 11 months ago
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery" ☆45 · Updated last year
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆94 · Updated last year
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆57 · Updated 2 months ago
- Tools for optimizing steering vectors in LLMs. ☆18 · Updated 9 months ago
- Gemstones: A Model Suite for Multi-Faceted Scaling Laws (NeurIPS 2025) ☆31 · Updated 3 months ago
- ☆142 · Updated 3 weeks ago
- ☆52 · Updated 9 months ago
- ☆89 · Updated 9 months ago
- Improving Steering Vectors by Targeting Sparse Autoencoder Features ☆26 · Updated last year
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity… ☆30 · Updated 2 months ago
- Sparse Autoencoder Training Library ☆56 · Updated 8 months ago
- Universal Neurons in GPT2 Language Models ☆31 · Updated last year
- ☆98 · Updated last year
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning". ☆64 · Updated 5 months ago
- Code for reproducing our paper "Low Rank Adapting Models for Sparse Autoencoder Features" ☆17 · Updated 9 months ago
- Collection of Reverse Engineering in Large Model ☆36 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆87 · Updated last year
- Code for "Reasoning to Learn from Latent Thoughts" ☆124 · Updated 9 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆152 · Updated 6 months ago