tim-lawson / mlsae
Multi-Layer Sparse Autoencoders (ICLR 2025)
☆28 · Updated 10 months ago
Alternatives and similar repositories for mlsae
Users interested in mlsae are comparing it to the libraries listed below.
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆74 · Updated 6 months ago
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆63 · Updated last year
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆83 · Updated last year
- ☆112 · Updated 10 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆189 · Updated 8 months ago
- ☆56 · Updated 11 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆161 · Updated 6 months ago
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery" ☆44 · Updated last year
- ☆58 · Updated last year
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆234 · Updated last week
- Sparse Autoencoder Training Library ☆56 · Updated 8 months ago
- A library for efficient patching and automatic circuit discovery. ☆84 · Updated 5 months ago
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆41 · Updated last year
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆57 · Updated 2 months ago
- ☆138 · Updated last week
- Tools for optimizing steering vectors in LLMs. ☆15 · Updated 8 months ago
- ☆83 · Updated 2 weeks ago
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning". ☆63 · Updated 4 months ago
- Improving Steering Vectors by Targeting Sparse Autoencoder Features ☆25 · Updated last year
- ☆97 · Updated last year
- ☆23 · Updated 11 months ago
- ☆197 · Updated 2 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆151 · Updated 5 months ago
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity… ☆28 · Updated 2 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆86 · Updated last year
- Collection of reverse engineering resources for large models ☆36 · Updated 11 months ago
- ☆89 · Updated 9 months ago
- Universal Neurons in GPT2 Language Models ☆31 · Updated last year
- ☆33 · Updated 11 months ago
- Test-time-training on nearest neighbors for large language models ☆49 · Updated last year