tim-lawson / mlsae
Multi-Layer Sparse Autoencoders (ICLR 2025)
☆24 · Updated 6 months ago
Alternatives and similar repositories for mlsae
Users interested in mlsae are comparing it to the libraries listed below.
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆70 · Updated 2 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆124 · Updated 2 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆77 · Updated 8 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆177 · Updated 4 months ago
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆58 · Updated 9 months ago
- A library for efficient patching and automatic circuit discovery ☆76 · Updated last month
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning" ☆59 · Updated last week
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆39 · Updated 9 months ago
- ☆103 · Updated 6 months ago
- Exploration of automated dataset selection approaches at large scales ☆47 · Updated 5 months ago
- ☆52 · Updated 4 months ago
- ☆50 · Updated last year
- ☆53 · Updated 9 months ago
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity… ☆27 · Updated last year
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆206 · Updated last week
- Unofficial Implementation of Selective Attention Transformer ☆17 · Updated 9 months ago
- Test-time training on nearest neighbors for large language models ☆45 · Updated last year
- Code for "Reasoning to Learn from Latent Thoughts" ☆116 · Updated 4 months ago
- ☆23 · Updated 6 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆81 · Updated 9 months ago
- ☆91 · Updated last year
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆91 · Updated 9 months ago
- Stick-breaking attention ☆59 · Updated last month
- ☆40 · Updated 7 months ago
- Collection of reverse engineering in large models ☆34 · Updated 7 months ago
- For the OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research ☆145 · Updated this week
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆33 · Updated 3 weeks ago
- ☆78 · Updated 4 months ago
- ☆34 · Updated 7 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆95 · Updated last month