MadryLab / modelcomponents
Decomposing and Editing Predictions by Modeling Model Computation
☆139 · Updated last year
Alternatives and similar repositories for modelcomponents
Users interested in modelcomponents are comparing it to the repositories listed below.
- Official implementation of MAIA, A Multimodal Automated Interpretability Agent ☆102 · Updated 3 months ago
- Official PyTorch implementation for "Vision-Language Models Create Cross-Modal Task Representations", ICML 2025 ☆32 · Updated 9 months ago
- Official PyTorch implementation of "The Hidden Attention of Mamba Models" ☆231 · Updated 3 months ago
- Official implementation of Phi-Mamba, a MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… ☆119 · Updated last year
- Sparse and discrete interpretability tool for neural networks ☆64 · Updated last year
- [ICML 2025] Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction ☆84 · Updated 8 months ago
- 👋 Overcomplete is a Vision-based SAE Toolbox ☆119 · Updated 2 months ago
- ☆33 · Updated last year
- PyTorch library for Active Fine-Tuning ☆96 · Updated 4 months ago
- Code for reproducing the paper "Not All Language Model Features Are Linear" ☆83 · Updated last year
- Official code for the ICML 2024 paper "The Entropy Enigma: Success and Failure of Entropy Minimization" ☆55 · Updated last year
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆135 · Updated 3 months ago
- [ICCV 2025] Auto-interpretation pipeline and many other functionalities for multimodal SAE analysis ☆175 · Updated 4 months ago
- Towards Understanding the Mixture-of-Experts Layer in Deep Learning ☆35 · Updated 2 years ago
- ☆91 · Updated last year
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch ☆182 · Updated 7 months ago
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆31 · Updated last year
- ☆50 · Updated last year
- ☆208 · Updated 2 years ago
- [COLING'25] Exploring Concept Depth: How Large Language Models Acquire Knowledge at Different Layers? ☆82 · Updated last year
- A curated list of model merging methods ☆96 · Updated 2 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆241 · Updated last week
- [NeurIPS 2024] Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" ☆237 · Updated 3 months ago
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆42 · Updated 3 weeks ago
- Awesome list of papers that extend Mamba to various applications ☆138 · Updated 7 months ago
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆46 · Updated 3 months ago
- Optimal Transport in the Big Data Era ☆116 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" ☆195 · Updated last year
- Implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆114 · Updated last week
- ☆51 · Updated 2 years ago