MadryLab / modelcomponents
Decomposing and Editing Predictions by Modeling Model Computation
☆139 Updated last year
Alternatives and similar repositories for modelcomponents
Users interested in modelcomponents are comparing it to the repositories listed below.
- Official implementation of MAIA, A Multimodal Automated Interpretability Agent ☆102 Updated 3 months ago
- Towards Understanding the Mixture-of-Experts Layer in Deep Learning ☆34 Updated 2 years ago
- PyTorch library for Active Fine-Tuning ☆96 Updated 4 months ago
- [ICCV 2025] Auto Interpretation Pipeline and many other functionalities for Multimodal SAE Analysis. ☆175 Updated 4 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆83 Updated last year
- Official PyTorch Implementation for Vision-Language Models Create Cross-Modal Task Representations, ICML 2025 ☆31 Updated 9 months ago
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ☆231 Updated 3 months ago
- ☆208 Updated 2 years ago
- ☆146 Updated last year
- Official Code for Paper: Beyond Matryoshka: Revisiting Sparse Coding for Adaptive Representation ☆133 Updated last month
- Official code for the ICML 2024 paper "The Entropy Enigma: Success and Failure of Entropy Minimization" ☆55 Updated last year
- [COLING'25] Exploring Concept Depth: How Large Language Models Acquire Knowledge at Different Layers? ☆82 Updated last year
- [ICML 2025] Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction ☆84 Updated 8 months ago
- Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… ☆119 Updated last year
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆46 Updated 3 months ago
- Holistic evaluation of multimodal foundation models ☆49 Updated last year
- ☆33 Updated last year
- ☆91 Updated last year
- Sparse and discrete interpretability tool for neural networks ☆64 Updated last year
- A curated list of Model Merging methods. ☆96 Updated 2 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆195 Updated last year
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆31 Updated last year
- Optimal Transport in the Big Data Era ☆116 Updated last year
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch ☆182 Updated 7 months ago
- PyTorch implementation of the PEER block from the paper, Mixture of A Million Experts, by Xu Owen He at DeepMind ☆134 Updated 3 months ago
- Universal Neurons in GPT2 Language Models ☆30 Updated last year
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆42 Updated 2 weeks ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al (NeurIPS 2024) ☆198 Updated last year
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆237 Updated 3 months ago
- ☆51 Updated 2 years ago