MadryLab / modelcomponents
Decomposing and Editing Predictions by Modeling Model Computation
☆138 · Updated last year
Alternatives and similar repositories for modelcomponents
Users interested in modelcomponents are comparing it to the repositories listed below.
- Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… ☆110 · Updated 10 months ago
- Official implementation of MAIA, A Multimodal Automated Interpretability Agent ☆82 · Updated 3 weeks ago
- [ICCV 2025] Auto Interpretation Pipeline and many other functionalities for Multimodal SAE Analysis. ☆144 · Updated this week
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in Pytorch ☆177 · Updated 3 weeks ago
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆222 · Updated 2 months ago
- ☆142 · Updated last year
- [NeurIPS 2024] Code for the paper "Diffusion of Thoughts: Chain-of-Thought Reasoning in Diffusion Language Models" ☆169 · Updated 4 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆169 · Updated last year
- [COLING'25] Exploring Concept Depth: How Large Language Models Acquire Knowledge at Different Layers? ☆79 · Updated 5 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆77 · Updated 7 months ago
- ☆28 · Updated last year
- Official PyTorch Implementation for Vision-Language Models Create Cross-Modal Task Representations, ICML 2025 ☆27 · Updated 2 months ago
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ☆224 · Updated last year
- Towards Understanding the Mixture-of-Experts Layer in Deep Learning ☆31 · Updated last year
- ☆33 · Updated 6 months ago
- ☆183 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆99 · Updated 3 weeks ago
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆38 · Updated 8 months ago
- PyTorch library for Active Fine-Tuning ☆84 · Updated 4 months ago
- ☆82 · Updated 10 months ago
- Universal Neurons in GPT2 Language Models ☆30 · Updated last year
- Code for "Reasoning to Learn from Latent Thoughts" ☆112 · Updated 3 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆96 · Updated last week
- A curated list of Model Merging methods. ☆92 · Updated 10 months ago
- Sparse and discrete interpretability tool for neural networks ☆63 · Updated last year
- Official code for the ICML 2024 paper "The Entropy Enigma: Success and Failure of Entropy Minimization" ☆53 · Updated last year
- ☆572 · Updated 3 months ago
- Optimal Transport in the Big Data Era ☆106 · Updated 8 months ago
- ☆318 · Updated last month
- Unofficial Implementation of Selective Attention Transformer ☆17 · Updated 8 months ago