MadryLab / modelcomponents
Decomposing and Editing Predictions by Modeling Model Computation
☆97 · Updated 3 months ago
Related projects:
- Towards Understanding the Mixture-of-Experts Layer in Deep Learning ☆19 · Updated 9 months ago
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ☆186 · Updated 3 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆104 · Updated 6 months ago
- Official implementation of MAIA, a Multimodal Automated Interpretability Agent ☆56 · Updated last month
- Official implementation of Phi-Mamba, a MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… ☆61 · Updated this week
- Kolmogorov-Arnold Transformer: a PyTorch implementation with CUDA kernel ☆221 · Updated this week
- A curated reading list of research in Adaptive Computation, Dynamic Compute & Mixture of Experts (MoE). Inference time compute as seen in… ☆123 · Updated last month
- Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling ☆153 · Updated last week
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" ☆94 · Updated last month
- A curated list of Model Merging methods ☆71 · Updated this week
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆62 · Updated 11 months ago
- Official code for the ICML 2024 paper "The Entropy Enigma: Success and Failure of Entropy Minimization" ☆44 · Updated 3 months ago
- Understand and test language model architectures on synthetic tasks ☆156 · Updated 4 months ago
- Code for reproducing the paper "Not All Language Model Features Are Linear" ☆57 · Updated last week
- PyTorch implementation of "Jamba: A Hybrid Transformer-Mamba Language Model" ☆120 · Updated last week
- Official repository for "HyperZ⋅Z⋅W Operator Connects Slow-Fast Networks for Full Context Interaction" ☆29 · Updated this week
- Implementation of 🌻 Mirasol, a SOTA multimodal autoregressive model out of Google DeepMind, in PyTorch ☆87 · Updated 8 months ago
- Awesome list of papers that extend Mamba to various applications ☆124 · Updated 2 weeks ago
- 94% on CIFAR-10 in 3.09 seconds 💨 96% in 27 seconds ☆127 · Updated last month
- Official code for "TOAST: Transfer Learning via Attention Steering" ☆186 · Updated last year
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) ☆119 · Updated 11 months ago
- Replicating and dissecting the git-re-basin project in one-click-replication Colabs ☆36 · Updated 2 years ago
- Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" ☆130 · Updated this week