lucidrains / soft-moe-pytorch
Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch
☆256 · Updated 8 months ago
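For context, the core idea behind Soft MoE (from "From Sparse to Soft Mixtures of Experts") is that every token is softly dispatched to every expert slot via learned routing weights, rather than being hard-routed to a top-k subset of experts. Below is a minimal PyTorch sketch of that dispatch/combine step; the `SoftMoESketch` class, its constructor arguments, and the per-slot MLP experts are illustrative assumptions, not the actual API of soft-moe-pytorch (see the repo's README for the real interface).

```python
import torch
import torch.nn as nn

class SoftMoESketch(nn.Module):
    """Illustrative Soft MoE dispatch/combine sketch (not the repo's actual API)."""
    def __init__(self, dim, num_experts=4, slots_per_expert=1):
        super().__init__()
        self.num_experts = num_experts
        self.slots_per_expert = slots_per_expert
        # one learned slot embedding per (expert, slot) pair
        self.slot_embeds = nn.Parameter(torch.randn(num_experts * slots_per_expert, dim))
        # simple MLP experts standing in for whatever expert network is used
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x):                                    # x: (batch, tokens, dim)
        logits = torch.einsum('btd,sd->bts', x, self.slot_embeds)  # (batch, tokens, slots)
        dispatch = logits.softmax(dim=1)   # normalize over tokens: each slot is a convex mix of tokens
        combine = logits.softmax(dim=2)    # normalize over slots: each token is a convex mix of slot outputs
        slots = torch.einsum('bts,btd->bsd', dispatch, x)          # (batch, slots, dim)
        # each expert processes its own group of slots
        slots = slots.view(x.shape[0], self.num_experts, self.slots_per_expert, -1)
        expert_out = torch.stack([exp(slots[:, i]) for i, exp in enumerate(self.experts)], dim=1)
        expert_out = expert_out.view(x.shape[0], -1, x.shape[-1])  # (batch, slots, dim)
        return torch.einsum('bts,bsd->btd', combine, expert_out)   # (batch, tokens, dim)

# usage
moe = SoftMoESketch(dim=512, num_experts=4, slots_per_expert=2)
out = moe(torch.randn(2, 1024, 512))  # -> (2, 1024, 512)
```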
Alternatives and similar repositories for soft-moe-pytorch:
Users interested in soft-moe-pytorch are comparing it to the libraries listed below.
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch ☆297 · Updated 7 months ago
- ☆171 · Updated 3 months ago
- When do we not need larger vision models? ☆354 · Updated last month
- Implementation of Infini-Transformer in Pytorch ☆107 · Updated 2 weeks ago
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" ☆118 · Updated 5 months ago
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆69 · Updated last year
- [ECCV 2024] Official PyTorch implementation of RoPE-ViT "Rotary Position Embedding for Vision Transformer" ☆266 · Updated 3 weeks ago
- Some preliminary explorations of Mamba's context scaling. ☆206 · Updated 11 months ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆112 · Updated 3 months ago
- Official code for "TOAST: Transfer Learning via Attention Steering" ☆186 · Updated last year
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ☆205 · Updated 4 months ago
- A framework for merging models solving different tasks with different initializations into one multi-task model without any additional tr… ☆291 · Updated 11 months ago
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ☆209 · Updated 7 months ago
- Reading list for research topics in state-space models ☆253 · Updated 3 weeks ago
- Implementation of Zorro, Masked Multimodal Transformer, in Pytorch ☆94 · Updated last year
- When it comes to optimizers, it's always better to be safe than sorry ☆157 · Updated this week
- Pytorch implementation of the PEER block from the paper, Mixture of A Million Experts, by Xu Owen He at Deepmind ☆115 · Updated 4 months ago
- ☆240 · Updated 4 months ago
- The official CLIP training codebase of Inf-CL: "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for Contrastive Loss". A su… ☆218 · Updated this week
- Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… ☆92 · Updated 4 months ago
- ☆185 · Updated last year
- Implementation of 🌻 Mirasol, SOTA Multimodal Autoregressive model out of Google Deepmind, in Pytorch ☆88 · Updated last year
- ☆180 · Updated this week
- Implementation of a multimodal diffusion transformer in Pytorch ☆99 · Updated 6 months ago
- Language Quantized AutoEncoders ☆95 · Updated last year
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆188 · Updated 2 weeks ago
- Official code for our CVPR'22 paper “Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space” ☆247 · Updated last year
- ☆98 · Updated 10 months ago
- Code release for "Dropout Reduces Underfitting" ☆311 · Updated last year
- Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in Pytorch ☆403 · Updated last week