facebookresearch / Mixture-of-Transformers
Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models. TMLR 2025.
☆121 · Updated 2 months ago
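For orientation, here is a minimal PyTorch sketch of the idea the paper describes: every non-embedding parameter (attention projections, FFNs, layer norms) gets a per-modality copy, while self-attention still runs over the full mixed-modality token sequence. This is an illustrative sketch, not the official implementation; the class, method, and parameter names (`MoTBlock`, `_route`, `n_modalities`) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoTBlock(nn.Module):
    """Sketch of a Mixture-of-Transformers block (hypothetical, unofficial)."""

    def __init__(self, dim: int, n_heads: int, n_modalities: int):
        super().__init__()
        self.n_heads = n_heads
        # One copy of every non-embedding parameter per modality.
        self.qkv = nn.ModuleList(nn.Linear(dim, 3 * dim) for _ in range(n_modalities))
        self.out = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_modalities))
        self.ffn = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_modalities)
        )
        self.norm1 = nn.ModuleList(nn.LayerNorm(dim) for _ in range(n_modalities))
        self.norm2 = nn.ModuleList(nn.LayerNorm(dim) for _ in range(n_modalities))

    def _route(self, mods, x, modality):
        # Apply each modality's module to its own tokens; scatter results back.
        y = torch.empty_like(x)
        for m, mod in enumerate(mods):
            mask = modality == m
            y[mask] = mod(x[mask])
        return y

    def forward(self, x, modality):
        # x: (batch, seq, dim); modality: (batch, seq) tensor of modality ids.
        h = self._route(self.norm1, x, modality)
        q, k, v = self._route(self.qkv, h, modality).chunk(3, dim=-1)
        b, s, d = q.shape
        shape = (b, s, self.n_heads, d // self.n_heads)
        q, k, v = (t.view(shape).transpose(1, 2) for t in (q, k, v))
        # Attention itself is global: tokens of all modalities attend jointly.
        a = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        x = x + self._route(self.out, a.transpose(1, 2).reshape(b, s, d), modality)
        return x + self._route(self.ffn, self._route(self.norm2, x, modality), modality)

# Usage: a mixed text/image sequence routed by per-token modality ids.
block = MoTBlock(dim=512, n_heads=8, n_modalities=2)
x = torch.randn(2, 16, 512)
modality = torch.randint(0, 2, (2, 16))  # 0 = text token, 1 = image token
out = block(x, modality)                 # (2, 16, 512)
```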
Alternatives and similar repositories for Mixture-of-Transformers
Users interested in Mixture-of-Transformers are comparing it to the libraries listed below.
- The official GitHub repo for "Diffusion Language Models are Super Data Learners". ☆186 · Updated last week
- Python library to evaluate VLMs' robustness across diverse benchmarks ☆219 · Updated 3 weeks ago
- [ICML 2025] Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction ☆74 · Updated 5 months ago
- [ICCV 2025] Auto-interpretation pipeline and many other utilities for multimodal SAE analysis. ☆161 · Updated last month
- Official PyTorch implementation for "Vision-Language Models Create Cross-Modal Task Representations", ICML 2025 ☆31 · Updated 6 months ago
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆132 · Updated last week
- Official PyTorch implementation and models for the paper "Diffusion Beats Autoregressive in Data-Constrained Settings". We find diffusion mod… ☆105 · Updated 2 weeks ago
- ☆34 · Updated 6 months ago
- [ICLR 2025] Source code for the paper "A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegr… ☆77 · Updated 11 months ago
- Code and weights for the paper "Cluster and Predict Latent Patches for Improved Masked Image Modeling" ☆123 · Updated 7 months ago
- [NeurIPS 2025 Oral] Exploring Diffusion Transformer Designs via Grafting ☆61 · Updated 4 months ago
- Official implementation of Phi-Mamba, a MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… ☆116 · Updated last year
- Implementation of TiTok, proposed by ByteDance in "An Image is Worth 32 Tokens for Reconstruction and Generation" ☆181 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable), so it combines the best… ☆53 · Updated 7 months ago
- ☆20 · Updated 2 months ago
- ☆122 · Updated last month
- ☆106 · Updated 7 months ago
- Code for the paper "Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers" [ICCV 2025] ☆93 · Updated 3 months ago
- Geometric-Mean Policy Optimization ☆90 · Updated this week
- Official implementation of LaViDa: A Large Diffusion Language Model for Multimodal Understanding ☆165 · Updated 3 weeks ago
- Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis" ☆56 · Updated last year
- Implementation of a multimodal diffusion transformer in PyTorch ☆106 · Updated last year
- ☆68 · Updated 2 months ago
- GPU-optimized framework for training diffusion language models at any scale. The backend of Quokka, Super Data Learners, and OpenMoE 2 tr… ☆259 · Updated this week
- Official implementation of the paper "ZClip: Adaptive Spike Mitigation for LLM Pre-Training" ☆138 · Updated last month
- LL3M: Large Language and Multi-Modal Model in JAX ☆74 · Updated last year
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch ☆179 · Updated 4 months ago
- ☆87 · Updated last year
- Large multi-modal models (L3M) pre-training. ☆221 · Updated last month
- Matryoshka Multimodal Models ☆115 · Updated 9 months ago