[ICLR2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM.
☆106 (updated Dec 20, 2024)
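As context for the listing, here is a minimal, hypothetical sketch of the ReLU-routing idea that ReMoE's title refers to: instead of a softmax-plus-TopK router, expert gates come from a ReLU, so unselected experts get an exact zero gate while routing stays differentiable. This is not the repository's Megatron-LM implementation; the class name `ReLURouter` and the L1 sparsity term are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReLURouter(nn.Module):
    """Illustrative ReLU router (not the official ReMoE/Megatron-LM code).

    Gate values are ReLU(x @ W): experts whose gate is exactly zero are skipped,
    and the mapping stays differentiable, unlike a hard TopK selection.
    """

    def __init__(self, hidden_size: int, num_experts: int):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(hidden_size, num_experts))
        nn.init.normal_(self.weight, std=0.02)

    def forward(self, x: torch.Tensor):
        # x: [num_tokens, hidden_size] -> gates: [num_tokens, num_experts]
        gates = F.relu(x @ self.weight)
        # A simple L1-style penalty that a training loop could scale to push the
        # gates toward a target sparsity (coefficient scheduling is assumed here).
        l1_penalty = gates.abs().mean()
        return gates, l1_penalty


# Tiny usage example with made-up sizes.
router = ReLURouter(hidden_size=64, num_experts=8)
tokens = torch.randn(4, 64)
gates, penalty = router(tokens)
print(gates.shape, float(penalty))
```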
Alternatives and similar repositories for ReMoE
Users interested in ReMoE are comparing it to the libraries listed below.
- ☆19 (updated Nov 5, 2024)
- ☆21 (updated Oct 22, 2025)
- Official Implementation of FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration (☆29, updated Nov 22, 2025)
- Official implementation for "Pruning Large Language Models with Semi-Structural Adaptive Sparse Training" (AAAI 2025) (☆18, updated Jul 1, 2025)
- Measuring the Signal to Noise Ratio in Language Model Evaluation (☆28, updated Aug 19, 2025)
- ☆129 (updated Jun 6, 2025)
- ☆91 (updated Aug 18, 2024)
- PyTorch implementation of "Oscillation-Reduced MXFP4 Training for Vision Transformers" on DeiT model pre-training (☆36, updated Jun 20, 2025)
- ☆19 (updated Mar 25, 2025)
- Gemstones: A Model Suite for Multi-Faceted Scaling Laws (NeurIPS 2025) (☆33, updated Sep 28, 2025)
- Code for the ACL 2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts" (☆51, updated Jul 17, 2022)
- Sparse Backpropagation for Mixture-of-Expert Training (☆29, updated Jul 2, 2024)
- ☆16 (updated Jul 23, 2024)
- [ICML 2025 Oral] Mixture of Lookup Experts (☆72, updated Dec 3, 2025)
- MoH: Multi-Head Attention as Mixture-of-Head Attention (☆303, updated Oct 29, 2024)
- Efficient 2:4 sparse training algorithms and implementations (☆59, updated Dec 8, 2024)
- [ICLR 2025] Drop-Upcycling: Training Sparse Mixture of Experts with Partial Re-initialization (☆25, updated Oct 5, 2025)
- ☆65 (updated Dec 2, 2024)
- [ICLR 2025] Making LLMs More Effective with Hierarchical Mixture of LoRA Experts (☆28, updated Oct 9, 2025)
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" (☆30, updated Nov 12, 2024)
- ☆63 (updated Jul 21, 2024)
- ☆20 (updated Nov 4, 2025)
- Inference code for the paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" (☆67, updated Jul 30, 2024)
- Un-*** 50-billion multimodality dataset (☆23, updated Sep 14, 2022)
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning (☆140, updated this week)
- ☆145 (updated Sep 12, 2025)
- ✨✨ The Curse of Multi-Modalities (CMM): Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio (☆52, updated Jul 11, 2025)
- LibMoE: A Library for Comprehensive Benchmarking of Mixture of Experts in Large Language Models (☆46, updated Jan 10, 2026)
- ☆66 (updated Jul 8, 2025)
- [CoLM 24] Official Repository of MambaByte: Token-free Selective State Space Model (☆24, updated Oct 12, 2024)
- [CVPR 2026] Official Implementation of "Interact2Ar: Full-Body Human-Human Interaction Generation via Autoregressive Diffusion Models" (☆15, updated Feb 23, 2026)
- ☆14 (updated Jan 24, 2025)
- LongAttn: Selecting Long-context Training Data via Token-level Attention (☆15, updated Jul 16, 2025)
- Learning to Skip the Middle Layers of Transformers (☆17, updated Aug 7, 2025)
- Official PyTorch implementation of CD-MOE (☆12, updated Mar 29, 2025)
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers (☆75, updated Jun 23, 2025)
- ☆96 (updated Dec 6, 2024)
- ☆60 (updated Jan 12, 2026)
- [ACL 2023 Findings] Emergent Modularity in Pre-trained Transformers (☆26, updated Jun 7, 2023)