[ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM.
☆109, updated Dec 20, 2024
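The technique named in the title, ReLU routing, can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not taken from the ReMoE/Megatron-LM code; class and variable names are my own): a conventional MoE router's TopK+softmax gate is replaced by an elementwise ReLU, so every routing weight is a differentiable function of the input and zeros in the output naturally give sparsity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReLURouter(nn.Module):
    """Hypothetical sketch of a ReLU-gated MoE router."""
    def __init__(self, hidden_size: int, num_experts: int):
        super().__init__()
        self.gate = nn.Linear(hidden_size, num_experts, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Non-negative routing weights; a weight of exactly 0 means the token
        # skips that expert, so sparsity comes from the activation itself
        # (the paper additionally controls it with regularization).
        return F.relu(self.gate(x))

router = ReLURouter(hidden_size=16, num_experts=8)
tokens = torch.randn(4, 16)           # (num_tokens, hidden_size)
weights = router(tokens)              # (num_tokens, num_experts)
print((weights > 0).float().mean())   # fraction of active token-expert pairs
```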
Alternatives and similar repositories for ReMoE
Users interested in ReMoE are comparing it to the libraries listed below.
- Official implementation for "Pruning Large Language Models with Semi-Structural Adaptive Sparse Training" (AAAI 2025) (☆18, updated Jul 1, 2025)
- ☆19, updated Nov 5, 2024
- PyTorch implementation of "Oscillation-Reduced MXFP4 Training for Vision Transformers" on DeiT model pre-training (☆37, updated Jun 20, 2025)
- Solving Token Gradient Conflict in Mixture-of-Experts for Large Vision-Language Model (☆13, updated Feb 11, 2025)
- [ICLR 2025] Making LLMs More Effective with Hierarchical Mixture of LoRA Experts (☆28, updated Oct 9, 2025)
- ☆66, updated Dec 2, 2024
- Inference code for the paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" (☆69, updated Jul 30, 2024)
- Measuring the Signal to Noise Ratio in Language Model Evaluation (☆29, updated Aug 19, 2025)
- ☆63, updated Jul 21, 2024
- An 8-/16-/32-/64-bit floating-point number family (☆16, updated Feb 4, 2022)
- Efficient 2:4 sparse training algorithms and implementations (☆59, updated Dec 8, 2024)
- Toy reproduction of the Auxiliary-Loss-Free Load Balancing Strategy for Mixture-of-Experts (☆31, updated Sep 1, 2024)
- ☆21, updated Oct 22, 2025
- [ICML 2025 Oral] Mixture of Lookup Experts (☆72, updated Dec 3, 2025)
- Sparse Backpropagation for Mixture-of-Expert Training (☆29, updated Jul 2, 2024)
- Official implementation of FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration (☆30, updated Nov 22, 2025)
- Open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" (☆30, updated Nov 12, 2024)
- [CoLM 2024] Official repository of MambaByte: Token-free Selective State Space Model (☆24, updated Oct 12, 2024)
- ☆91, updated Aug 18, 2024
- ☆19, updated Mar 25, 2025
- ☆96, updated Dec 6, 2024
- [ACL 2023 Findings] Emergent Modularity in Pre-trained Transformers (☆26, updated Jun 7, 2023)
- MoH: Multi-Head Attention as Mixture-of-Head Attention (☆305, updated Oct 29, 2024)
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… (☆56, updated Feb 28, 2023)
- ☆133, updated Jun 6, 2025
- ☆22, updated Dec 11, 2024
- [ICLR 2025] Drop-Upcycling: Training Sparse Mixture of Experts with Partial Re-initialization (☆25, updated Oct 5, 2025)
- ☆18, updated Aug 19, 2024
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models (☆156, updated Jul 9, 2025)
- Triton-based implementation of Sparse Mixture of Experts (☆270, updated Oct 3, 2025)
- Code for experiments on transformers using Markovian data (☆22, updated Nov 22, 2024)
- ☆146, updated Sep 12, 2025
- Code for "Accelerating Transformer Pre-training with 2:4 Sparsity" (☆27, updated Dec 8, 2024)
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models (☆227, updated Nov 4, 2025)
- LibMoE: A Library for Comprehensive Benchmarking Mixture of Experts in Large Language Models (☆46, updated Jan 10, 2026)
- Code accompanying the paper "Toward Guidance-Free AR Visual Generation via Condition Contrastive Alignment" (☆36, updated Feb 11, 2025)
- [NeurIPS 2025] Official implementation for the paper "Scaling Diffusion Transformers Efficiently via μP" (☆95, updated Nov 2, 2025)
- ☆19, updated Nov 4, 2025
- LongAttn: Selecting Long-context Training Data via Token-level Attention (☆15, updated Jul 16, 2025)