Inference Code for Paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models"
☆69 · Updated Jul 30, 2024
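The paper's core idea is top-p dynamic routing: instead of always activating a fixed top-k of experts per token, experts are added in order of routing probability until their cumulative probability reaches a threshold p, so harder tokens (flatter routing distributions) activate more experts. Below is a minimal sketch of that selection rule; the function name `dynamic_top_p_routing` and the tensor shapes are illustrative assumptions, not this repository's actual API.

```python
import torch
import torch.nn.functional as F

def dynamic_top_p_routing(router_logits: torch.Tensor, p: float = 0.5):
    """Pick, per token, the smallest expert set whose cumulative routing
    probability reaches the threshold p (illustrative sketch, not the repo API).

    router_logits: (num_tokens, num_experts)
    Returns: boolean activation mask and renormalized routing weights,
    both of shape (num_tokens, num_experts).
    """
    probs = F.softmax(router_logits, dim=-1)
    sorted_probs, sorted_idx = probs.sort(dim=-1, descending=True)
    # An expert is kept while the cumulative probability *before* it is
    # still below p; the top-ranked expert is therefore always kept.
    cum_before = sorted_probs.cumsum(dim=-1) - sorted_probs
    keep_sorted = cum_before < p
    mask = torch.zeros_like(probs, dtype=torch.bool).scatter(-1, sorted_idx, keep_sorted)
    weights = torch.where(mask, probs, torch.zeros_like(probs))
    weights = weights / weights.sum(dim=-1, keepdim=True)
    return mask, weights

# Flatter routing distributions (harder tokens) activate more experts.
logits = torch.randn(4, 8)
mask, weights = dynamic_top_p_routing(logits, p=0.5)
print(mask.sum(dim=-1))  # number of experts activated, varies per token
```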
Alternatives and similar repositories for Dynamic_MoE
Users interested in Dynamic_MoE are comparing it to the libraries listed below.
- ☆15 · Updated Oct 19, 2024
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models · ☆155 · Updated Jul 9, 2025
- Entropy-Driven GRPO with Guided Error Correction for Advantage Diversity · ☆22 · Updated Aug 28, 2025
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM · ☆109 · Updated Dec 20, 2024
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models · ☆61 · Updated Feb 7, 2025
- A paper list on efficient Mixture-of-Experts methods for LLMs · ☆168 · Updated Sep 28, 2025
- Self-reproduction code for the paper "Reducing Transformer Key-Value Cache Size with Cross-Layer Attention" (MIT CSAIL) · ☆17 · Updated May 24, 2024
- [ICLR 2024] Official PyTorch implementation of "Denoising Task Routing for Diffusion Models" · ☆25 · Updated Feb 19, 2024
- ☆133 · Updated Jun 6, 2025
- [NAACL'25 🏆 SAC Award] Official code for "Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert…" · ☆16 · Updated Feb 4, 2025
- [ACL'24] MC^2: A Multilingual Corpus of Minority Languages in China (Tibetan, Uyghur, Kazakh, and Mongolian) · ☆31 · Updated Jan 17, 2026
- ☆12 · Updated May 20, 2025
- [TKDE'25] The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models" · ☆485 · Updated Jul 23, 2025
- A chatbot based on semantic understanding and knowledge graphs · ☆28 · Updated Apr 19, 2019
- ☆17 · Updated May 17, 2022
- SysBench: Can Large Language Models Follow System Messages? · ☆39 · Updated Sep 4, 2024
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) · ☆1,002 · Updated Dec 6, 2024
- Code for SIGDial 2019 Best Paper: Structured Fusion Networks for Dialog (https://arxiv.org/abs/1907.10016) · ☆30 · Updated Aug 19, 2019
- [ICLR 2025] MoE++: Accelerating Mixture-of-Experts Methods with Zero-Computation Experts · ☆264 · Updated Oct 16, 2024
- ☆29 · Updated May 4, 2024
- 🍼 Official implementation of "Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts" · ☆41 · Updated Sep 29, 2024
- [EMNLP 2025] Circuit-Aware Editing Enables Generalizable Knowledge Learners · ☆19 · Updated Nov 17, 2025
- Expanding linear RNN state-transition matrix eigenvalues to include negatives improves state-tracking tasks and language modeling without… · ☆21 · Updated Mar 15, 2025
- Implementation of FuseMoE for FlexiModal Fusion, NeurIPS'24 · ☆33 · Updated Oct 12, 2025
- LoPA: Scaling dLLM Inference via Lookahead Parallel Decoding · ☆36 · Updated Jan 16, 2026
- Fully open reproduction of DeepSeek-R1 · ☆11 · Updated Mar 24, 2025
- ☆14 · Updated Sep 1, 2025
- Code and dataset for the NAACL 2022 paper "CoSIm: Commonsense Reasoning for Counterfactual Scene Imagination" by Hyounghun Kim, Abhay Zala, Mohi… · ☆16 · Updated Nov 26, 2022
- Code for the paper: Rehearsal-free Continual Language Learning via Efficient Parameter Isolation · ☆12 · Updated May 16, 2023
- Plug-and-Play Document Modules for Pre-trained Models · ☆25 · Updated May 28, 2023
- A fast MoE implementation for PyTorch · ☆1,846 · Updated Feb 10, 2025
- Source code for SWIFT, an efficient reward model · ☆19 · Updated Jan 13, 2026
- Dynamic planning, hybrid models, hierarchical active inference, tool use · ☆13 · Updated Jun 13, 2025
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models · ☆227 · Updated Nov 4, 2025
- ☆17 · Updated Jun 14, 2024
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" · ☆48 · Updated Jan 17, 2024
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models · ☆1,667 · Updated Mar 8, 2024
- Your finetuned model's back to its original safety standards faster than you can say "SafetyLock"! · ☆11 · Updated Oct 16, 2024
- Code for the publication "Distilled Replay: Overcoming Forgetting through Synthetic Examples" · ☆12 · Updated Apr 1, 2021