Inference Code for Paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models"
☆70 · updated Jul 30, 2024
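The paper's dynamic routing idea is, roughly, threshold-based ("top-p") expert selection: instead of sending every token to a fixed top-k set of experts, each token activates the smallest set of experts whose cumulative router probability crosses a threshold, so tokens the router is less certain about recruit more experts. Below is a minimal sketch of that selection rule; the function name, the threshold value, and the final renormalization step are illustrative assumptions, not this repository's actual API.

```python
import torch
import torch.nn.functional as F

def dynamic_top_p_routing(router_logits: torch.Tensor, p: float = 0.4):
    """Per token, select the smallest expert set whose cumulative router
    probability exceeds p (hypothetical helper, not the repo's API)."""
    probs = F.softmax(router_logits, dim=-1)              # (n_tokens, n_experts)
    sorted_probs, sorted_idx = probs.sort(dim=-1, descending=True)
    # Cumulative probability *before* each expert; keep experts while it is < p,
    # so the top-1 expert is always kept and flatter distributions keep more.
    cum_before = sorted_probs.cumsum(dim=-1) - sorted_probs
    keep = torch.zeros_like(probs, dtype=torch.bool).scatter(
        -1, sorted_idx, cum_before < p
    )
    weights = probs * keep                                # zero out unused experts
    weights = weights / weights.sum(dim=-1, keepdim=True) # renormalizing is one option
    return weights, keep

# 4 tokens routed over 8 experts; the number of active experts varies per token.
logits = torch.randn(4, 8)
weights, mask = dynamic_top_p_routing(logits, p=0.4)
print(mask.sum(dim=-1))  # e.g. tensor([1, 3, 2, 1])
```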
Alternatives and similar repositories for Dynamic_MoE
Users interested in Dynamic_MoE are comparing it to the repositories listed below.
- Entropy-Driven GRPO with Guided Error Correction for Advantage Diversity · ☆22 · updated Aug 28, 2025
- ☆17 · updated Jun 11, 2025
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models · ☆61 · updated Feb 7, 2025
- Efficient Mixture of Experts for LLM Paper List · ☆175 · updated Sep 28, 2025
- Self-reproduction code for the paper "Reducing Transformer Key-Value Cache Size with Cross-Layer Attention" (MIT CSAIL) · ☆17 · updated May 24, 2024
- ☆134 · updated Jun 6, 2025
- Code for the EMNLP 2022 paper "UniRPG: Unified Discrete Reasoning over Table and Text as Program Generation" · ☆10 · updated Apr 30, 2023
- [NAACL'25 🏆 SAC Award] Official code for "Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert…" · ☆16 · updated Feb 4, 2025
- [ACL'24] MC^2: A Multilingual Corpus of Minority Languages in China (Tibetan, Uyghur, Kazakh, and Mongolian) · ☆32 · updated Jan 17, 2026
- Run the tokenizer in parallel for substantial speedups · ☆20 · updated Mar 21, 2024
- [ACL 2025 Best Paper] Language Models Resist Alignment · ☆47 · updated Jun 11, 2025
- Optimizing Anytime Reasoning via Budget Relative Policy Optimization · ☆54 · updated Jul 15, 2025
- ☆12 · updated May 20, 2025
- [TKDE'25] The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models" · ☆487 · updated Jul 23, 2025
- ☆19 · updated Nov 5, 2024
- [ECCV 2024] Official PyTorch implementation of "Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts" · ☆47 · updated Jul 4, 2024
- ☆17 · updated May 17, 2022
- SysBench: Can Large Language Models Follow System Messages? · ☆40 · updated Sep 4, 2024
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) · ☆1,000 · updated Dec 6, 2024
- [AAAI 2024] History Matters: Temporal Knowledge Editing in Large Language Model · ☆14 · updated Dec 17, 2023
- 🍼 Official implementation of "Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts" · ☆41 · updated Sep 29, 2024
- [EMNLP 2025] Circuit-Aware Editing Enables Generalizable Knowledge Learners · ☆19 · updated Nov 17, 2025
- ☆29 · updated May 4, 2024
- Expanding linear RNN state-transition matrix eigenvalues to include negatives improves state-tracking tasks and language modeling without… · ☆21 · updated Mar 15, 2025
- LoPA: Scaling dLLM Inference via Lookahead Parallel Decoding · ☆36 · updated this week
- Fully open reproduction of DeepSeek-R1 · ☆11 · updated Mar 24, 2025
- ☆18 · updated Jun 23, 2025
- Code for the paper "Rehearsal-free Continual Language Learning via Efficient Parameter Isolation" · ☆12 · updated May 16, 2023
- Source code for SWIFT, an efficient reward model · ☆20 · updated Jan 13, 2026
- [NeurIPS 2025] Watch and Listen: Understanding Audio-Visual-Speech Moments with Multimodal LLM · ☆23 · updated Feb 10, 2026
- Code release for the NeurIPS 2023 paper "How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model" · ☆17 · updated Dec 6, 2024
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models · ☆230 · updated Nov 4, 2025
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" · ☆48 · updated Jan 17, 2024
- Your finetuned model is back to its original safety standards faster than you can say "SafetyLock"! · ☆11 · updated Oct 16, 2024
- A family of open-source Mixture-of-Experts (MoE) Large Language Models · ☆1,675 · updated Mar 8, 2024
- ☆16 · updated Apr 14, 2021
- Official code implementation for our paper "Direct Inversion: Optimization-Free Text-Driven Real Image Editing with Diffusion Models" · ☆27 · updated Nov 18, 2022
- Paper list for "Authorship Attribution in the Era of Large Language Models: Problems, Methodologies, and Challenges" (SIGKDD Explorations) · ☆19 · updated Apr 5, 2026
- Implementation of an RNN model for the DSTC2 task · ☆16 · updated Jan 25, 2019