Implementation for the paper: CMoE: Fast Carving of Mixture-of-Experts for Efficient LLM Inference
☆37 · Mar 6, 2025 · Updated last year
Alternatives and similar repositories for CMoE
Users interested in CMoE are comparing it to the repositories listed below.
- MoE-Visualizer: a tool for visualizing expert selection in Mixture-of-Experts (MoE) models. ☆16 · Apr 8, 2025 · Updated last year
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated. ☆34 · Aug 14, 2024 · Updated last year
- ☆21 · Oct 2, 2024 · Updated last year
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models. ☆159 · Jul 9, 2025 · Updated 9 months ago
- SCOPE: Self-evolving Context Optimization via Prompt Evolution — a framework for automatic prompt optimization. ☆73 · Mar 26, 2026 · Updated 3 weeks ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models