[ACL 2026 Main] Analytical FFN-to-MoE Restructuring via Activation Pattern Analysis
☆38 · Updated Apr 24, 2026
Alternatives and similar repositories for CMoE
Users that are interested in CMoE are comparing it to the libraries listed below.
- [ASICON'25] User-friendly lithography simulation engine for full-chip scale mask optimization ☆42 · Updated Dec 17, 2025
- MoE-Visualizer is a tool designed to visualize the selection of experts in Mixture-of-Experts (MoE) models. ☆16 · Updated Apr 8, 2025
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆35 · Updated Aug 14, 2024
- ☆21 · Updated Oct 2, 2024
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆61 · Updated Feb 7, 2025
- The code of "M4: Multi-Proxy Multi-Gate Mixture of Experts Network for Multiple Instance Learning in Histopathology Image Analysis" ☆14 · Updated Mar 31, 2025
- [NAACL'25 🏆 SAC Award] Official code for "Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert… ☆16 · Updated Feb 4, 2025
- CRAI is a multimodal large language model based on the Mixture of Experts (MoE) architecture, supporting text and image cross-modal tasks… ☆16 · Updated Apr 29, 2025
- The code for "AttentionPredictor: Temporal Pattern Matters for Efficient LLM Inference", Qingyue Yang, Jie Wang, Xing Li, Zhihai Wang, Ch… ☆28 · Updated Jul 15, 2025
- Scaling Laws for Mixture of Experts Models ☆15 · Updated Feb 25, 2025
- [ICML 2025] Code for "R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts" ☆19 · Updated Mar 10, 2025
- ☆35 · Updated Nov 6, 2024
- ☆13 · Updated Feb 17, 2025
- Code for "DAMEX: Dataset-aware Mixture-of-Experts for visual understanding of mixture-of-datasets", accepted at NeurIPS 2023 (Main confer… ☆28 · Updated Mar 29, 2024
- Port of Facebook's LLaMA model in C/C++ ☆13 · Updated Mar 19, 2023
- ☆58 · Updated Mar 30, 2026
- 🎓 Automatically updated collection of LLM inference systems papers, refreshed every 12 hours via GitHub Actions ☆12 · Updated this week
- [ICLR26] Beyond Real: Imaginary Extension of Rotary Position Embeddings for Long-Context LLMs ☆33 · Updated Dec 9, 2025
- Training-free, post-training, efficient sub-quadratic complexity attention, implemented with OpenAI Triton ☆150 · Updated Mar 31, 2026
- This is a fork of SGLang for hip-attention integration. Please refer to hip-attention for details. ☆18 · Updated Mar 31, 2026
- Low-Rank Llama Custom Training ☆23 · Updated Mar 27, 2024
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning" ☆68 · Updated Aug 15, 2025
- Mixture-of-Experts Multimodal Variational Autoencoder ☆15 · Updated Jul 3, 2025
- Official implementation of RMoE (Layerwise Recurrent Router for Mixture-of-Experts) ☆30 · Updated Aug 4, 2024
- [ICLR 2025] Drop-Upcycling: Training Sparse Mixture of Experts with Partial Re-initialization ☆24 · Updated Oct 5, 2025
- The code for "MoPE: Mixture of Prefix Experts for Zero-Shot Dialogue State Tracking" ☆19 · Updated Jan 25, 2025
- Milk-V Duo: Internet access through a USB RNDIS connection to the host machine ☆16 · Updated Jan 11, 2024
- This repository contains the code and released models for the paper Segmenting Text and Learning Their Rewards for Improved RLHF in Langu… ☆19 · Updated Jan 8, 2025
- Official code for "Efficient Residual Learning with Mixture-of-Experts for Universal Dexterous Grasping" (ICLR 2025) ☆28 · Updated Oct 25, 2025
- TokenSim is a tool for simulating the behavior of large language models (LLMs) in a distributed environment. ☆22 · Updated Sep 20, 2025
- ☆145 · Updated Jul 21, 2024
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ☆296 · Updated May 1, 2025
- ☆33 · Updated Nov 11, 2024
- LibMoE: A Library for Comprehensive Benchmarking of Mixture of Experts in Large Language Models ☆49 · Updated this week
- [ACL Findings 2026] Official Implementation of "FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acc… ☆31 · Updated Apr 14, 2026
- Prototype of MegaScale-Infer: Serving Mixture-of-Experts at Scale with Disaggregated Expert Parallelism ☆30 · Updated Apr 4, 2025
- ☆13 · Updated Apr 9, 2026
- Unofficial wheels for some machine-learning Python libraries, for the Nvidia Jetson Nano ☆17 · Updated Aug 24, 2021
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ☆23 · Updated Nov 11, 2025