UNITES-Lab / MC-SMoE
[ICLR'24 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy"
☆103 · Jun 20, 2025 · Updated 7 months ago
Alternatives and similar repositories for MC-SMoE
Users interested in MC-SMoE are comparing it to the libraries listed below.
- Official code for the paper "HEXA-MoE: Efficient and Heterogeneous-Aware MoE Acceleration with Zero Computation Redundancy"☆15Mar 6, 2025Updated 11 months ago
- ☆21May 2, 2025Updated 9 months ago
- Official code for the paper "Examining Post-Training Quantization for Mixture-of-Experts: A Benchmark"☆29Jun 30, 2025Updated 7 months ago
- [NAACL'25 🏆 SAC Award] Official code for "Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert…☆14Feb 4, 2025Updated last year
- ☆30Sep 28, 2023Updated 2 years ago
- Scaling Sparse Fine-Tuning to Large Language Models☆18Jan 31, 2024Updated 2 years ago
- Official PyTorch implementation of CD-MOE☆12Mar 29, 2025Updated 10 months ago
- [ICML 2024] Code for the paper "MoE-RBench: Towards Building Reliable Language Models with Sparse Mixture-of-Experts"☆10Jul 1, 2024Updated last year
- [ICML'25] Official code for the paper "Occult: Optimizing Collaborative Communication across Experts for Accelerated Parallel MoE Training an…" ☆12 · Apr 17, 2025 · Updated 9 months ago
- An extension of GaLore that performs Natural Gradient Descent in a low-rank subspace ☆18 · Oct 21, 2024 · Updated last year
- ☆30 · Jul 22, 2024 · Updated last year
- [ECCV 2024] Code for the paper "Mew: Multiplexed Immunofluorescence Image Analysis through an Efficient Multiplex Network" ☆17 · Jul 27, 2024 · Updated last year
- ☆18 · Aug 19, 2024 · Updated last year
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Apr 21, 2025 · Updated 9 months ago
- ☆273 · Oct 31, 2023 · Updated 2 years ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆201 · Jul 17, 2024 · Updated last year
- The official implementation of the paper "Towards Efficient Mixture of Experts: A Holistic Study of Compression Techniques" (TMLR) ☆88 · Mar 19, 2025 · Updated 10 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆88 · Oct 22, 2024 · Updated last year
- ☆23 · Nov 26, 2024 · Updated last year
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model…" ☆68 · Mar 7, 2024 · Updated last year
- ☆63 · Oct 17, 2023 · Updated 2 years ago
- ☆209 · Feb 3, 2024 · Updated 2 years ago
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆473 · Apr 21, 2024 · Updated last year
- ☆15 · Sep 11, 2025 · Updated 5 months ago
- ☆12 · May 20, 2025 · Updated 8 months ago
- [ICLR 2026] Official repo for "FrameThinker: Learning to Think with Long Videos via Multi-Turn Frame Spotlighting" ☆37 · Oct 9, 2025 · Updated 4 months ago
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models ☆114 · May 24, 2024 · Updated last year
- Official implementation for "Mixture of In-Context Experts Enhance LLMs’ Awareness of Long Contexts" (NeurIPS 2024) ☆13 · Jan 7, 2025 · Updated last year
- ☆18 · Apr 16, 2025 · Updated 9 months ago
- The official repository for the experiments included in the paper titled "Patch-level Routing in Mixture-of-Experts is Provably Sample-ef…" ☆13 · May 22, 2023 · Updated 2 years ago
- Serializes RDF from a SPARQL endpoint to JSON-LD documents ☆10 · Sep 11, 2018 · Updated 7 years ago
- ☆14 · Mar 11, 2025 · Updated 11 months ago
- The official implementation of "HybridNorm: Towards Stable and Efficient Transformer Training via Hybrid Normalization" ☆18 · Mar 7, 2025 · Updated 11 months ago
- ☆208 · Jan 14, 2026 · Updated 3 weeks ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆124 · Mar 28, 2025 · Updated 10 months ago
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆55 · Oct 29, 2024 · Updated last year
- Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" ☆80 · Jul 7, 2025 · Updated 7 months ago
- Longitudinal Evaluation of LLMs via Data Compression ☆33 · May 29, 2024 · Updated last year
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,003 · Dec 6, 2024 · Updated last year