Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs
☆23 · Nov 11, 2025 · Updated 5 months ago
Alternatives and similar repositories for EEP
Users that are interested in EEP are comparing it to the libraries listed below.
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better ☆16 · Feb 15, 2025 · Updated last year
- Official PyTorch implementation of CD-MOE ☆12 · Mar 18, 2026 · Updated 3 weeks ago
- The official code of the paper "OMS-DPM: Optimizing Model Schedule for Diffusion Probabilistic Model", accepted at ICML 2023 ☆24 · Oct 11, 2023 · Updated 2 years ago
- [NeurIPS 2024] Can LLMs Learn by Teaching for Better Reasoning? A Preliminary Study ☆59 · Nov 24, 2024 · Updated last year
- [NAACL'25 🏆 SAC Award] Official code for "Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert…" ☆16 · Feb 4, 2025 · Updated last year
- [NeurIPS'25] The official code implementation for the paper "R2R: Efficiently Navigating Divergent Reasoning Paths with Small-Large Model Tok…" ☆86 · Updated this week
- [WSDM'24 Oral] The official implementation of the paper "DeSCo: Towards Generalizable and Scalable Deep Subgraph Counting" ☆23 · Mar 11, 2024 · Updated 2 years ago
- Code repository of "Evaluating Quantized Large Language Models" ☆134 · Sep 8, 2024 · Updated last year
- [DATE'23] The official code for the paper "CLAP: Locality Aware and Parallel Triangle Counting with Content Addressable Memory" ☆23 · Mar 16, 2026 · Updated 3 weeks ago
- [ICLR 2025] Distilled Decoding 1: One-step Sampling of Image Auto-regressive Models with Flow Matching ☆19 · Apr 21, 2025 · Updated 11 months ago
- [ICLR 2025, IEEE TPAMI 2026] Mixture Compressor & MC# ☆70 · Feb 12, 2025 · Updated last year
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models ☆117 · May 24, 2024 · Updated last year
- [CoLM'25] The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆157 · Jan 14, 2026 · Updated 2 months ago
- AIMO2 2nd place solution ☆74 · May 28, 2025 · Updated 10 months ago
- ☆14 · Aug 9, 2024 · Updated last year
- [ICLR'25] ARB-LLM: Alternating Refined Binarizations for Large Language Models ☆28 · Aug 5, 2025 · Updated 8 months ago
- D^2-MoE: Delta Decompression for MoE-based LLMs Compression ☆79 · Mar 25, 2025 · Updated last year
- Official implementation for LaCo (EMNLP 2024 Findings) ☆21 · Oct 3, 2024 · Updated last year
- [ICLR 2025] Official implementation of the paper "Dynamic Low-Rank Sparse Adaptation for Large Language Models" ☆24 · Mar 16, 2025 · Updated last year
- [ICLR 2023] "Revisiting Pruning At Initialization Through The Lens of Ramanujan Graph" by Duc Hoang, Shiwei Liu, Radu Marculescu, Atlas W… ☆14 · Aug 4, 2023 · Updated 2 years ago
- Greedy Bayesian Posterior Approximation with Deep Ensembles. A. Tiulpin and M. B. Blaschko (2021) ☆11 · Jul 18, 2022 · Updated 3 years ago
- Code of the ICCV'23 paper "Ada3D: Exploiting the Spatial Redundancy with Adaptive Inference for Efficient 3D Object Detection" ☆29 · Oct 16, 2023 · Updated 2 years ago
- [ICLR 2023] "Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!" by Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen… ☆28 · Aug 29, 2023 · Updated 2 years ago
- Code to reproduce the experiments of the ICLR'24 paper "Sparse Model Soups: A Recipe for Improved Pruning via Model Averaging" ☆12 · Oct 14, 2025 · Updated 5 months ago
- R-GAP: Recursive Gradient Attack on Privacy [Accepted at ICLR 2021] ☆37 · Feb 20, 2023 · Updated 3 years ago
- The official implementation of the paper "Towards Efficient Mixture of Experts: A Holistic Study of Compression Techniques" (TMLR) ☆91 · Feb 28, 2026 · Updated last month
- Source code for the paper "MILLION: Mastering Long-Context LLM Inference Via Outlier-Immunized KV Product Qu…" ☆23 · Apr 2, 2025 · Updated last year
- Learnable Graph Discovery ☆10 · May 17, 2019 · Updated 6 years ago
- ☆23 · Nov 26, 2024 · Updated last year
- Official PyTorch Implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" ☆81 · Jul 7, 2025 · Updated 9 months ago
- Surrogate Model Extension (SME): A Fast and Accurate Weight Update Attack on Federated Learning [Accepted at ICML 2023] ☆14 · Mar 31, 2024 · Updated 2 years ago
- The code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models" ☆84 · Mar 17, 2025 · Updated last year
- [NeurIPS 2021] Sparse Training via Boosting Pruning Plasticity with Neuroregeneration ☆31 · Feb 11, 2023 · Updated 3 years ago
- [TMLR] Official PyTorch implementation of the paper "Efficient Quantization-aware Training with Adaptive Coreset Selection" ☆39 · Aug 20, 2024 · Updated last year
- ☆29 · May 24, 2024 · Updated last year
- Mixture of Experts from scratch ☆13 · Apr 12, 2024 · Updated 2 years ago
- [ECCV'24] MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization ☆14 · Nov 27, 2024 · Updated last year
- Domain Adaptation through Synthesis ☆10 · Dec 15, 2018 · Updated 7 years ago