codecaution / Awesome-Mixture-of-Experts-Papers
A curated reading list of research on Mixture-of-Experts (MoE).
☆654 · Updated last year
Alternatives and similar repositories for Awesome-Mixture-of-Experts-Papers
Users interested in Awesome-Mixture-of-Experts-Papers are comparing it to the repositories listed below.
- A collection of AWESOME things about mixture-of-experts ☆1,244 · Updated last year
- [TKDE'25] The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models". ☆468 · Updated 5 months ago
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆838 · Updated 2 years ago
- Tutel MoE: Optimized Mixture-of-Experts Library, supporting GptOss/DeepSeek/Kimi-K2/Qwen3 with FP8/NVFP4/MXFP4 ☆951 · Updated last week
- [TMLR 2024] Efficient Large Language Models: A Survey ☆1,240 · Updated 6 months ago
- PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538) ☆1,212 · Updated last year
- A fast MoE implementation for PyTorch ☆1,825 · Updated 10 months ago
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023). ☆363 · Updated 2 years ago
- Survey Paper List - Efficient LLM and Foundation Models ☆259 · Updated last year
- ☆693 · Updated 3 weeks ago
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆1,061 · Updated 2 weeks ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,004 · Updated last year
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities. arXiv:2408.07666. ☆627 · Updated this week
- Fast inference from large language models via speculative decoding ☆872 · Updated last year
- Implementation of the paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022) ☆541 · Updated 3 years ago
- Awesome list for LLM pruning. ☆279 · Updated 2 months ago
- [CSUR 2025] Continual Learning of Large Language Models: A Comprehensive Survey ☆488 · Updated this week
- Must-read papers on Parameter-Efficient Tuning (Delta Tuning) methods for pre-trained models. ☆286 · Updated 2 years ago
- A curated list for Efficient Large Language Models ☆1,920 · Updated 6 months ago
- ☆216 · Updated last month
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆391 · Updated last year
- An all-in-one repository of awesome LLM pruning papers, integrating useful resources and insights. ☆142 · Updated 4 months ago
- A curated list of high-quality papers on resource-efficient LLMs 🌱 ☆152 · Updated 9 months ago
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) ☆344 · Updated 8 months ago
- A simple and effective LLM pruning approach. ☆827 · Updated last year
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. ☆494 · Updated last year
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight) ☆405 · Updated 5 months ago
- This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicit… ☆1,230 · Updated 9 months ago
- Explorations into some recent techniques surrounding speculative decoding ☆295 · Updated last year
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆374 · Updated last year