codecaution / Awesome-Mixture-of-Experts-Papers
A curated reading list of research in Mixture-of-Experts (MoE).
☆623 · Updated 6 months ago
Alternatives and similar repositories for Awesome-Mixture-of-Experts-Papers
Users interested in Awesome-Mixture-of-Experts-Papers are comparing it to the libraries listed below.
- A collection of AWESOME things about mixture-of-experts ☆1,113 · Updated 5 months ago
- Tutel MoE: an optimized Mixture-of-Experts library, supporting DeepSeek FP8/FP4 ☆820 · Updated this week
- A fast MoE implementation for PyTorch ☆1,720 · Updated 3 months ago
- [TMLR 2024] Efficient Large Language Models: A Survey ☆1,151 · Updated last month
- The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models". ☆348 · Updated 2 months ago
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆725 · Updated last week
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆742 · Updated last year
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al., https://arxiv.org/abs/1701.06538 (a minimal sketch of this layer follows the list) ☆1,102 · Updated last year
- Implementation of the paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022) ☆527 · Updated 3 years ago
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023). ☆324 · Updated last year
- Survey Paper List - Efficient LLM and Foundation Models ☆248 · Updated 7 months ago
- Awesome list for LLM pruning. ☆224 · Updated 5 months ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆960 · Updated 5 months ago
- Fast inference from large language models via speculative decoding ☆723 · Updated 8 months ago
- Must-read Papers of Parameter-Efficient Tuning (Delta Tuning) Methods on Pre-trained Models. ☆284 · Updated last year
- A curated list for Efficient Large Language Models ☆1,651 · Updated 3 weeks ago
- Awesome LLM compression research papers and tools.
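
Several entries above center on the same building block: a sparsely-gated MoE layer that routes each token to its top-k experts, following Shazeer et al. (https://arxiv.org/abs/1701.06538). The sketch below is a minimal, self-contained PyTorch illustration of that routing, not the API of any listed library; all names and sizes are illustrative, and production implementations such as the optimized libraries above add capacity limits, auxiliary load-balancing losses, and expert parallelism.

```python
# Minimal sketch of a sparsely-gated MoE layer (Shazeer et al., 2017).
# Illustrative only; module and parameter names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, dim: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(dim, num_experts)  # router producing expert scores
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(),
                          nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim). Each token is routed to its top-k experts.
        scores = self.gate(x)                           # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # (tokens, k)
        weights = F.softmax(weights, dim=-1)            # renormalize over the k picks
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            # Find which tokens selected expert e, and in which top-k slot.
            token_rows, slots = (idx == e).nonzero(as_tuple=True)
            if token_rows.numel() == 0:
                continue  # expert receives no tokens this batch
            gate_w = weights[token_rows, slots].unsqueeze(-1)
            out[token_rows] += gate_w * expert(x[token_rows])
        return out

moe = SparseMoE(dim=64)
y = moe(torch.randn(16, 64))  # 16 tokens, each processed by 2 of 8 experts
```

Each token's output is a weighted mix of only its k selected experts, which is what lets the parameter count grow with the number of experts while per-token compute stays roughly constant.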