codecaution / Awesome-Mixture-of-Experts-Papers
A curated reading list of research in Mixture-of-Experts (MoE).
☆600 · Updated 4 months ago
Alternatives and similar repositories for Awesome-Mixture-of-Experts-Papers:
Users interested in Awesome-Mixture-of-Experts-Papers are comparing it to the libraries listed below.
- A collection of AWESOME things about mixture-of-experts ☆1,074 · Updated 3 months ago
- Tutel MoE: An Optimized Mixture-of-Experts Implementation ☆786 · Updated this week
- A Pytorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆714 · Updated last year
- PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538 ☆1,079 · Updated 11 months ago
- Implementation of paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022) ☆523 · Updated 3 years ago
- The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models". ☆303 · Updated last week
- [TMLR 2024] Efficient Large Language Models: A Survey ☆1,121 · Updated 3 weeks ago
- A fast MoE impl for PyTorch ☆1,682 · Updated last month
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆654 · Updated this week
- Survey Paper List - Efficient LLM and Foundation Models ☆240 · Updated 6 months ago
- A curated list for Efficient Large Language Models ☆1,547 · Updated last week
- Awesome list for LLM pruning. ☆212 · Updated 3 months ago
- Must-read Papers of Parameter-Efficient Tuning (Delta Tuning) Methods on Pre-trained Models. ☆281 · Updated last year
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities. arXiv:2408.07666. ☆342 · Updated this week
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models. ☆431 · Updated 7 months ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆936 · Updated 3 months ago
- Microsoft Automatic Mixed Precision Library ☆581 · Updated 5 months ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆452 · Updated this week
- Paper List for In-context Learning 🌷 ☆849 · Updated 5 months ago
- 🚀 Efficient implementations of state-of-the-art linear attention models in Torch and Triton ☆2,144 · Updated this week
- A simple and effective LLM pruning approach. ☆725 · Updated 7 months ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in Pytorch ☆321 · Updated 9 months ago
- Rotary Transformer ☆916 · Updated 3 years ago
- Ring attention implementation with flash attention ☆714 · Updated last month
- Large Context Attention ☆693 · Updated 2 months ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆597 · Updated last year
- Fast inference from large language models via speculative decoding ☆692 · Updated 7 months ago
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023). ☆305 · Updated last year
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains. ☆400 · Updated 5 months ago