codecaution / Awesome-Mixture-of-Experts-Papers
A curated reading list of research in Mixture-of-Experts (MoE).
☆555 · Updated 2 months ago
Alternatives and similar repositories for Awesome-Mixture-of-Experts-Papers:
Users interested in Awesome-Mixture-of-Experts-Papers are comparing it to the repositories listed below.
- A collection of AWESOME things about mixture-of-experts ☆1,026 · Updated last month
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆668 · Updated last year
- [TMLR 2024] Efficient Large Language Models: A Survey ☆1,073 · Updated this week
- Tutel MoE: An Optimized Mixture-of-Experts Implementation ☆746 · Updated this week
- Implementation of paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022) ☆520 · Updated 2 years ago
- A curated list for Efficient Large Language Models ☆1,393 · Updated 2 weeks ago
- PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538 ☆1,018 · Updated 8 months ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆908 · Updated last month
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆558 · Updated this week
- The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models". ☆200 · Updated this week
- A fast MoE implementation for PyTorch ☆1,596 · Updated 6 months ago
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023). ☆288 · Updated last year
- Paper List for In-context Learning 🌷 ☆827 · Updated 3 months ago
- Survey Paper List - Efficient LLM and Foundation Models ☆239 · Updated 3 months ago
- Awesome list for LLM pruning. ☆192 · Updated last month
- Must-read Papers of Parameter-Efficient Tuning (Delta Tuning) Methods on Pre-trained Models. ☆280 · Updated last year
- Awesome papers in LLM interpretability ☆378 · Updated this week
- A simple and effective LLM pruning approach. ☆705 · Updated 5 months ago
- Fast inference from large language models via speculative decoding ☆630 · Updated 4 months ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆578 · Updated 10 months ago
- This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicit… ☆732 · Updated 2 months ago
- 📰 Must-read papers and blogs on LLM-based Long Context Modeling 🔥 ☆1,166 · Updated this week
- Rotary Transformer ☆858 · Updated 2 years ago
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆270 · Updated 8 months ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆297 · Updated 7 months ago
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… ☆928 · Updated 3 months ago
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight) ☆308 · Updated last week
- Awesome LLM compression research papers and tools. ☆1,317 · Updated this week
- Large Context Attention ☆670 · Updated 5 months ago
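
Several of the implementations listed above (the Sparsely-Gated MoE re-implementations, Tutel, the fast MoE library, ST-MoE) share the same core mechanism: a learned router sends each token to a small top-k subset of expert networks and combines their outputs. Below is a minimal sketch of such a top-k gated MoE layer; it is an illustrative toy, not code from any listed repository, and all class and parameter names (`TopKMoE`, `d_model`, `n_experts`, `k`) are made up for this example:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Toy sparsely-gated MoE layer: route each token to its top-k experts."""

    def __init__(self, d_model: int, d_hidden: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts, bias=False)  # learned router
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model)
            )
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) -> flatten tokens for routing
        tokens = x.reshape(-1, x.shape[-1])
        logits = self.gate(tokens)                    # (n_tokens, n_experts)
        weights, idx = logits.topk(self.k, dim=-1)    # top-k experts per token
        weights = F.softmax(weights, dim=-1)          # renormalize over chosen experts
        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            mask = idx == e                           # slots where expert e was picked
            token_ids, slot = mask.nonzero(as_tuple=True)
            if token_ids.numel() == 0:
                continue                              # expert e got no tokens
            out[token_ids] += weights[token_ids, slot].unsqueeze(-1) * expert(tokens[token_ids])
        return out.reshape(x.shape)

# Usage: a dummy forward pass
layer = TopKMoE(d_model=64, d_hidden=256)
y = layer(torch.randn(2, 10, 64))
print(y.shape)  # torch.Size([2, 10, 64])
```

Production implementations add pieces this sketch omits, notably the auxiliary load-balancing loss and per-expert capacity limits introduced in Shazeer et al. (https://arxiv.org/abs/1701.06538), plus the expert-parallel dispatch that libraries like Tutel optimize.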