codecaution / Awesome-Mixture-of-Experts-Papers
A curated reading list of research in Mixture-of-Experts (MoE).
☆635 · Updated 7 months ago
Alternatives and similar repositories for Awesome-Mixture-of-Experts-Papers
Users interested in Awesome-Mixture-of-Experts-Papers are comparing it to the libraries listed below.
- A collection of AWESOME things about mixture-of-experts ☆1,143 · Updated 6 months ago
- Tutel MoE: an optimized Mixture-of-Experts library; supports DeepSeek FP8/FP4 ☆844 · Updated this week
- Implementation of the paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022) ☆534 · Updated 3 years ago
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538); a minimal top-k gating sketch appears after this list ☆1,120 · Updated last year
- The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models" ☆372 · Updated this week
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆766 · Updated last year
- [TMLR 2024] Efficient Large Language Models: A Survey ☆1,172 · Updated this week
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ (see the speculative-decoding sketch after this list) ☆800 · Updated last week
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆967 · Updated 6 months ago
- A fast MoE implementation for PyTorch ☆1,746 · Updated 4 months ago
- Must-read papers on Parameter-Efficient Tuning (Delta Tuning) methods for pre-trained models ☆286 · Updated last year
- Survey paper list on efficient LLMs and foundation models ☆248 · Updated 9 months ago
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities (arXiv:2408.07666) ☆453 · Updated this week
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023) ☆332 · Updated 2 years ago
- Paper list for in-context learning 🌷 ☆854 · Updated 8 months ago
- Awesome papers on LLM interpretability ☆495 · Updated this week
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆341 · Updated last year
- Awesome list for LLM pruning ☆232 · Updated 6 months ago
- A curated list of efficient large language models ☆1,746 · Updated last week
- Fast inference from large language models via speculative decoding ☆762 · Updated 10 months ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆456 · Updated 8 months ago
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment (a minimal LoRA sketch appears after this list) ☆348 · Updated last year
- Large Context Attention ☆716 · Updated 5 months ago
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… ☆1,026 · Updated 8 months ago
- A collection of phenomena observed during the scaling of big foundation models, which may be developed into consensus, principles, or l… ☆282 · Updated last year
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆617 · Updated last year
- Best practices for training LLaMA models in Megatron-LM ☆656 · Updated last year
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) ☆282 · Updated 2 months ago
- Ring attention implementation with flash attention ☆789 · Updated last week
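
Several entries above are implementations of the sparsely-gated MoE layer. As a point of reference when comparing them, here is a minimal sketch of top-k routing in PyTorch, in the spirit of Shazeer et al. (arXiv:1701.06538). All names are invented for illustration, the expert MLPs are deliberately simple, and the auxiliary load-balancing loss that real libraries add is omitted.

```python
# Minimal sparsely-gated MoE layer with top-k routing (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts, bias=False)  # the router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Each token is routed to its top-k experts.
        logits = self.gate(x)                          # (tokens, n_experts)
        weights, idx = logits.topk(self.k, dim=-1)     # (tokens, k)
        weights = F.softmax(weights, dim=-1)           # renormalize over selected experts
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = (idx == e)                          # which tokens chose expert e
            token_ids, slot = mask.nonzero(as_tuple=True)
            if token_ids.numel() == 0:
                continue
            out[token_ids] += weights[token_ids, slot, None] * expert(x[token_ids])
        return out

# Usage: 16 token vectors, each dispatched to 2 of 8 experts.
moe = TopKMoE(d_model=64, d_hidden=256)
y = moe(torch.randn(16, 64))  # -> shape (16, 64)
```

The libraries listed above differ mainly in how they batch tokens per expert and shard experts across devices; the per-expert Python loop here is the simplest correct formulation, not a fast one.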
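Speculative decoding, which several entries above survey and benchmark, drafts tokens with a cheap model and verifies them with the target model in a single forward pass. Below is a minimal sketch of the greedy variant; it assumes Hugging-Face-style causal LMs (`model(ids).logits` of shape `(batch, seq_len, vocab)`), skips the KV cache for clarity, and uses invented names throughout.

```python
# Greedy speculative decoding, one step (illustrative sketch).
import torch

@torch.no_grad()
def speculative_step(target, draft, ids: torch.Tensor, gamma: int = 4) -> torch.Tensor:
    n = ids.shape[1]
    # 1) Draft gamma tokens autoregressively with the cheap model.
    drafted = ids
    for _ in range(gamma):
        nxt = draft(drafted).logits[:, -1].argmax(-1, keepdim=True)
        drafted = torch.cat([drafted, nxt], dim=-1)
    # 2) Score every drafted position with the target model in ONE pass.
    tgt = target(drafted).logits.argmax(-1)          # (1, n + gamma)
    # 3) Keep drafted tokens while they match the target's greedy choice;
    #    logits at position j predict the token at position j + 1.
    m = n
    for i in range(gamma):
        if drafted[0, n + i] == tgt[0, n + i - 1]:
            m += 1
        else:
            break
    # 4) Append the target's own next token, so the step advances by at
    #    least one token even when no drafted token is accepted.
    bonus = tgt[:, m - 1 : m]
    return torch.cat([drafted[:, :m], bonus], dim=-1)

# One call extends `ids` by between 1 and gamma + 1 tokens.
```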
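AdaLoRA and LoRAMoE, listed above, both build on the basic LoRA adapter: a frozen pretrained weight plus a trainable low-rank update. A minimal sketch of that base technique follows, with invented names and without the adaptive rank budgeting or expert routing those projects add.

```python
# Minimal LoRA-style adapter (Hu et al., arXiv:2106.09685), illustrative only.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)       # freeze the pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank update x @ A^T @ B^T.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap an existing projection; only A and B receive gradients.
layer = LoRALinear(nn.Linear(512, 512))
out = layer(torch.randn(4, 512))  # -> shape (4, 512)
```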