XueFuzhao / awesome-mixture-of-experts
A collection of AWESOME things about mixture-of-experts
☆1,113 · Updated 5 months ago
Alternatives and similar repositories for awesome-mixture-of-experts
Users interested in awesome-mixture-of-experts also compare it to the libraries listed below.
- A curated reading list of research in Mixture-of-Experts (MoE). ☆623 · Updated 6 months ago
- [TMLR 2024] Efficient Large Language Models: A Survey ☆1,151 · Updated last month
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. (https://arxiv.org/abs/1701.06538); see the sketch after this list. ☆1,102 · Updated last year
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,526 · Updated last year
- The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models". ☆348 · Updated 2 months ago
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆742 · Updated last year
- A fast MoE implementation for PyTorch ☆1,720 · Updated 3 months ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆960 · Updated 5 months ago
- Tutel MoE: Optimized Mixture-of-Experts Library, supports DeepSeek FP8/FP4 ☆820 · Updated this week
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆725 · Updated last week
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆840 · Updated last week
- This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicit… ☆1,022 · Updated 2 months ago
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities. arXiv:2408.07666. ☆384 · Updated this week
- A curated list for Efficient Large Language Models ☆1,651 · Updated 3 weeks ago
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3. ☆1,220 · Updated last week
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆621 · Updated 3 months ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆331 · Updated 11 months ago
- 🚀 Efficient implementations of state-of-the-art linear attention models in Torch and Triton ☆2,380 · Updated this week
- Paper list about multimodal and large language models, used only to record papers I read from the daily arXiv for personal needs. ☆621 · Updated this week
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023). ☆324 · Updated last year
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… ☆1,011 · Updated 7 months ago
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States ☆1,187 · Updated 10 months ago
- A bibliography and survey of the papers surrounding o1 ☆1,192 · Updated 6 months ago
- Paper List for In-context Learning 🌷 ☆853 · Updated 7 months ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆608 · Updated last year
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆894 · Updated 3 months ago
- Code for our EMNLP 2023 paper "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,162 · Updated last year
- 📰 Must-read papers and blogs on LLM-based Long Context Modeling 🔥 ☆1,477 · Updated last week
- [ICML 2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation ☆781 · Updated 7 months ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆443 · Updated 6 months ago
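
Several entries above re-implement the sparsely-gated MoE layer from Shazeer et al. (https://arxiv.org/abs/1701.06538). As a quick orientation, here is a minimal sketch of the core idea: a gating network scores each token against every expert, keeps only the top-k experts per token, and combines their outputs with renormalized softmax weights. This is an illustrative sketch only; the class and parameter names (`TopKMoE`, `d_hidden`, etc.) are made up here and are not taken from any of the listed libraries.

```python
# Minimal sketch of a sparsely-gated top-k MoE layer (after Shazeer et al.,
# arXiv:1701.06538). All names are illustrative, not from any listed library.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        # One feed-forward "expert" per slot.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )
        # The gate scores every token against every expert.
        self.gate = nn.Linear(d_model, num_experts, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model); flatten batch/sequence dims before calling.
        logits = self.gate(x)                               # (tokens, experts)
        topk_vals, topk_idx = logits.topk(self.k, dim=-1)   # keep k best experts per token
        weights = F.softmax(topk_vals, dim=-1)              # renormalize over the k kept
        out = torch.zeros_like(x)
        for slot in range(self.k):
            idx = topk_idx[:, slot]                         # expert chosen at this slot
            w = weights[:, slot].unsqueeze(-1)              # its gating weight
            for e, expert in enumerate(self.experts):
                mask = idx == e
                if mask.any():
                    out[mask] += w[mask] * expert(x[mask])  # weighted expert output
        return out

# Usage: route 4 tokens of width 16 through 8 experts, 2 active per token.
moe = TopKMoE(d_model=16, d_hidden=32, num_experts=8, k=2)
y = moe(torch.randn(4, 16))
print(y.shape)  # torch.Size([4, 16])
```

The loop over experts is written for readability; production libraries in this list (FastMoE, Tutel) instead dispatch all tokens to their experts in parallel and add auxiliary load-balancing losses so tokens spread evenly across experts.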