XueFuzhao / awesome-mixture-of-experts
A collection of AWESOME things about mixture-of-experts
☆998 · Updated last week
Alternatives and similar repositories for awesome-mixture-of-experts:
Users interested in awesome-mixture-of-experts are comparing it to the repositories listed below. A minimal sketch of top-k expert routing follows the list.
- A curated reading list of research in Mixture-of-Experts (MoE). ☆546 · Updated last month
- PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538 ☆1,009 · Updated 7 months ago
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆655 · Updated last year
- [TMLR 2024] Efficient Large Language Models: A Survey ☆1,047 · Updated 3 weeks ago
- This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicit… ☆690 · Updated last month
- Tutel MoE: An Optimized Mixture-of-Experts Implementation ☆740 · Updated 3 weeks ago
- A curated list for Efficient Large Language Models ☆1,329 · Updated last week
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆565 · Updated 9 months ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆890 · Updated last week
- Awesome LLM compression research papers and tools. ☆1,251 · Updated this week
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆518 · Updated 2 weeks ago
- Implementation of the paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022) ☆517 · Updated 2 years ago
- 📰 Must-read papers and blogs on LLM-based Long Context Modeling 🔥 ☆1,065 · Updated this week
- A family of open-source Mixture-of-Experts (MoE) Large Language Models ☆1,409 · Updated 9 months ago
- A simple and effective LLM pruning approach. ☆686 · Updated 4 months ago
- Must-read Papers of Parameter-Efficient Tuning (Delta Tuning) Methods on Pre-trained Models. ☆279 · Updated last year
- Paper List for In-context Learning 🌷 ☆825 · Updated 2 months ago
- A paper list on multimodal and large language models, used to record papers I read from the daily arXiv for personal reference. ☆564 · Updated this week
- Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,089 · Updated 9 months ago
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… ☆900 · Updated 2 months ago
- Codebase for Merging Language Models (ICML 2024) ☆783 · Updated 7 months ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆386 · Updated last month
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆604 · Updated 4 months ago
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023). ☆281 · Updated last year
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆294 · Updated 6 months ago
- A bibliography and survey of the papers surrounding o1 ☆920 · Updated last month
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆764 · Updated this week
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,161 · Updated 2 months ago
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆1,420 · Updated this week
- Continual Learning of Large Language Models: A Comprehensive Survey ☆286 · Updated last week
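
Several of the repositories above implement sparsely-gated MoE layers in the spirit of Shazeer et al. (2017, https://arxiv.org/abs/1701.06538). The following is a minimal illustrative sketch of token-level top-k routing, not code from any listed repository; the module name `TopKMoE` and all hyperparameters are assumptions of this sketch.

```python
# Minimal sketch of a sparsely-gated top-k MoE layer (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMoE(nn.Module):
    """Each token is routed to its top-k experts; expert outputs are
    combined with the renormalized gate weights."""

    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.GELU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) -> flatten to tokens for routing.
        tokens = x.reshape(-1, x.shape[-1])
        logits = self.gate(tokens)                      # (tokens, num_experts)
        weights, indices = logits.topk(self.k, dim=-1)  # top-k experts per token
        weights = F.softmax(weights, dim=-1)            # renormalize over chosen k

        out = torch.zeros_like(tokens)
        # Plain loop over experts for clarity; optimized libraries such as
        # Tutel replace this with batched dispatch/combine kernels.
        for e, expert in enumerate(self.experts):
            token_idx, slot = (indices == e).nonzero(as_tuple=True)
            if token_idx.numel() == 0:
                continue
            out[token_idx] += weights[token_idx, slot].unsqueeze(-1) * expert(tokens[token_idx])
        return out.reshape(x.shape)


if __name__ == "__main__":
    moe = TopKMoE(d_model=64, d_hidden=256)
    y = moe(torch.randn(2, 10, 64))
    print(y.shape)  # torch.Size([2, 10, 64])
```

The per-expert loop keeps the routing logic readable; production implementations add a load-balancing auxiliary loss and fused dispatch kernels, which is precisely what differentiates libraries like Tutel from reference re-implementations.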