SuperBruceJia / Awesome-Mixture-of-Experts
Awesome Mixture of Experts (MoE): A Curated List of Mixture of Experts (MoE) and Mixture of Multimodal Experts (MoME)
☆41 · Updated 2 weeks ago
Alternatives and similar repositories for Awesome-Mixture-of-Experts
Users interested in Awesome-Mixture-of-Experts are comparing it to the repositories listed below
- ☆129 · Updated 7 months ago
- Awesome Low-Rank Adaptation ☆48 · Updated 2 months ago
- [CSUR 2025] Continual Learning of Large Language Models: A Comprehensive Survey ☆464 · Updated 5 months ago
- Must-read Papers on Large Language Model (LLM) Continual Learning ☆146 · Updated last year
- ☆160 · Updated last year
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆92 · Updated 11 months ago
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities. arXiv:2408.07666. ☆563 · Updated this week
- AdaMoLE: Adaptive Mixture of LoRA Experts ☆37 · Updated last year
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆227 · Updated 10 months ago
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆83 · Updated last year
- ☆119 · Updated last year
- Survey: A collection of AWESOME papers and resources on the latest research in Mixture of Experts. ☆135 · Updated last year
- Awesome-Low-Rank-Adaptation ☆116 · Updated last year
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆191 · Updated last year
- This is the official GitHub repository for our survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language … ☆119 · Updated 5 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆69 · Updated 7 months ago
- ☆54 · Updated 10 months ago
- Automatically update arXiv papers about LLM Reasoning, LLM Evaluation, LLM & MLLM and Video Understanding using Github Actions. ☆118 · Updated last week
- ☆171 · Updated 5 months ago
- [TMLR 2025] Efficient Reasoning Models: A Survey ☆271 · Updated this week
- [ACM Computing Surveys 2025] This repository collects awesome survey, resource, and paper for Lifelong Learning with Large Language Model… ☆149 · Updated 4 months ago
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆132 · Updated 6 months ago
- MoCLE (First MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆44 · Updated 3 months ago
- [TKDE'25] The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models". ☆435 · Updated 2 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆135 · Updated 3 months ago
- [ICCV 2025] Auto Interpretation Pipeline and many other functionalities for Multimodal SAE Analysis. ☆158 · Updated 3 weeks ago
- ☆127 · Updated 4 months ago
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models ☆79 · Updated last year
- Survey of Small Language Models from Penn State, ... ☆204 · Updated last week
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆125 · Updated 7 months ago