withinmiaov / A-Survey-on-Mixture-of-Experts-in-LLMs
The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models".
☆207 · Updated last week
Alternatives and similar repositories for A-Survey-on-Mixture-of-Experts-in-LLMs:
Users interested in A-Survey-on-Mixture-of-Experts-in-LLMs are comparing it to the repositories listed below.
- Survey Paper List - Efficient LLM and Foundation Models ☆238 · Updated 4 months ago
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ☆276 · Updated 2 weeks ago
- Awesome list for LLM pruning. ☆194 · Updated last month
- Awesome list for LLM quantization ☆160 · Updated last month
- Awesome-Low-Rank-Adaptation ☆64 · Updated 3 months ago
- ☆122 · Updated 6 months ago
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) ☆218 · Updated 3 months ago
- ☆137 · Updated 4 months ago
- Awesome LLM pruning papers: an all-in-one repository integrating useful resources and insights. ☆61 · Updated last month
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆157 · Updated last month
- ☆171 · Updated 3 months ago
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆273 · Updated 9 months ago
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆70 · Updated 7 months ago
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆120 · Updated 3 weeks ago
- A Telegram bot to recommend arXiv papers ☆237 · Updated 3 weeks ago
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆124 · Updated 5 months ago
- Continual Learning of Large Language Models: A Comprehensive Survey ☆322 · Updated 3 weeks ago
- A curated list of Model Merging methods. ☆89 · Updated 4 months ago
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆145 · Updated 7 months ago
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆567 · Updated this week
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models ☆77 · Updated 8 months ago
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities. arXiv:2408.07666. ☆292 · Updated last week
- Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models ☆41 · Updated 2 months ago
- ☆186 · Updated last year
- A curated reading list of research in Mixture-of-Experts (MoE). ☆558 · Updated 3 months ago
- A curated list of high-quality papers on resource-efficient LLMs 🌱 ☆95 · Updated 3 weeks ago
- Accepted LLM Papers in NeurIPS 2024 ☆33 · Updated 3 months ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆123 · Updated 9 months ago
- ☆214 · Updated 7 months ago
- Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint? ☆91 · Updated 3 months ago