withinmiaov / A-Survey-on-Mixture-of-Experts-in-LLMs
The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models".
☆260 · Updated last month
Alternatives and similar repositories for A-Survey-on-Mixture-of-Experts-in-LLMs:
Users interested in A-Survey-on-Mixture-of-Experts-in-LLMs are comparing it to the libraries listed below.
- Awesome list for LLM pruning. ☆204 · Updated 2 months ago
- An all-in-one repository of awesome LLM pruning papers, integrating useful resources and insights. ☆72 · Updated 2 months ago
- Survey Paper List - Efficient LLM and Foundation Models. ☆241 · Updated 5 months ago
- Awesome list for LLM quantization. ☆175 · Updated 2 months ago
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities. arXiv:2408.07666. ☆326 · Updated this week
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ☆327 · Updated this week
- Awesome-LLM-KV-Cache: A curated list of 📙 Awesome LLM KV Cache Papers with Codes. ☆220 · Updated 2 months ago
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings). ☆230 · Updated this week
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning. ☆165 · Updated 3 months ago
- Awesome-Low-Rank-Adaptation ☆81 · Updated 4 months ago
- A curated list of Model Merging methods. ☆90 · Updated 5 months ago
- A series of technical reports on slow thinking with LLMs. ☆438 · Updated this week
- ☆142 · Updated 5 months ago
- Efficient Multimodal Large Language Models: A Survey. ☆319 · Updated 6 months ago
- A Telegram bot to recommend arXiv papers. ☆249 · Updated 3 weeks ago
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024). ☆580 · Updated last month
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆609 · Updated this week
- [NeurIPS 24 Spotlight] MaskLLM: Learnable Semi-structured Sparsity for Large Language Models. ☆152 · Updated 2 months ago
- Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models. ☆42 · Updated 3 months ago
- Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs". ☆350 · Updated last month
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method. ☆134 · Updated 6 months ago
- ☆125 · Updated 7 months ago
- A curated list of high-quality papers on resource-efficient LLMs 🌱 ☆104 · Updated last month
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models. ☆84 · Updated 9 months ago
- Related works and background techniques behind OpenAI o1. ☆215 · Updated last month
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models". ☆148 · Updated 8 months ago
- A curated reading list of research in Mixture-of-Experts (MoE). ☆590 · Updated 4 months ago