DavidFanzz / SCMoE
☆28 · Updated last year
Alternatives and similar repositories for SCMoE
Users interested in SCMoE are comparing it to the repositories listed below.
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆89 · Updated 10 months ago
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibrati… ☆44 · Updated last year
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆136 · Updated 2 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆86 · Updated 7 months ago
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆189 · Updated last year
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆120 · Updated 6 months ago
- [EMNLP 2025] TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆182 · Updated 3 months ago
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In… ☆101 · Updated 10 months ago
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆222 · Updated 10 months ago