OpenSparseLLMs / LLaMA-MoE-v2
LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training
☆86 · Updated 6 months ago
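The theme shared by this repo and the list below is expert sparsity: the dense FFN of a transformer block is replaced by several expert FFNs, and a learned router activates only the top-k of them per token, so most parameters stay idle on any given forward pass. As a rough illustration only — a generic top-k MoE sketch, not code from LLaMA-MoE-v2; all names and sizes here are made up:

```python
# Minimal sketch of a token-level top-k mixture-of-experts FFN.
# Illustrative only: layer names and dimensions are hypothetical,
# not taken from the LLaMA-MoE-v2 repository.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMoEFFN(nn.Module):
    """Top-k MoE feed-forward layer: route each token to k of n experts."""

    def __init__(self, d_model: int, d_ff: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
             for _ in range(n_experts)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) -> flatten to tokens for per-token routing
        tokens = x.reshape(-1, x.size(-1))
        logits = self.router(tokens)                  # (n_tokens, n_experts)
        weights, idx = logits.topk(self.k, dim=-1)    # pick k experts per token
        weights = F.softmax(weights, dim=-1)          # renormalize over chosen k
        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            token_ids, slot = (idx == e).nonzero(as_tuple=True)  # tokens routed to expert e
            if token_ids.numel() == 0:
                continue
            out[token_ids] += weights[token_ids, slot].unsqueeze(-1) * expert(tokens[token_ids])
        return out.reshape_as(x)


if __name__ == "__main__":
    layer = TopKMoEFFN(d_model=64, d_ff=256)
    y = layer(torch.randn(2, 10, 64))
    print(y.shape)  # torch.Size([2, 10, 64])
```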
Alternatives and similar repositories for LLaMA-MoE-v2
Users interested in LLaMA-MoE-v2 are comparing it to the repositories listed below.
- ☆83 · Updated last month
- [EMNLP 2024 Findings 🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In…" · ☆97 · Updated 6 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs · ☆145 · Updated 2 months ago
- Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping · ☆41 · Updated 2 weeks ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning · ☆69 · Updated 3 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models · ☆97 · Updated 3 months ago
- ☆93 · Updated 2 weeks ago
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) project: diving into self-evolving training for multimodal reasoning · ☆60 · Updated 5 months ago
- MoCLE (first MLLM with MoE for instruction customization and generalization; https://arxiv.org/abs/2312.12379) · ☆38 · Updated last year
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* · ☆103 · Updated last week
- ☆138 · Updated 10 months ago
- ☆89 · Updated last week
- [arXiv 2025] Efficient Reasoning Models: A Survey · ☆166 · Updated last week
- Model merging is a highly efficient approach for long-to-short reasoning. · ☆56 · Updated this week
- [ICLR 2025] The official PyTorch implementation of "Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Cont…" · ☆40 · Updated 6 months ago
- The official code repository for PRMBench. · ☆73 · Updated 3 months ago
- Open-Pandora: On-the-fly Control Video Generation · ☆34 · Updated 6 months ago
- ☆47 · Updated 2 months ago
- Chain of Thought (CoT) is so hot! So long! We need a short reasoning process! · ☆53 · Updated 2 months ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) · ☆38 · Updated last year
- A Self-Training Framework for Vision-Language Reasoning · ☆80 · Updated 4 months ago
- ☆105 · Updated 10 months ago
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation · ☆64 · Updated 2 weeks ago
- A Sober Look at Language Model Reasoning · ☆52 · Updated last week
- ☆45 · Updated last month
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models · ☆112 · Updated last month
- ☆77 · Updated 4 months ago
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache… · ☆72 · Updated this week
- Code release for VTW (AAAI 2025 Oral) · ☆43 · Updated 4 months ago
- Inference code for the paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" · ☆50 · Updated 10 months ago