OpenSparseLLMs / LLaMA-MoE-v2
LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training
⭐ 86 · Updated 6 months ago
Alternatives and similar repositories for LLaMA-MoE-v2
Users interested in LLaMA-MoE-v2 are comparing it to the libraries listed below.
- Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping · ⭐ 47 · Updated last month
- ⭐ 85 · Updated 2 months ago
- [EMNLP 2024 Findings 🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In… · ⭐ 96 · Updated 7 months ago
- ⭐ 46 · Updated 2 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning · ⭐ 73 · Updated 4 months ago
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning · ⭐ 60 · Updated 6 months ago
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation · ⭐ 69 · Updated 3 weeks ago
- ACL 2025: SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs, and preprint: SoftCoT++: Test-Time Scaling with Soft Chain-of… · ⭐ 28 · Updated 3 weeks ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) · ⭐ 39 · Updated last year
- ⭐ 104 · Updated 2 weeks ago
- Open-Pandora: On-the-fly Control Video Generation · ⭐ 34 · Updated 6 months ago
- [ACL '25] The official code repository for PRMBench: A Fine-grained and Challenging Benchmark for Process-Level Reward Models · ⭐ 73 · Updated 4 months ago
- ⭐ 74 · Updated last year
- ⭐ 78 · Updated 5 months ago
- This repo contains evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context" · ⭐ 35 · Updated 11 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs · ⭐ 156 · Updated 3 months ago
- [ACL-25] We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs · ⭐ 63 · Updated 7 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* · ⭐ 104 · Updated 3 weeks ago
- Model merging is a highly efficient approach for long-to-short reasoning · ⭐ 65 · Updated 3 weeks ago
- A Self-Training Framework for Vision-Language Reasoning · ⭐ 80 · Updated 5 months ago
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models · ⭐ 123 · Updated 2 months ago
- [ICLR 2025] The official PyTorch implementation of "Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Cont… · ⭐ 42 · Updated 6 months ago
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning · ⭐ 191 · Updated this week
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria · ⭐ 69 · Updated 8 months ago
- Official repository for the paper "DeepCritic: Deliberate Critique with Large Language Models" · ⭐ 30 · Updated last month
- Code release for VTW (AAAI 2025 Oral) · ⭐ 43 · Updated 5 months ago
- MoCLE (first MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) · ⭐ 40 · Updated last year
- A Sober Look at Language Model Reasoning · ⭐ 74 · Updated last week
- ⭐ 116 · Updated 3 weeks ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models · ⭐ 109 · Updated 4 months ago