OpenSparseLLMs / LLaMA-MoE-v2
LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training
☆80 · Updated 4 months ago
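For context on the project title — converting a dense LLaMA feed-forward network into a sparsely activated Mixture-of-Experts layer — the snippet below is a minimal, generic sketch of a top-k routed MoE FFN in PyTorch. It is illustrative only: the class name, dimensions, and routing details are assumptions, not LLaMA-MoE-v2's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoEFFN(nn.Module):
    """Generic top-k routed Mixture-of-Experts feed-forward layer.
    Illustrative sketch only; not the LLaMA-MoE-v2 code."""

    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.SiLU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) -> flatten tokens for per-token routing
        tokens = x.reshape(-1, x.size(-1))
        gate_logits = self.router(tokens)                       # (num_tokens, num_experts)
        weights, expert_idx = gate_logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)                    # renormalize over the chosen experts

        out = torch.zeros_like(tokens)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = expert_idx[:, slot] == e                 # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(tokens[mask])
        return out.reshape_as(x)

# Usage: a dense FFN is replaced by the sparse layer; only top_k experts run per token.
layer = TopKMoEFFN(d_model=64, d_hidden=256, num_experts=8, top_k=2)
y = layer(torch.randn(2, 10, 64))
print(y.shape)  # torch.Size([2, 10, 64])
```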
Alternatives and similar repositories for LLaMA-MoE-v2:
Users interested in LLaMA-MoE-v2 are comparing it to the libraries listed below.
- The official repository for the paper "Can MLLMs Reason in Multimodality? EMMA: An Enhanced MultiModal ReAsoning Benchmark" ☆51 · Updated this week
- ☆74 · Updated this week
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆133 · Updated last month
- Open-Pandora: On-the-fly Control Video Generation ☆34 · Updated 4 months ago
- [EMNLP 2024 Findings] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In…" ☆92 · Updated 5 months ago
- ☆76 · Updated last week
- M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning ☆56 · Updated 4 months ago
- ☆60 · Updated this week
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆65 · Updated 2 months ago
- [ICLR 2025] The official PyTorch implementation of "Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Cont…" ☆31 · Updated 4 months ago
- Code and data for "Timo: Towards Better Temporal Reasoning for Language Models" (COLM 2024) ☆21 · Updated 6 months ago
- ☆41 · Updated 2 weeks ago
- ☆39 · Updated last month
- ☆72 · Updated 10 months ago
- The official code repository for PRMBench. ☆72 · Updated 2 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆76 · Updated 3 months ago
- This repo contains evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context" ☆33 · Updated 9 months ago
- Codes for Merging Large Language Models ☆29 · Updated 8 months ago
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆44 · Updated 4 months ago
- We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆61 · Updated 5 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆100 · Updated last month
- MoCLE (First MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆35 · Updated last year
- Model merging is a highly efficient approach for long-to-short reasoning. ☆42 · Updated 3 weeks ago
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibrati… ☆36 · Updated 9 months ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models. ☆72 · Updated 5 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆69 · Updated 6 months ago
- ☆99 · Updated 9 months ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts (EMNLP 2023)" ☆36 · Updated last year
- ☆73 · Updated 3 months ago
- Code for the paper "A Sober Look at Progress in Language Model Reasoning" ☆36 · Updated last week