Callione / LLaVA-MOSS2
A modified LLaVA framework for MOSS2 that makes MOSS2 a multimodal model.
☆13 · Updated last year
Alternatives and similar repositories for LLaVA-MOSS2
Users interested in LLaVA-MOSS2 are comparing it to the repositories listed below.
- Repository for the ACL 2025 Findings paper: From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with Multi-modalities Models ☆89 · Updated last month
- HumanOmni ☆216 · Updated 11 months ago
- Official PyTorch implementation of EMOVA in CVPR 2025 (https://arxiv.org/abs/2409.18042) ☆76 · Updated 10 months ago
- (NeurIPS 2025) OpenOmni: Official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Align… ☆124 · Updated 3 months ago
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆122 · Updated last year
- EchoInk-R1: Exploring Audio-Visual Reasoning in Multimodal LLMs via Reinforcement Learning [🔥The Exploration of R1 for General Audio-Vi… ☆73 · Updated 8 months ago
- ☆60 · Updated last year
- Synth-Empathy: Towards High-Quality Synthetic Empathy Data ☆18 · Updated 11 months ago
- MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources ☆215 · Updated 4 months ago
- Official repository of the MMDU dataset ☆103 · Updated last year
- Baichuan-Omni: Towards Capable Open-source Omni-modal LLM 🌊 ☆272 · Updated last year
- The Next Step Forward in Multimodal LLM Alignment ☆197 · Updated 9 months ago
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ☆477 · Updated last year
- ☆22 · Updated last year
- Video Chain of Thought: code for the ICML 2024 paper "Video-of-Thought: Step-by-Step Video Reasoning from Perception to Cognition" ☆178 · Updated 11 months ago
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆306 · Updated last year
- ✨First Open-Source R1-like Video-LLM [2025/02/18] ☆381 · Updated 11 months ago
- [ACL 2024 Findings] GAOKAO-MM: A Chinese Human-Level Benchmark for Multimodal Models Evaluation ☆76 · Updated last year
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆162 · Updated last year
- [ICLR 2026] "VideoReasonBench: Can MLLMs Perform Vision-Centric Complex Video Reasoning?", Yuanxin Liu, Kun Ouyang, Haoning Wu, Yi Liu, L… ☆37 · Updated last week
- [NAACL 2024] LaDiC: Are Diffusion Models Really Inferior to Autoregressive Counterparts for Image-to-text Generation? ☆43 · Updated last year
- Latest Advances on Reasoning of Multimodal Large Language Models (Multimodal R1 / Visual R1) 🍓 ☆36 · Updated 10 months ago
- [ICLR 2026] On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification ☆532 · Updated last month
- (ICCV 2025) Official repository of the paper "ViSpeak: Visual Instruction Feedback in Streaming Videos" ☆45 · Updated 7 months ago
- ☆37 · Updated last year
- [ICLR 2025] A Comprehensive Framework for Developing and Evaluating Multimodal Role-Playing Agents ☆90 · Updated last week
- R1-like Video-LLM for Temporal Grounding ☆133 · Updated 7 months ago
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆38 · Updated 2 weeks ago
- [ACM MM 2025] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆103 · Updated 2 months ago
- An LMM that mitigates catastrophic forgetting (AAAI 2025) ☆45 · Updated 9 months ago