Callione / LLaVA-MOSS2
A modified LLaVA framework for MOSS2 that turns MOSS2 into a multimodal model.
☆13 · Updated last year
Alternatives and similar repositories for LLaVA-MOSS2
Users interested in LLaVA-MOSS2 are comparing it to the repositories listed below.
- Repository for the ACL 2025 Findings paper "From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with Multi-modalities" ☆66 · Updated last week
- (NeurIPS 2025) OpenOmni: Official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Align… ☆109 · Updated last week
- Official PyTorch implementation of EMOVA in CVPR 2025 (https://arxiv.org/abs/2409.18042) ☆74 · Updated 8 months ago
- EchoInk-R1: Exploring Audio-Visual Reasoning in Multimodal LLMs via Reinforcement Learning [🔥The Exploration of R1 for General Audio-Vi… ☆63 · Updated 6 months ago
- MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources ☆208 · Updated last month
- [ACL'2024 Findings] GAOKAO-MM: A Chinese Human-Level Benchmark for Multimodal Models Evaluation ☆71 · Updated last year
- Synth-Empathy: Towards High-Quality Synthetic Empathy Data ☆17 · Updated 8 months ago
- Official repository of the MMDU dataset ☆97 · Updated last year
- [Preprint] On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification. ☆498 · Updated 2 weeks ago
- ☆59 · Updated last year
- HumanOmni ☆205 · Updated 8 months ago
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆33 · Updated last month
- [NAACL 2024] LaDiC: Are Diffusion Models Really Inferior to Autoregressive Counterparts for Image-to-text Generation? ☆42 · Updated last year
- Study notes on the official LLaVA code ☆29 · Updated last year
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆118 · Updated last year
- A project for tri-modal LLM benchmarking and instruction tuning. ☆50 · Updated 7 months ago
- Latest Advances on Reasoning of Multimodal Large Language Models (Multimodal R1 / Visual R1) 🍓 ☆35 · Updated 7 months ago
- The Next Step Forward in Multimodal LLM Alignment ☆186 · Updated 6 months ago
- [ACM MM 2025] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆95 · Updated this week
- Game-RL: Synthesizing Multimodal Verifiable Game Data to Boost VLMs' General Reasoning ☆105 · Updated last month
- Baichuan-Omni: Towards Capable Open-source Omni-modal LLM 🌊 ☆269 · Updated 9 months ago
- ☆84 · Updated last year
- Data and Code for CVPR 2025 paper "MMVU: Measuring Expert-Level Multi-Discipline Video Understanding" ☆75 · Updated 8 months ago
- [CVPR'2025] VoCo-LLaMA: Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆194 · Updated 5 months ago
- An Easy-to-use, Scalable and High-performance RLHF Framework designed for Multimodal Models ☆146 · Updated last month
- ☆109 · Updated 2 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆75 · Updated last year
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models. ☆69 · Updated 8 months ago
- [CVPR 2025] OmniMMI: A Comprehensive Multi-modal Interaction Benchmark in Streaming Video Contexts ☆19 · Updated 7 months ago
- [ICCV 2025] The official code of the paper "Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration R… ☆106 · Updated 4 months ago