emova-ollm / EMOVA
Official PyTorch implementation of EMOVA (CVPR 2025): https://arxiv.org/abs/2409.18042
☆71 · Updated 6 months ago
Alternatives and similar repositories for EMOVA
Users interested in EMOVA are comparing it to the libraries listed below.
- (NIPS 2025) OpenOmni: Official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Align… ☆100 · Updated 2 weeks ago
- This is for ACL 2025 Findings Paper: From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with Multi-modalities Models ☆61 · Updated last month
- EchoInk-R1: Exploring Audio-Visual Reasoning in Multimodal LLMs via Reinforcement Learning [🔥The Exploration of R1 for General Audio-Vi… ☆58 · Updated 4 months ago
- LMM solved catastrophic forgetting, AAAI 2025 ☆44 · Updated 5 months ago
- A project for tri-modal LLM benchmarking and instruction tuning. ☆48 · Updated 6 months ago
- ☆176 · Updated 8 months ago
- video-SALMONN 2 is a powerful audio-visual large language model (LLM) that generates high-quality audio-visual video captions, which is d… ☆80 · Updated 2 weeks ago
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆116 · Updated last year
- ☆32 · Updated 4 months ago
- (ICCV 2025) Official repository of paper "ViSpeak: Visual Instruction Feedback in Streaming Videos" ☆40 · Updated 3 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆128 · Updated 4 months ago
- [CVPR 2025] VoCo-LLaMA: This repo is the official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models". ☆189 · Updated 3 months ago
- Official implementation of paper AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding ☆85 · Updated 5 months ago
- Official repository of MMDU dataset ☆95 · Updated last year
- Baichuan-Omni: Towards Capable Open-source Omni-modal LLM 🌊 ☆269 · Updated 8 months ago
- Ming - facilitating advanced multimodal understanding and generation capabilities built upon the Ling LLM. ☆475 · Updated 2 weeks ago
- ☆35 · Updated last month
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆123 · Updated 6 months ago
- Accelerating the development of large multimodal models (LMMs) with one-click evaluation module - lmms-eval. ☆62 · Updated 2 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆211 · Updated 9 months ago
- Long Context Transfer from Language to Vision ☆394 · Updated 6 months ago
- MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources ☆197 · Updated 2 weeks ago
- The official repo of One RL to See Them All: Visual Triple Unified Reinforcement Learning ☆317 · Updated 4 months ago
- ☆78 · Updated 7 months ago
- The Next Step Forward in Multimodal LLM Alignment ☆181 · Updated 5 months ago
- [CVPR 2025] OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding? ☆88 · Updated 2 months ago
- [CVPR 2025] Dispider: Enabling Video LLMs with Active Real-Time Interaction via Disentangled Perception, Decision, and Reaction ☆137 · Updated 6 months ago
- ☆78 · Updated 5 months ago
- [NIPS 2025] VideoChat-R1 & R1.5: Enhancing Spatio-Temporal Perception and Reasoning via Reinforcement Fine-Tuning ☆208 · Updated last week
- Official repository of "ScaleCap: Inference-Time Scalable Image Captioning via Dual-Modality Debiasing" ☆57 · Updated 3 months ago