emova-ollm / EMOVA
Official PyTorch implementation of EMOVA (CVPR 2025): https://arxiv.org/abs/2409.18042
☆61 · Updated 4 months ago
Alternatives and similar repositories for EMOVA
Users interested in EMOVA are comparing it to the libraries listed below.
- OpenOmni: Official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Alignment and Rea… ☆93 · Updated last month
- EchoInk-R1: Exploring Audio-Visual Reasoning in Multimodal LLMs via Reinforcement Learning [🔥The Exploration of R1 for General Audio-Vi… ☆48 · Updated 2 months ago
- Official repo for the ACL 2025 Findings paper "From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with Multi-modalities" ☆45 · Updated 2 weeks ago
- LMM approach that mitigates catastrophic forgetting, AAAI 2025 ☆44 · Updated 3 months ago
- ☆30 · Updated 2 months ago
- [ICCV'25] Explore the Limits of Omni-modal Pretraining at Scale ☆114 · Updated 11 months ago
- ☆167 · Updated 6 months ago
- MMR1: Advancing the Frontiers of Multimodal Reasoning ☆162 · Updated 4 months ago
- Baichuan-Omni: Towards Capable Open-source Omni-modal LLM 🌊 ☆267 · Updated 6 months ago
- ☆33 · Updated 2 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆209 · Updated 7 months ago
- Ming - facilitating advanced multimodal understanding and generation capabilities built upon the Ling LLM. ☆425 · Updated this week
- The Next Step Forward in Multimodal LLM Alignment ☆170 · Updated 3 months ago
- [CVPR 2025] Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆180 · Updated last month
- Official repository of the MMDU dataset ☆93 · Updated 10 months ago
- The official repo of One RL to See Them All: Visual Triple Unified Reinforcement Learning ☆309 · Updated 2 months ago
- A fully open-source implementation of a GPT-4o-like speech-to-speech video understanding model ☆22 · Updated 4 months ago
- A project for tri-modal LLM benchmarking and instruction tuning. ☆42 · Updated 4 months ago
- Official implementation of UnifiedReward & UnifiedReward-Think ☆497 · Updated last week
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆345 · Updated 2 weeks ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆267 · Updated last year
- Official implementation of the Law of Vision Representation in MLLMs ☆163 · Updated 8 months ago
- [MM 2024, oral] "Self-Supervised Visual Preference Alignment" https://arxiv.org/abs/2404.10501 ☆56 · Updated last year
- [arXiv 2024] Official code for MMTrail: A Multimodal Trailer Video Dataset with Language and Music Descriptions ☆30 · Updated 6 months ago
- VideoChat-R1: Enhancing Spatio-Temporal Perception via Reinforcement Fine-Tuning ☆169 · Updated 2 months ago
- 🔥🔥MLVU: Multi-task Long Video Understanding Benchmark ☆215 · Updated last month
- Long Context Transfer from Language to Vision ☆388 · Updated 4 months ago
- [ICCV 2025] The official code of the paper "Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration R… ☆104 · Updated last month
- DeepDubber-V1: Towards High Quality and Dialogue, Narration, Monologue Adaptive Movie Dubbing Via Multi-Modal Chain-of-Thoughts Reasoning… ☆24 · Updated last month
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆121 · Updated 2 months ago