Tencent / VITA
The official implementation of VITA, VITA-1.5, LongVITA, VITA-Audio, VITA-VLA, and VITA-E.
☆135Updated last month
Alternatives and similar repositories for VITA
Users who are interested in VITA are comparing it to the libraries listed below.
- video-SALMONN 2 is a powerful audio-visual large language model (LLM) that generates high-quality audio-visual video captions, which is d…☆132Updated this week
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale☆121Updated last year
- (ICCV 2025) Official repository of paper "ViSpeak: Visual Instruction Feedback in Streaming Videos"☆44Updated 5 months ago
- ☆37Updated 4 months ago
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs☆34Updated last month
- Official PyTorch implementation of EMOVA in CVPR 2025 (https://arxiv.org/abs/2409.18042)☆75Updated 9 months ago
- [NeurIPS 2025] VideoChat-R1 & R1.5: Enhancing Spatio-Temporal Perception and Reasoning via Reinforcement Fine-Tuning☆249Updated 2 months ago
- Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces☆87Updated 6 months ago
- [CVPR 2025] Dispider: Enabling Video LLMs with Active Real-Time Interaction via Disentangled Perception, Decision, and Reaction☆153Updated 9 months ago
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?"☆31Updated last year
- A Simple Framework of Small-scale LMMs for Video Understanding☆106Updated 6 months ago
- LMM that solves catastrophic forgetting (AAAI 2025)☆44Updated 8 months ago
- ☆36Updated 6 months ago
- Official code of "RoboOmni: Proactive Robot Manipulation in Omni-modal Context"☆75Updated last month
- [EMNLP 2025 Findings] Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models☆138Updated 4 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark☆137Updated 6 months ago
- Official code for MotionBench (CVPR 2025)☆61Updated 9 months ago
- [CVPR 2025] OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding?☆111Updated 5 months ago
- ☆183Updated 10 months ago
- The Next Step Forward in Multimodal LLM Alignment☆193Updated 7 months ago
- ☆144Updated 4 months ago
- ☆62Updated 3 months ago
- [CVPR'2025] VoCo-LLaMA: This repo is the official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models".☆202Updated 6 months ago
- ☆33Updated 8 months ago
- This is the official implementation of ICCV 2025 "Flash-VStream: Efficient Real-Time Understanding for Long Video Streams"☆255Updated 2 months ago
- Official repo for "Streaming Video Understanding and Multi-round Interaction with Memory-enhanced Knowledge" ICLR2025☆93Updated 9 months ago
- ☆80Updated 9 months ago
- Repo for paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs"☆48Updated 3 months ago
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models☆76Updated 7 months ago
- EchoInk-R1: Exploring Audio-Visual Reasoning in Multimodal LLMs via Reinforcement Learning [🔥The Exploration of R1 for General Audio-Vi…☆70Updated 7 months ago