bytedance / video-SALMONN-2
video-SALMONN 2 is a powerful audio-visual large language model (LLM) that generates high-quality audio-visual video captions. It is developed by the Department of Electronic Engineering at Tsinghua University and ByteDance.
☆136 · Updated 3 weeks ago
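As a quick illustration of how a captioning model of this kind is typically driven, here is a minimal inference sketch using the Hugging Face transformers custom-code loading path. The model ID, the prompt wording, and the processor's handling of video/audio inputs are assumptions for illustration, not the repo's documented API.

```python
# Hypothetical inference sketch for an audio-visual captioning LLM.
# The model ID, prompt, and input handling below are assumptions for
# illustration; consult the video-SALMONN-2 repo for the real API.
import torch
from transformers import AutoProcessor, AutoModelForCausalLM

MODEL_ID = "bytedance/video-SALMONN-2"  # hypothetical Hugging Face model ID

# trust_remote_code lets transformers load the repo's custom model class.
processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

# Assumed processor interface: a text prompt plus a video clip whose audio
# track the model consumes alongside the visual frames.
inputs = processor(
    text="Describe this video, including what you hear.",
    videos="example_clip.mp4",
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256)

caption = processor.batch_decode(output_ids, skip_special_tokens=True)[0]
print(caption)
```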
Alternatives and similar repositories for video-SALMONN-2
Users interested in video-SALMONN-2 are comparing it to the libraries listed below.
- ☆77 · Updated 8 months ago
- ☆39 · Updated 4 months ago
- Official PyTorch implementation of EMOVA (CVPR 2025, https://arxiv.org/abs/2409.18042) ☆75 · Updated 10 months ago
- (ICCV 2025) Official repository of the paper "ViSpeak: Visual Instruction Feedback in Streaming Videos" ☆43 · Updated 6 months ago
- An official implementation of "CapRL: Stimulating Dense Image Caption Capabilities via Reinforcement Learning" ☆172 · Updated 2 weeks ago
- ☆82 · Updated 10 months ago
- ☆145 · Updated 5 months ago
- [CVPR 2025] Dispider: Enabling Video LLMs with Active Real-Time Interaction via Disentangled Perception, Decision, and Reaction ☆155 · Updated 9 months ago
- ☆62 · Updated 6 months ago
- ☆185 · Updated 11 months ago
- (NeurIPS 2025) OpenOmni: Official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Align… ☆120 · Updated 2 months ago
- A project for tri-modal LLM benchmarking and instruction tuning. ☆53 · Updated 9 months ago
- The official implementation of VITA, VITA-1.5, LongVITA, VITA-Audio, VITA-VLA, and VITA-E. ☆140 · Updated 2 months ago
- Kling-Foley: Multimodal Diffusion Transformer for High-Quality Video-to-Audio Generation ☆62 · Updated 6 months ago
- ☆34 · Updated last week
- Ming: facilitating advanced multimodal understanding and generation capabilities built upon the Ling LLM. ☆569 · Updated 2 months ago
- Video dataset dedicated to portrait-mode video recognition. ☆55 · Updated 3 months ago
- A unified framework for controllable caption generation across images, videos, and audio. Supports multi-modal inputs and customizable ca… ☆52 · Updated 5 months ago
- Official implementation of the paper "AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding" ☆88 · Updated 8 months ago
- Official implementation of "Open-o3 Video: Grounded Video Reasoning with Explicit Spatio-Temporal Evidence" ☆127 · Updated 3 weeks ago
- [CVPR 2024] Seeing and Hearing: Open-domain Visual-Audio Generation with Diffusion Latent Aligners ☆155 · Updated last year
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆121 · Updated last year
- UnifiedMLLM: Enabling Unified Representation for Multi-modal Multi-tasks With Large Language Model ☆22 · Updated last year
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆36 · Updated last month
- LongVALE: Vision-Audio-Language-Event Benchmark Towards Time-Aware Omni-Modal Perception of Long Videos (CVPR 2025) ☆54 · Updated 7 months ago
- A new multi-shot video understanding benchmark, Shot2Story, with comprehensive video summaries and detailed shot-level captions. ☆164 · Updated 11 months ago
- ☆36 · Updated 7 months ago
- Official code for "ARM-Thinker: Reinforcing Multimodal Generative Reward Models with Agentic Tool Use and Visual Reasoning" ☆73 · Updated last month
- ☆140 · Updated 3 months ago
- [arXiv 2024] Official code for MMTrail: A Multimodal Trailer Video Dataset with Language and Music Descriptions ☆33 · Updated 11 months ago