bytedance / video-SALMONN-2
video-SALMONN 2 is a powerful audio-visual large language model (LLM) that generates high-quality audio-visual video captions. It is developed by the Department of Electronic Engineering at Tsinghua University and ByteDance.
☆146 Updated last week
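For orientation, here is a minimal sketch of what captioning with a model like this might look like through Hugging Face transformers. The checkpoint id, prompt format, and processor call are illustrative assumptions, not the repository's documented API; consult the repo itself for the actual loading and inference code.

```python
# Hypothetical sketch of driving an audio-visual captioning checkpoint
# via Hugging Face transformers. The model id, prompt, and processor
# interface below are assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoProcessor

MODEL_ID = "bytedance/video-SALMONN-2"  # hypothetical Hugging Face id

processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

# Assumed interface: the processor packs sampled video frames and the
# audio track, together with a captioning prompt, into model inputs.
inputs = processor(
    text="Describe this video, including what can be heard in the audio.",
    videos="example.mp4",
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```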
Alternatives and similar repositories for video-SALMONN-2
Users interested in video-SALMONN-2 are comparing it to the repositories listed below.
- Official PyTorch implementation of EMOVA in CVPR 2025 (https://arxiv.org/abs/2409.18042) ☆76 Updated 10 months ago
- ☆77 Updated 9 months ago
- ☆39 Updated 5 months ago
- ☆82 Updated 10 months ago
- (NeurIPS 2025) OpenOmni: Official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Align… ☆123 Updated 2 months ago
- [CVPR 2025] Dispider: Enabling Video LLMs with Active Real-Time Interaction via Disentangled Perception, Decision, and Reaction ☆166 Updated 10 months ago
- Official implementation of the paper AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding ☆88 Updated 9 months ago
- (ICCV 2025) Official repository of the paper "ViSpeak: Visual Instruction Feedback in Streaming Videos" ☆44 Updated 7 months ago
- Kling-Foley: Multimodal Diffusion Transformer for High-Quality Video-to-Audio Generation ☆63 Updated 7 months ago
- (ICLR 2026) An official implementation of "CapRL: Stimulating Dense Image Caption Capabilities via Reinforcement Learning" ☆182 Updated last week
- The official implementation of VITA, VITA-1.5, LongVITA, VITA-Audio, VITA-VLA, and VITA-E. ☆145 Updated 3 months ago
- ☆185 Updated 11 months ago
- ☆63 Updated 7 months ago
- ☆38 Updated 2 weeks ago
- ☆37 Updated 8 months ago
- DeepDubber-V1: Towards High Quality and Dialogue, Narration, Monologue Adaptive Movie Dubbing Via Multi-Modal Chain-of-Thoughts Reasoning… ☆28 Updated 4 months ago
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆122 Updated last year
- A Simple Framework of Small-scale LMMs for Video Understanding ☆108 Updated 7 months ago
- ☆141 Updated 3 months ago
- LongVALE: Vision-Audio-Language-Event Benchmark Towards Time-Aware Omni-Modal Perception of Long Videos (CVPR 2025) ☆56 Updated 7 months ago
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs ☆39 Updated last week
- [CVPR 2024] Seeing and Hearing: Open-domain Visual-Audio Generation with Diffusion Latent Aligners ☆155 Updated last year
- An LMM that solves catastrophic forgetting (AAAI 2025) ☆45 Updated 9 months ago
- Ming - facilitating advanced multimodal understanding and generation capabilities built upon the Ling LLM. ☆575 Updated 3 months ago
- Code for the Molmo2 Vision-Language Model ☆139 Updated last month
- A project for tri-modal LLM benchmarking and instruction tuning. ☆56 Updated 10 months ago
- ☆147 Updated 6 months ago
- Official implementation of the paper "Bind-Your-Avatar: Multi-Talking-Character Video Generation with Dynamic 3D-mask-based Embedding Rou… ☆33 Updated 4 months ago
- Official Code for "ARM-Thinker: Reinforcing Multimodal Generative Reward Models with Agentic Tool Use and Visual Reasoning" ☆79 Updated 2 months ago
- Video dataset dedicated to portrait-mode video recognition. ☆55 Updated 3 months ago