bytedance / video-SALMONN-2
video-SALMONN 2 is an audio-visual large language model (LLM) that generates high-quality audio-visual video captions, developed by the Department of Electronic Engineering at Tsinghua University together with ByteDance.
☆78 · Updated last week
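For orientation, below is a minimal sketch of how an audio-visual captioning model of this kind is commonly loaded and queried from Python with Hugging Face `transformers`. The checkpoint id, processor arguments, and prompt are illustrative assumptions, not the repository's documented interface; consult the official README for the actual usage.

```python
# Hypothetical usage sketch for an audio-visual captioning model such as
# video-SALMONN 2. The checkpoint id, processor inputs, and prompt below are
# assumptions for illustration, not the repository's documented API.
import torch
from transformers import AutoModelForCausalLM, AutoProcessor

MODEL_ID = "bytedance/video-SALMONN-2"  # assumed Hugging Face checkpoint id

# Repos with custom multimodal architectures typically ship their own
# modeling/processing code, loaded via trust_remote_code=True.
processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
).eval()

# Assumed interface: the processor packs video frames and the audio track
# into model tensors alongside the text prompt.
inputs = processor(
    text="Describe this video, including what is said and heard.",
    videos="example.mp4",
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```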
Alternatives and similar repositories for video-SALMONN-2
Users interested in video-SALMONN-2 are comparing it to the repositories listed below:
- ☆78 · Updated 6 months ago
- ☆78 · Updated 5 months ago
- Ming: facilitating advanced multimodal understanding and generation capabilities, built upon the Ling LLM. ☆470 · Updated last week
- LLaVA combined with the MAGVIT image tokenizer, training an MLLM without a vision encoder and unifying image understanding and generation. ☆37 · Updated last year
- ☆176 · Updated 7 months ago
- [CVPR 2025] Official PyTorch implementation of EMOVA (https://arxiv.org/abs/2409.18042). ☆71 · Updated 6 months ago
- [AAAI 2025] An LMM that tackles catastrophic forgetting. ☆44 · Updated 5 months ago
- A video dataset dedicated to portrait-mode video recognition. ☆52 · Updated 9 months ago
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale. ☆116 · Updated last year
- ☆128 · Updated 3 months ago
- ☆35 · Updated last month
- The official implementation of OmniFlow: Any-to-Any Generation with Multi-Modal Rectified Flows. ☆113 · Updated last month
- AliTok: Towards Sequence Modeling Alignment between Tokenizer and Autoregressive Model. ☆44 · Updated 3 months ago
- [NeurIPS 2025] OpenOmni: official implementation of "Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Align…". ☆100 · Updated 2 weeks ago
- [CVPR 2024] Seeing and Hearing: Open-domain Visual-Audio Generation with Diffusion Latent Aligners. ☆150 · Updated last year
- The official repo for "Vidi: Large Multimodal Models for Video Understanding and Editing". ☆139 · Updated last month
- [arXiv 2024] Official code for MMTrail: A Multimodal Trailer Video Dataset with Language and Music Descriptions. ☆33 · Updated 8 months ago
- ☆129 · Updated 3 months ago
- ☆56 · Updated 3 months ago
- [ICCV 2025] Official repository of the paper "ViSpeak: Visual Instruction Feedback in Streaming Videos". ☆40 · Updated 3 months ago
- A unified framework for controllable caption generation across images, videos, and audio. Supports multi-modal inputs and customizable ca… ☆51 · Updated 2 months ago
- Shot2Story: a new multi-shot video understanding benchmark with comprehensive video summaries and detailed shot-level captions. ☆155 · Updated 8 months ago
- The official implementation of our paper "Cockatiel: Ensembling Synthetic and Human Preferenced Training for Detailed Video Caption". ☆37 · Updated 4 months ago
- Kling-Foley: Multimodal Diffusion Transformer for High-Quality Video-to-Audio Generation. ☆59 · Updated 3 months ago
- ☆137 · Updated last year
- Official implementation of the paper "AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding". ☆83 · Updated 5 months ago
- A Simple Framework of Small-scale LMMs for Video Understanding. ☆94 · Updated 3 months ago
- A project for tri-modal LLM benchmarking and instruction tuning. ☆48 · Updated 6 months ago
- [NeurIPS 2025] HermesFlow: Seamlessly Closing the Gap in Multimodal Understanding and Generation. ☆66 · Updated 2 weeks ago
- [NeurIPS 2024] VidProM: A Million-scale Real Prompt-Gallery Dataset for Text-to-Video Diffusion Models. ☆164 · Updated last year