bytedance / video-SALMONN-2
video-SALMONN 2 is a powerful audio-visual large language model (LLM) that generates high-quality audio-visual video captions. It was developed by the Department of Electronic Engineering at Tsinghua University together with ByteDance.
☆45 · Updated last week
Alternatives and similar repositories for video-SALMONN-2
Users interested in video-SALMONN-2 are comparing it to the repositories listed below.
- Video dataset dedicated to portrait-mode video recognition. ☆52 · Updated 8 months ago
- ☆78 · Updated 5 months ago
- A Simple Framework of Small-scale LMMs for Video Understanding ☆80 · Updated 2 months ago
- LLaVA combined with the MAGVIT image tokenizer, training an MLLM without a vision encoder; unifies image understanding and generation. ☆37 · Updated last year
- Precision Search through Multi-Style Inputs ☆72 · Updated 3 weeks ago
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆114 · Updated 11 months ago
- The official implementation of our paper "Cockatiel: Ensembling Synthetic and Human Preferenced Training for Detailed Video Caption" ☆36 · Updated 3 months ago
- Repository for the ACM MM 2023 accepted paper "Curriculum-Listener: Consistency- and Complementarity-Aware Audio-Enhanced Temporal Sentence Groundi…" ☆50 · Updated last year
- Official implementation of the paper "AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding" ☆79 · Updated 4 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆124 · Updated 2 months ago
- An LMM that addresses catastrophic forgetting (AAAI 2025) ☆44 · Updated 4 months ago
- A lightweight, flexible Video-MLLM developed by the TencentQQ Multimedia Research Team. ☆74 · Updated 10 months ago
- ☆121 · Updated 2 months ago
- ☆187 · Updated last year
- [ICCV 2025] A Token-level Text Image Foundation Model for Document Understanding ☆111 · Updated 3 weeks ago
- The official implementation of the ICCV 2025 paper "Flash-VStream: Efficient Real-Time Understanding for Long Video Streams" ☆222 · Updated last month
- LinVT: Empower Your Image-level Large Language Model to Understand Videos ☆82 · Updated 7 months ago
- [ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions ☆231 · Updated last year
- [CVPR 2025] Online Video Understanding: OVBench and VideoChat-Online ☆59 · Updated last month
- Official code for "Modality Curation: Building Universal Embeddings for Advanced Multimodal Information Retrieval" ☆32 · Updated last month
- ☆155 · Updated 7 months ago
- The official repo for "Vidi: Large Multimodal Models for Video Understanding and Editing" ☆129 · Updated 2 months ago
- ☆87 · Updated last year
- WeThink: Toward General-purpose Vision-Language Reasoning via Reinforcement Learning ☆34 · Updated 2 months ago
- [NeurIPS 2024] Classification Done Right for Vision-Language Pre-Training ☆213 · Updated 5 months ago
- [NeurIPS 2024] VidProM: A Million-scale Real Prompt-Gallery Dataset for Text-to-Video Diffusion Models ☆160 · Updated 11 months ago
- [PR 2024] A large Cross-Modal Video Retrieval Dataset with Reading Comprehension ☆27 · Updated last year
- AliTok: Towards Sequence Modeling Alignment between Tokenizer and Autoregressive Model ☆44 · Updated last month
- Official PyTorch implementation of EMOVA in CVPR 2025 (https://arxiv.org/abs/2409.18042) ☆65 · Updated 5 months ago
- ☆35 · Updated 2 months ago