alibaba / alimama-video-narrator
Research code for the ACL 2024 paper "Synchronized Video Storytelling: Generating Video Narrations with Structured Storyline"
☆39 · Updated 10 months ago
Alternatives and similar repositories for alimama-video-narrator
Users interested in alimama-video-narrator are comparing it to the repositories listed below.
- ☆155 · Updated 9 months ago
- ☆196 · Updated last year
- [ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions ☆239 · Updated last year
- Official implementation of ICCV 2025 "Flash-VStream: Efficient Real-Time Understanding for Long Video Streams" ☆241 · Updated 2 weeks ago
- ☆78 · Updated 7 months ago
- Narrative movie understanding benchmark ☆76 · Updated 4 months ago
- LinVT: Empower Your Image-level Large Language Model to Understand Videos ☆82 · Updated 10 months ago
- Official implementation of the paper "AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding" ☆86 · Updated 6 months ago
- Long Context Transfer from Language to Vision ☆394 · Updated 7 months ago
- Structured Video Comprehension of Real-World Shorts ☆211 · Updated last month
- A versatile Video-LLM for long and short video understanding with superior temporal localization ability ☆100 · Updated 11 months ago
- What Is a Good Caption? A Comprehensive Visual Caption Benchmark for Evaluating Both Correctness and Thoroughness ☆24 · Updated 5 months ago
- [EMNLP 2025 Findings] Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models ☆131 · Updated 2 months ago
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆125 · Updated 6 months ago
- ☆138 · Updated last year
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆129 · Updated 4 months ago
- A large-scale dataset for training and evaluating a model's ability at dense text image generation ☆81 · Updated last month
- A lightweight, flexible Video-MLLM developed by the Tencent QQ Multimedia Research Team ☆74 · Updated last year
- The official repo for "Vidi: Large Multimodal Models for Video Understanding and Editing" ☆142 · Updated 2 months ago
- Precision Search through Multi-Style Inputs ☆72 · Updated 3 months ago
- [ICLR 2025] TRACE: Temporal Grounding Video LLM via Causal Event Modeling ☆132 · Updated 2 months ago
- [ECCV 2024🔥] Official implementation of the paper "ST-LLM: Large Language Models Are Effective Temporal Learners" ☆150 · Updated last year
- [CVPR 2025] Online Video Understanding: OVBench and VideoChat-Online ☆71 · Updated 3 weeks ago
- ☆155 · Updated last year
- Shot2Story, a new multi-shot video understanding benchmark with comprehensive video summaries and detailed shot-level captions ☆157 · Updated 9 months ago
- mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023) ☆229 · Updated 2 years ago
- Official code for "Modality Curation: Building Universal Embeddings for Advanced Multimodal Information Retrieval" ☆36 · Updated 3 months ago
- 🌀 R2-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding (ECCV 2024) ☆90 · Updated last year
- 🔥🔥 MLVU: Multi-task Long Video Understanding Benchmark ☆229 · Updated 2 months ago
- Repository for the MM 2023 accepted paper "Curriculum-Listener: Consistency- and Complementarity-Aware Audio-Enhanced Temporal Sentence Groundi…" ☆51 · Updated last year