alibaba / alimama-video-narrator
Research code for the ACL 2024 paper "Synchronized Video Storytelling: Generating Video Narrations with Structured Storyline"
☆38 · Updated 9 months ago
Alternatives and similar repositories for alimama-video-narrator
Users interested in alimama-video-narrator are comparing it to the repositories listed below.
- ☆155 · Updated 8 months ago
- [ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions ☆236 · Updated last year
- Official implementation of the paper AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding ☆85 · Updated 5 months ago
- LinVT: Empower Your Image-level Large Language Model to Understand Videos ☆82 · Updated 9 months ago
- ☆192 · Updated last year
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆123 · Updated 6 months ago
- What Is a Good Caption? A Comprehensive Visual Caption Benchmark for Evaluating Both Correctness and Thoroughness ☆23 · Updated 4 months ago
- [CVPR 2025] Online Video Understanding: OVBench and VideoChat-Online ☆66 · Updated last month
- [ICLR 2025] TRACE: Temporal Grounding Video LLM via Causal Event Modeling ☆126 · Updated last month
- A new multi-shot video understanding benchmark, Shot2Story, with comprehensive video summaries and detailed shot-level captions ☆155 · Updated 8 months ago
- Official implementation of the ICCV 2025 paper "Flash-VStream: Efficient Real-Time Understanding for Long Video Streams" ☆236 · Updated 2 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆128 · Updated 4 months ago
- Official code for "Modality Curation: Building Universal Embeddings for Advanced Multimodal Information Retrieval" ☆35 · Updated 3 months ago
- A lightweight, flexible Video-MLLM developed by the Tencent QQ Multimedia Research Team ☆74 · Updated 11 months ago
- ☆78 · Updated 7 months ago
- A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability ☆98 · Updated 10 months ago
- Narrative movie understanding benchmark ☆77 · Updated 3 months ago
- (ICCV 2025) Official repository of the paper "ViSpeak: Visual Instruction Feedback in Streaming Videos" ☆40 · Updated 3 months ago
- A large-scale dataset for training and evaluating models on dense text image generation ☆79 · Updated last week
- [ACL 2023] VSTAR is a multimodal dialogue dataset with scene and topic transition information ☆15 · Updated 11 months ago
- ☆138 · Updated last year
- [ICCV 2025] LVBench: An Extreme Long Video Understanding Benchmark ☆121 · Updated 3 months ago
- [EMNLP 2025 Findings] Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models ☆125 · Updated last month
- [CVPR 2024] Bridging the Gap: A Unified Video Comprehension Framework for Moment Retrieval and Highlight Detection ☆108 · Updated last year
- The official repo for "Vidi: Large Multimodal Models for Video Understanding and Editing" ☆140 · Updated last month
- 🌀 R2-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding (ECCV 2024) ☆89 · Updated last year
- Long Context Transfer from Language to Vision ☆394 · Updated 6 months ago
- Structured Video Comprehension of Real-World Shorts ☆204 · Updated 2 weeks ago
- [NeurIPS 2025] VideoChat-R1 & R1.5: Enhancing Spatio-Temporal Perception and Reasoning via Reinforcement Fine-Tuning ☆208 · Updated last week
- [ECCV 2024 🔥] Official implementation of the paper "ST-LLM: Large Language Models Are Effective Temporal Learners" ☆150 · Updated last year