bytedance / Shot2Story
A new multi-shot video understanding benchmark, Shot2Story, with comprehensive video summaries and detailed shot-level captions.
☆155 · Updated 8 months ago
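The benchmark pairs each video with a video-level summary and per-shot captions. The sketch below shows one way such annotations might be loaded; the JSON layout and the field names (`video_id`, `summary`, `shots`, `start_frame`, `end_frame`, `caption`) are assumptions for illustration, not the repository's actual schema or API.

```python
import json
from dataclasses import dataclass
from typing import List


@dataclass
class Shot:
    """One shot within a multi-shot video, with its own caption (assumed fields)."""
    start_frame: int
    end_frame: int
    caption: str


@dataclass
class VideoAnnotation:
    """A video-level summary plus per-shot captions (assumed fields)."""
    video_id: str
    summary: str
    shots: List[Shot]


def load_annotations(path: str) -> List[VideoAnnotation]:
    """Parse a JSON file of the assumed form:
    [{"video_id": "...", "summary": "...",
      "shots": [{"start_frame": 0, "end_frame": 120, "caption": "..."}]}]
    This is a hypothetical layout, not the official Shot2Story format.
    """
    with open(path, "r", encoding="utf-8") as f:
        raw = json.load(f)
    return [
        VideoAnnotation(
            video_id=item["video_id"],
            summary=item["summary"],
            shots=[Shot(**s) for s in item.get("shots", [])],
        )
        for item in raw
    ]
```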
Alternatives and similar repositories for Shot2Story
Users interested in Shot2Story are comparing it to the repositories listed below
- ☆194 · Updated last year
- ☆78 · Updated 7 months ago
- The official repo for "Vidi: Large Multimodal Models for Video Understanding and Editing" · ☆140 · Updated last month
- This is the official implementation of ICCV 2025 "Flash-VStream: Efficient Real-Time Understanding for Long Video Streams" · ☆236 · Updated 3 months ago
- ☆155 · Updated 8 months ago
- ☆182 · Updated 2 months ago
- [NeurIPS 2024] VidProM: A Million-scale Real Prompt-Gallery Dataset for Text-to-Video Diffusion Models · ☆162 · Updated last year
- Official implementation of VideoDirectorGPT: Consistent Multi-scene Video Generation via LLM-Guided Planning (COLM 2024) · ☆175 · Updated last year
- InteractiveVideo: User-Centric Controllable Video Generation with Synergistic Multimodal Instructions · ☆129 · Updated last year
- [ICML 2025] Official PyTorch implementation of LongVU · ☆399 · Updated 5 months ago
- Official code for GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation · ☆142 · Updated 11 months ago
- Long Context Transfer from Language to Vision · ☆394 · Updated 6 months ago
- Multimodal Models in Real World · ☆544 · Updated 7 months ago
- Supercharged BLIP-2 that can handle videos · ☆122 · Updated last year
- [IJCV 2025] Paragraph-to-Image Generation with Information-Enriched Diffusion Model · ☆104 · Updated 6 months ago
- Official repository of the paper VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding · ☆288 · Updated 2 months ago
- [IJCV'24] AutoStory: Generating Diverse Storytelling Images with Minimal Human Effort · ☆151 · Updated 10 months ago
- ☆185 · Updated last year
- Code release for the NeurIPS 2024 Spotlight paper "GenArtist: Multimodal LLM as an Agent for Unified Image Generation and Editing" · ☆152 · Updated 11 months ago
- ☆177 · Updated 2 years ago
- Official GPU implementation of the paper "PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance" · ☆130 · Updated 10 months ago
- [CVPR 2024] VCoder: Versatile Vision Encoders for Multimodal Large Language Models · ☆277 · Updated last year
- EILeV: Eliciting In-Context Learning in Vision-Language Models for Videos Through Curated Data Distributional Properties · ☆131 · Updated 11 months ago
- Code repository for T2V-Turbo and T2V-Turbo-v2 · ☆302 · Updated 8 months ago
- Tarsier -- a family of large-scale video-language models designed to generate high-quality video descriptions, together with g… · ☆494 · Updated 2 months ago
- Official implementation of the paper AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding · ☆85 · Updated 5 months ago
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models · ☆257 · Updated 2 months ago
- ☆82 · Updated 2 years ago
- 💡 VideoMind: A Chain-of-LoRA Agent for Long Video Reasoning · ☆265 · Updated this week
- Research code for the ACL 2024 paper "Synchronized Video Storytelling: Generating Video Narrations with Structured Storyline" · ☆38 · Updated 9 months ago