bytedance / Shot2Story
A new multi-shot video understanding benchmark Shot2Story with comprehensive video summaries and detailed shot-level captions.
☆168 · Updated Jan 30, 2025
Alternatives and similar repositories for Shot2Story
Users interested in Shot2Story are comparing it to the repositories listed below.
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … · ☆128 · Updated Apr 4, 2025
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding · ☆409 · Updated May 8, 2025
- A lightweight, flexible Video-MLLM developed by the TencentQQ Multimedia Research Team · ☆74 · Updated Oct 14, 2024
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding · ☆684 · Updated Jan 29, 2025
- [CVPR 2024] Context-Guided Spatio-Temporal Video Grounding · ☆66 · Updated Jun 28, 2024
- [AAAI 2025] VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding · ☆126 · Updated Dec 10, 2024
- The official repository of "Video assistant towards large language model makes everything easy" · ☆232 · Updated Dec 24, 2024
- ☆32 · Updated Jul 29, 2024
- [CVPR 2024 Highlight] Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments" · ☆294 · Updated Jun 13, 2024
- ☆80 · Updated Nov 24, 2024
- https://avocado-captioner.github.io/ · ☆28 · Updated Oct 16, 2025
- ✨✨ [CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis · ☆730 · Updated Dec 8, 2025
- ☆155 · Updated Oct 31, 2024
- [ICLR 2026] VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling · ☆505 · Updated Nov 18, 2025
- ☆12 · Updated Jan 10, 2025
- 🔥🔥🔥 [IEEE TCSVT] Latest Papers, Codes and Datasets on Vid-LLMs · ☆3,076 · Updated Dec 20, 2025
- Awesome papers & datasets specifically focused on long-term videos · ☆354 · Updated Oct 9, 2025
- video-SALMONN 2 is a powerful audio-visual large language model (LLM) that generates high-quality audio-visual video captions, which is d… · ☆153 · Updated this week
- VideoLLM-online: Online Video Large Language Model for Streaming Video (CVPR 2024) · ☆639 · Updated Nov 26, 2025
- Official implementation of Kangaroo: A Powerful Video-Language Model Supporting Long-context Video Input · ☆67 · Updated Aug 30, 2024
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models · ☆261 · Updated Aug 5, 2025
- Official implementation for the paper "Learning Grounded Vision-Language Representation for Versatile Understanding in Untrimmed Videos" · ☆28 · Updated Dec 8, 2023
- Official implementation of HawkEye: Training Video-Text LLMs for Grounding Text in Videos · ☆46 · Updated Apr 29, 2024
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark · ☆138 · Updated Jun 4, 2025
- [CVPR 2022] Learning from Untrimmed Videos: Self-Supervised Video Representation Learning with Hierarchical Consistency · ☆18 · Updated Aug 10, 2022
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs · ☆54 · Updated Mar 9, 2025
- [ICLR 2025] TRACE: Temporal Grounding Video LLM via Causal Event Modeling · ☆143 · Updated Aug 22, 2025
- [ECCV 2024 🔥] Official implementation of the paper "ST-LLM: Large Language Models Are Effective Temporal Learners" · ☆150 · Updated Sep 10, 2024
- Official repository of the paper VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding · ☆293 · Updated Aug 5, 2025
- WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal LLMs · ☆38 · Updated Jan 26, 2026
- A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability · ☆106 · Updated Nov 28, 2024
- Multi-modality pre-training · ☆509 · Updated May 8, 2024
- Official implementation for "A Simple LLM Framework for Long-Range Video Question-Answering" · ☆106 · Updated Oct 27, 2024
- Official repository for the paper PLLaVA · ☆676 · Updated Jul 28, 2024
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers · ☆673 · Updated Oct 25, 2024
- The code for "VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by VIdeo SpatioTemporal Augmentation" [CVPR 2025] · ☆21 · Updated Feb 27, 2025
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content · ☆603 · Updated Oct 6, 2024
- Learning to cut end-to-end pretrained modules · ☆35 · Updated Apr 17, 2025
- A repo for generating random NFTs with metadata 100% on chain! · ☆37 · Updated Mar 8, 2024