md-mohaiminul / VideoRecap
☆187 · Updated last year
Alternatives and similar repositories for VideoRecap
Users interested in VideoRecap are comparing it to the repositories listed below
- ☆154 · Updated 7 months ago
- A new multi-shot video understanding benchmark Shot2Story with comprehensive video summaries and detailed shot-level captions. ☆150 · Updated 6 months ago
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆257 · Updated 2 weeks ago
- This is the official implementation of ICCV 2025 "Flash-VStream: Efficient Real-Time Understanding for Long Video Streams" ☆221 · Updated last month
- Supercharged BLIP-2 that can handle videos ☆120 · Updated last year
- ☆181 · Updated 2 weeks ago
- EILeV: Eliciting In-Context Learning in Vision-Language Models for Videos Through Curated Data Distributional Properties ☆128 · Updated 9 months ago
- Official code for GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation ☆142 · Updated 9 months ago
- ☆78 · Updated 5 months ago
- Repository for 23'MM accepted paper "Curriculum-Listener: Consistency- and Complementarity-Aware Audio-Enhanced Temporal Sentence Groundi… ☆50 · Updated last year
- FunQA benchmarks funny, creative, and magic videos for challenging tasks including timestamp localization, video description, reasoning, … ☆102 · Updated 8 months ago
- Long Context Transfer from Language to Vision ☆390 · Updated 5 months ago
- [ICML 2025] Official PyTorch implementation of LongVU ☆393 · Updated 3 months ago
- ☆182 · Updated 10 months ago
- [ACL 2024] GroundingGPT: Language-Enhanced Multi-modal Grounding Model ☆334 · Updated 9 months ago
- 🌀 R2-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding (ECCV 2024) ☆86 · Updated last year
- [CVPR 2024] VCoder: Versatile Vision Encoders for Multimodal Large Language Models ☆278 · Updated last year
- Official repository of the paper VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding ☆284 · Updated 2 weeks ago
- [ECCV 2024🔥] Official implementation of the paper "ST-LLM: Large Language Models Are Effective Temporal Learners" ☆150 · Updated 11 months ago
- ☆72 · Updated last year
- [NeurIPS 2024] VidProM: A Million-scale Real Prompt-Gallery Dataset for Text-to-Video Diffusion Models ☆160 · Updated 10 months ago
- VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling ☆459 · Updated 2 months ago
- ☆83 · Updated 2 years ago
- ☆138 · Updated 10 months ago
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆120 · Updated 4 months ago
- [ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions ☆231 · Updated last year
- Image Textualization: An Automatic Framework for Generating Rich and Detailed Image Descriptions (NeurIPS 2024) ☆164 · Updated last year
- Implementation of PaLI-3 from the paper "PaLI-3 Vision Language Models: Smaller, Faster, Stronger" ☆145 · Updated last month
- mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023) ☆229 · Updated 2 years ago
- (CVPR 2024) MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding ☆324 · Updated last year