yukw777 / VideoBLIP
Supercharged BLIP-2 that can handle videos
☆123 · Updated 2 years ago
Alternatives and similar repositories for VideoBLIP
Users interested in VideoBLIP are comparing it to the libraries listed below.
- EILeV: Eliciting In-Context Learning in Vision-Language Models for Videos Through Curated Data Distributional Properties ☆131 · Updated last year
- FunQA benchmarks funny, creative, and magic videos for challenging tasks including timestamp localization, video description, reasoning, … ☆104 · Updated last year
- ☆73 · Updated last year
- ☆58 · Updated last year
- ☆52 · Updated 2 years ago
- Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) ☆57 · Updated 2 years ago
- T2VScore: Towards A Better Metric for Text-to-Video Generation ☆80 · Updated last year
- [ICLR 2024] LLM-grounded Video Diffusion Models (LVD): official implementation for the LVD paper ☆163 · Updated last year
- Code for the paper "GenHowTo: Learning to Generate Actions and State Transformations from Instructional Videos" published at CVPR 2024 ☆52 · Updated last year
- ☆201 · Updated last year
- Official Implementation for "MyVLM: Personalizing VLMs for User-Specific Queries" (ECCV 2024) ☆183 · Updated last year
- ☆189 · Updated last year
- [IJCV 2025] Paragraph-to-Image Generation with Information-Enriched Diffusion Model ☆105 · Updated 8 months ago
- ☆156 · Updated 10 months ago
- [NeurIPS 2023 D&B] VidChapters-7M: Video Chapters at Scale ☆200 · Updated 2 years ago
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ☆96 · Updated last year
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV 2023] ☆100 · Updated last year
- ☆140 · Updated last year
- Davidsonian Scene Graph (DSG) for Text-to-Image Evaluation (ICLR 2024) ☆102 · Updated last year
- [NeurIPS 2024] VidProM: A Million-scale Real Prompt-Gallery Dataset for Text-to-Video Diffusion Models ☆169 · Updated last year
- Learning to cut end-to-end pretrained modules ☆32 · Updated 7 months ago
- Official implementation of ICLR 2024: Kosmos-G: Generating Images in Context with Multimodal Large Language Models ☆74 · Updated last year
- [NeurIPS 2024 D&B Track] Official repo for "LVD-2M: A Long-take Video Dataset with Temporally Dense Captions" ☆73 · Updated last year
- ☆78 · Updated last year
- Official repo for "VideoScore: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation" [EMNLP 2024] ☆106 · Updated last week
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆151 · Updated 2 months ago
- [arXiv:2309.16669] Code release for "Training a Large Video Model on a Single Machine in a Day" ☆136 · Updated 3 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆134 · Updated 6 months ago
- VideoCC is a dataset containing (video-URL, caption) pairs for training video-text machine learning models. It is created using an automa… ☆78 · Updated 3 years ago
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆260 · Updated 4 months ago