ExponentialML / Video-BLIP2-Preprocessor
A simple script that reads a directory of videos, grabs a random frame from each, and automatically generates a prompt for it with BLIP-2
☆136 · Updated last year
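The description above maps onto a short captioning loop. Below is a minimal sketch of that idea, not the repo's actual script: it assumes OpenCV for frame grabbing and the Hugging Face `transformers` BLIP-2 checkpoint `Salesforce/blip2-opt-2.7b`; the `videos/` directory, the file-extension filter, and the `caption_random_frame` helper are illustrative assumptions.

```python
# Minimal sketch: caption one random frame per video with BLIP-2.
# NOTE: illustrative only -- model name, directory, and helper are assumptions,
# not the repository's actual code.
import os
import random

import cv2
import torch
from PIL import Image
from transformers import Blip2ForConditionalGeneration, Blip2Processor

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=dtype
).to(device)

def caption_random_frame(video_path: str) -> str | None:
    """Grab one random frame from the video and return a BLIP-2 caption for it."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    if total <= 0:
        cap.release()
        return None
    # Seek to a uniformly random frame index, then decode that single frame.
    cap.set(cv2.CAP_PROP_POS_FRAMES, random.randrange(total))
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return None
    # OpenCV decodes to BGR; BLIP-2 expects RGB.
    image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    inputs = processor(images=image, return_tensors="pt").to(device, dtype)
    ids = model.generate(**inputs, max_new_tokens=30)
    return processor.batch_decode(ids, skip_special_tokens=True)[0].strip()

for name in os.listdir("videos"):  # hypothetical input directory
    if name.lower().endswith((".mp4", ".avi", ".mov")):
        print(name, "->", caption_random_frame(os.path.join("videos", name)))
```

Sampling a single random frame keeps preprocessing cheap; captioning a few frames per video and keeping the most frequent result is a natural extension when one frame is unrepresentative.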
Alternatives and similar repositories for Video-BLIP2-Preprocessor
Users who are interested in Video-BLIP2-Preprocessor are comparing it to the libraries listed below.
- EILeV: Eliciting In-Context Learning in Vision-Language Models for Videos Through Curated Data Distributional Properties ☆124 · Updated 6 months ago
- ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation (TMLR 2024) ☆240 · Updated 11 months ago
- Official implementation of VideoDirectorGPT: Consistent Multi-scene Video Generation via LLM-Guided Planning (COLM 2024) ☆172 · Updated 9 months ago
- The official implementation of "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising" ☆299 · Updated last year
- A simple MagicAnimate pipeline including DensePose inference ☆36 · Updated last year
- Retrieval-Augmented Video Generation for Telling a Story ☆255 · Updated last year
- Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-Image Diffusion Models (ICLR 2024) ☆139 · Updated last year
- A new multi-shot video understanding benchmark, Shot2Story, with comprehensive video summaries and detailed shot-level captions ☆132 · Updated 4 months ago
- [IEEE TVCG 2024] Customized Video Generation Using Textual and Structural Guidance ☆193 · Updated last year
- ☆186 · Updated 10 months ago
- [IJCV'24] AutoStory: Generating Diverse Storytelling Images with Minimal Human Effort ☆151 · Updated 6 months ago
- Implementation of long video generation ☆78 · Updated last year
- The HD-VG-130M Dataset ☆117 · Updated last year
- AnimateDiff I2V version ☆186 · Updated last year
- (CVPR 2024) Official code for the paper "Towards Language-Driven Video Inpainting via Multimodal Large Language Models" ☆95 · Updated last year
- [NeurIPS 2024] VidProM: A Million-scale Real Prompt-Gallery Dataset for Text-to-Video Diffusion Models ☆150 · Updated 8 months ago
- Official implementation for "ControlVideo: Adding Conditional Control for One Shot Text-to-Video Editing" ☆229 · Updated last year
- ☆143 · Updated 11 months ago
- [CVPR 2024] Intelligent Grimm: Open-ended Visual Storytelling via Latent Diffusion Models ☆249 · Updated 6 months ago
- Official PyTorch implementation of "VidToMe: Video Token Merging for Zero-Shot Video Editing" (CVPR 2024) ☆219 · Updated 4 months ago
- InteractiveVideo: User-Centric Controllable Video Generation with Synergistic Multimodal Instructions ☆128 · Updated last year
- Implementation of the DiffusionOverDiffusion architecture presented in NUWA-XL, in the form of a ControlNet-like module on top of ModelScope text2… ☆85 · Updated 2 years ago
- Interactive Video Generation via Masked-Diffusion ☆81 · Updated last year
- Code for the paper "Pix2Video: Video Editing using Image Diffusion" ☆71 · Updated last year
- [NeurIPS 2024 Spotlight] Official implementation of the paper "MotionBooth: Motion-Aware Customized Text-to-Video Generation" ☆132 · Updated 7 months ago
- ☆148 · Updated last year
- Supercharged BLIP-2 that can handle videos ☆118 · Updated last year
- Official PyTorch implementation of "Synthesizing Coherent Story with Auto-Regressive Latent Diffusion Models" ☆199 · Updated last year
- [TOG 2024] StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter ☆238 · Updated 2 months ago
- I2V-Adapter: A General Image-to-Video Adapter for Video Diffusion Models ☆204 · Updated last year