Zeqiang-Lai / Anything2Image
Generate image from anything with ImageBind and Stable Diffusion
☆193 · Updated last year
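The project's core idea is to embed any supported modality (audio, text, depth, images, …) with ImageBind and use that embedding to condition Stable Diffusion. The sketch below illustrates one plausible way to wire this together using the public ImageBind package and the diffusers `StableUnCLIPImg2ImgPipeline`; it is not the repository's actual code, and argument names, checkpoints, and the audio file path are assumptions that may differ from the real implementation.

```python
# Minimal sketch of the ImageBind + Stable Diffusion (unCLIP) idea:
# embed an input from any ImageBind modality (here, audio) and pass that
# embedding to a Stable unCLIP pipeline in place of a CLIP image embedding.
# Assumes facebookresearch/ImageBind and Hugging Face diffusers are installed;
# exact APIs may vary between versions. The .wav path is a placeholder.
import torch
from diffusers import StableUnCLIPImg2ImgPipeline
from imagebind import data
from imagebind.models import imagebind_model
from imagebind.models.imagebind_model import ModalityType

device = "cuda" if torch.cuda.is_available() else "cpu"

# ImageBind maps vision, text, audio, depth, thermal, and IMU into one space.
bind = imagebind_model.imagebind_huge(pretrained=True).eval().to(device)

# Stable unCLIP generates images conditioned on CLIP-style image embeddings.
pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip"
).to(device)

with torch.no_grad():
    inputs = {
        ModalityType.AUDIO: data.load_and_transform_audio_data(
            ["dog_barking.wav"], device  # placeholder audio clip
        )
    }
    # imagebind_huge produces 1024-dim embeddings, matching unCLIP's conditioner.
    audio_embed = bind(inputs)[ModalityType.AUDIO]

# Feed the ImageBind embedding where the pipeline expects an image embedding.
image = pipe(prompt="", image_embeds=audio_embed).images[0]
image.save("audio2image.png")
```

The same pattern extends to the other modalities by swapping in the corresponding `data.load_and_transform_*` helper and `ModalityType` key.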
Alternatives and similar repositories for Anything2Image:
Users interested in Anything2Image are comparing it to the repositories listed below.
- BindDiffusion: One Diffusion Model to Bind Them All ☆165 · Updated last year
- Fine-tuning "ImageBind One Embedding Space to Bind Them All" with LoRA ☆178 · Updated last year
- Zero-Shot Video Editing Using Off-The-Shelf Image Diffusion Models ☆351 · Updated last year
- A simple script that reads a directory of videos, grabs a random frame, and automatically discovers a prompt for it ☆133 · Updated last year
- Retrieval-Augmented Video Generation for Telling a Story ☆253 · Updated last year
- Mini-DALLE3: Interactive Text to Image by Prompting Large Language Models ☆306 · Updated last year
- [IJCV'24] AutoStory: Generating Diverse Storytelling Images with Minimal Human Effort ☆146 · Updated 2 months ago
- The official implementation for "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising". ☆295 · Updated last year
- Subject-Diffusion: Open Domain Personalized Text-to-Image Generation without Test-time Fine-tuning ☆289 · Updated 7 months ago
- ☆164 · Updated last year
- Video-P2P: Video Editing with Cross-attention Control ☆394 · Updated 7 months ago
- Implementation of DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing ☆227 · Updated last year
- LVDM: Latent Video Diffusion Models for High-Fidelity Long Video Generation ☆468 · Updated 3 months ago
- Make-A-Protagonist: Generic Video Editing with An Ensemble of Experts ☆320 · Updated last year
- ☆173 · Updated 7 months ago
- ☆140 · Updated last year
- Code for Text2Performer. Paper: Text2Performer: Text-Driven Human Video Generation ☆325 · Updated last year
- [ICLR 2024] Code for FreeNoise based on VideoCrafter ☆398 · Updated 7 months ago
- ☆174 · Updated 7 months ago
- [SIGGRAPH Asia 2024] ReVersion: Diffusion-Based Relation Inversion from Images ☆500 · Updated 2 months ago
- Official Implementation of "Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models" ☆382 · Updated last year
- The PyTorch implementation of our CVPR 2023 paper "Conditional Image-to-Video Generation with Latent Flow Diffusion Models" ☆459 · Updated 8 months ago
- [IEEE TVCG 2024] Customized Video Generation Using Textual and Structural Guidance ☆188 · Updated 11 months ago
- [ICLR 2024] Official PyTorch implementation of "ControlVideo: Training-free Controllable Text-to-Video Generation" ☆811 · Updated last year
- [IJCV] FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention ☆687 · Updated last month
- [CVPR'23] MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation ☆417 · Updated 8 months ago
- ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation (TMLR 2024) ☆237 · Updated 7 months ago
- Image Editing Anything ☆113 · Updated last year
- Relate Anything Model is capable of taking an image as input and utilizing SAM to identify the corresponding mask within the image. ☆449 · Updated last year
- Official PyTorch implementation of the paper "In-Context Learning Unlocked for Diffusion Models" ☆385 · Updated 10 months ago