lumiere-video / lumiere-video.github.io
☆154 · Updated 10 months ago
Alternatives and similar repositories for lumiere-video.github.io
Users interested in lumiere-video.github.io are comparing it to the repositories listed below.
- MagicAvatar: Multimodal Avatar Generation and Animation ☆621 · Updated last year
- ☆728 · Updated last year
- ☆116 · Updated last year
- ☆219 · Updated last year
- A CLI utility/library for AnimateDiff Stable Diffusion generation ☆261 · Updated last week
- ☆245 · Updated last year
- ☆115 · Updated last year
- SSD-1B, an open-source text-to-image model that is 50% smaller and 60% faster than SDXL ☆176 · Updated last year
- ☆128 · Updated last year
- ☆404 · Updated 11 months ago
- [SIGGRAPH ASIA 2024 TCS] AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data ☆643 · Updated 7 months ago
- [ICLR 2024] SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction ☆939 · Updated 6 months ago
- An infinite number of monkeys randomly throwing paint at a canvas ☆308 · Updated last year
- ComfyUI nodes to edit videos using Genmo Mochi ☆292 · Updated 7 months ago
- Official implementation of "DreamPose: Fashion Image-to-Video Synthesis via Stable Diffusion" ☆1,003 · Updated last year
- ☆181 · Updated last year
- Official PyTorch implementation for the paper "AnimateZero: Video Diffusion Models are Zero-Shot Image Animators" ☆351 · Updated last year
- Faster LCM is a script that transfers image styles at 45 fps on an RTX 4090 and 33 fps on an A100 ☆95 · Updated last year
- ☆325 · Updated last year
- Finetune ModelScope's Text-to-Video model using Diffusers 🧨 (see the sketch after this list) ☆688 · Updated last year
- DesignEdit: Unify Spatial-Aware Image Editing via Training-free Inpainting with a Multi-Layered Latent Diffusion Framework ☆342 · Updated 5 months ago
- Official code for MotionCtrl [SIGGRAPH 2024] ☆1,430 · Updated 3 months ago
- Official implementation of the paper "AnyDoor: Zero-shot Object-level Image Customization" ☆146 · Updated last year
- [ECCV 2024] FreeInit: Bridging Initialization Gap in Video Diffusion Models ☆510 · Updated last year
- Source code for the SIGGRAPH 2024 paper "X-Portrait: Expressive Portrait Animation with Hierarchical Motion Attention" ☆510 · Updated 10 months ago
- Stable Video Diffusion (img2vid) as a Cog model ☆89 · Updated last year
- ☆87 · Updated last year
- Code for the paper "Text2Performer: Text-Driven Human Video Generation" ☆328 · Updated last year
- Code and data for "AnyV2V: A Tuning-Free Framework For Any Video-to-Video Editing Tasks" [TMLR 2024] ☆590 · Updated 7 months ago
- Cloth2Tex: A Customized Cloth Texture Generation Pipeline for 3D Virtual Try-On ☆491 · Updated last year
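
For context on the Diffusers-related entry above (the ModelScope Text-to-Video finetuning repo), below is a minimal sketch of loading ModelScope's text-to-video model through Hugging Face Diffusers. The model id `damo-vilab/text-to-video-ms-1.7b`, the prompt, and the inference settings are illustrative assumptions, not taken from any repository in this list; fine-tuning itself needs a training script and dataset beyond this loading step.

```python
# Minimal sketch: run ModelScope's text-to-video model with Hugging Face
# Diffusers. Model id, prompt, and settings are assumptions for illustration,
# not taken from any repository listed above.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b",  # assumed Hub model id
    torch_dtype=torch.float16,
)
pipe.to("cuda")  # a GPU is effectively required for video diffusion

result = pipe("a corgi running on the beach", num_inference_steps=25)
# Depending on the Diffusers version, .frames is either the frame list itself
# or a batch of videos; recent versions expect frames[0] here.
export_to_video(result.frames[0], "corgi.mp4")
```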