showlab / Tune-A-Video
[ICCV 2023] Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation
☆4,290 · Updated last year
Alternatives and similar repositories for Tune-A-Video:
Users interested in Tune-A-Video are comparing it to the repositories listed below:
- [ICCV 2023 Oral] Text-to-Image Diffusion Models are Zero-Shot Video Generators☆4,104 · Updated last year
- Official repo for consistency models.☆6,232 · Updated 9 months ago
- Using low-rank adaptation to quickly fine-tune diffusion models.☆7,153 · Updated 9 months ago
- [ICCV 2023 Oral] "FateZero: Fusing Attentions for Zero-shot Text-based Video Editing"☆1,127 · Updated last year
- T2I-Adapter☆3,556 · Updated 6 months ago
- VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models☆4,654 · Updated 6 months ago
- Edit anything in images, powered by Segment Anything, ControlNet, Stable Diffusion, etc. (ACM MM)☆3,350 · Updated 10 months ago
- Unofficial implementation of DragGAN - "Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold" (DragGAN 全功…☆4,987 · Updated last year
- Official implementation of AnimateDiff.☆10,863 · Updated 5 months ago
- InternGPT (iGPT) is an open-source demo platform where you can easily showcase your AI models. Now it supports DragGAN, ChatGPT, ImageBin…☆3,213 · Updated 4 months ago
- ImageBind: One Embedding Space to Bind Them All☆8,476 · Updated 5 months ago
- Inpaint anything using Segment Anything and inpainting models.☆6,801 · Updated 10 months ago
- Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts☆4,486 · Updated 3 months ago
- Official implementation of "Composer: Creative and Controllable Image Synthesis with Composable Conditions"☆1,548 · Updated last year
- An image prompt adapter designed to enable a pretrained text-to-image diffusion model to generate images from an image prompt.☆5,547 · Updated 6 months ago
- Nightly release of ControlNet 1.1☆4,860 · Updated 5 months ago
- Implementation of DragGAN: Interactive Point-based Manipulation on the Generative Image Manifold☆2,157 · Updated last year
- Image to prompt with BLIP and CLIP☆2,753 · Updated 8 months ago
- Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI…☆6,583 · Updated 7 months ago
- Using a model to generate prompts for model applications: a convenience tool for generating image prompts, supporting MidJourney, Stable Diffusion, and more.☆1,169 · Updated last year
- Segment Anything for Stable Diffusion WebUI☆3,442 · Updated 8 months ago
- Let us control diffusion models!☆31,207 · Updated 10 months ago
- An open-source tool for automatically generating short videos☆2,740 · Updated last year
- [CVPR 2024 Highlight][VideoChatGPT] ChatGPT with video understanding! And many more supported LMs, such as miniGPT4, StableLM, and MOSS.☆3,141 · Updated last month
- High-Resolution Image Synthesis with Latent Diffusion Models☆12,225 · Updated 10 months ago
- Open-Set Grounded Text-to-Image Generation☆2,061 · Updated 10 months ago
- Text To Video Synthesis Colab☆1,482 · Updated 9 months ago
- ☆7,720 · Updated 9 months ago
- Implementation of Make-A-Video, the new SOTA text-to-video generator from Meta AI, in PyTorch☆1,943 · Updated 8 months ago