showlab / Tune-A-Video
[ICCV 2023] Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation
☆4,252 · Updated last year
Related projects
Alternatives and complementary repositories for Tune-A-Video
- [ICCV 2023 Oral] Text-to-Image Diffusion Models are Zero-Shot Video Generators ☆4,057 · Updated last year
- Official repo for consistency models. ☆6,168 · Updated 7 months ago
- VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models ☆4,566 · Updated 4 months ago
- Using low-rank adaptation (LoRA) to quickly fine-tune diffusion models. ☆7,070 · Updated 7 months ago
- Edit anything in images powered by segment-anything, ControlNet, StableDiffusion, etc. (ACM MM) ☆3,334 · Updated 8 months ago
- Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI… ☆6,504 · Updated 5 months ago
- T2I-Adapter ☆3,482 · Updated 4 months ago
- [ICCV 2023 Oral] "FateZero: Fusing Attentions for Zero-shot Text-based Video Editing" ☆1,113 · Updated last year
- InternGPT (iGPT) is an open source demo platform where you can easily showcase your AI models. Now it supports DragGAN, ChatGPT, ImageBin… ☆3,206 · Updated 3 months ago
- Unofficial implementation of DragGAN - "Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold" (DragGAN 全功… ☆4,998 · Updated last year
- Inpaint anything using Segment Anything and inpainting models. ☆6,601 · Updated 8 months ago
- Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts ☆4,455 · Updated 2 months ago
- Text To Video Synthesis Colab ☆1,466 · Updated 7 months ago
- Image to prompt with BLIP and CLIP ☆2,704 · Updated 6 months ago
- Nightly release of ControlNet 1.1 ☆4,742 · Updated 3 months ago
- Implementation of Make-A-Video, new SOTA text-to-video generator from Meta AI, in PyTorch ☆1,924 · Updated 6 months ago
- pix2pix3D: Generating 3D Objects from 2D User Inputs ☆1,663 · Updated last year
- Official implementation of AnimateDiff. ☆10,603 · Updated 3 months ago
- [NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once" ☆4,391 · Updated 3 months ago
- Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023) ☆1,866 · Updated 11 months ago
- Code and models for the paper "One Transformer Fits All Distributions in Multi-Modal Diffusion" ☆1,374 · Updated last year
- Official implementation of "DreamPose: Fashion Image-to-Video Synthesis via Stable Diffusion" ☆976 · Updated last year
- Open-Set Grounded Text-to-Image Generation ☆2,016 · Updated 8 months ago
- Segment Anything for Stable Diffusion WebUI ☆3,409 · Updated 6 months ago
- [CVPR 2024 Highlight][VideoChatGPT] ChatGPT with video understanding! And many more supported LMs such as miniGPT4, StableLM, and MOSS. ☆3,074 · Updated 2 months ago
- Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and … ☆15,175 · Updated 2 months ago