showlab / Tune-A-Video
[ICCV 2023] Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation
☆4,247 · Updated last year
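For context, the snippet below is a hedged inference sketch in the spirit of the project's README: it loads the 3D UNet produced by one-shot tuning on a single reference video, plugs it into the Tune-A-Video pipeline on top of a pretrained Stable Diffusion checkpoint, and samples a short clip from a new prompt. The module paths, checkpoint locations, and the `save_videos_grid` helper are recalled from the repository and should be treated as assumptions rather than a verified API.

```python
# Hedged sketch of Tune-A-Video inference; import paths, checkpoint locations,
# and helper names are assumptions based on the project's README, not a verified API.
import torch
from tuneavideo.models.unet import UNet3DConditionModel                  # assumed module path
from tuneavideo.pipelines.pipeline_tuneavideo import TuneAVideoPipeline  # assumed module path
from tuneavideo.util import save_videos_grid                             # assumed helper

pretrained_model_path = "./checkpoints/stable-diffusion-v1-4"  # base image-diffusion weights (assumed location)
tuned_model_path = "./outputs/man-skiing"                      # result of one-shot tuning on one video (assumed)

# Only the UNet is tuned on the reference video; the VAE, text encoder,
# and scheduler come from the pretrained image model.
unet = UNet3DConditionModel.from_pretrained(
    tuned_model_path, subfolder="unet", torch_dtype=torch.float16
).to("cuda")
pipe = TuneAVideoPipeline.from_pretrained(
    pretrained_model_path, unet=unet, torch_dtype=torch.float16
).to("cuda")

prompt = "spider man is skiing"
video = pipe(prompt, video_length=24, height=512, width=512,
             num_inference_steps=50, guidance_scale=12.5).videos
save_videos_grid(video, f"./{prompt}.gif")
```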
Related projects
Alternatives and complementary repositories for Tune-A-Video
- [ICCV 2023 Oral] Text-to-Image Diffusion Models are Zero-Shot Video Generators (☆4,042 · Updated last year)
- VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models (☆4,556 · Updated 3 months ago)
- Edit anything in images powered by Segment Anything, ControlNet, Stable Diffusion, etc. (ACM MM) (☆3,319 · Updated 8 months ago)
- Inpaint anything using Segment Anything and inpainting models. (☆6,556 · Updated 8 months ago)
- T2I-Adapter (☆3,466 · Updated 4 months ago)
- InternGPT (iGPT) is an open source demo platform where you can easily showcase your AI models. Now it supports DragGAN, ChatGPT, ImageBin… (☆3,205 · Updated 2 months ago)
- Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI… (☆6,482 · Updated 5 months ago)
- Official repo for consistency models. (☆6,152 · Updated 7 months ago)
- Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts (☆4,448 · Updated last month)
- Using Low-rank adaptation (LoRA) to quickly fine-tune diffusion models; see the sketch after this list. (☆7,048 · Updated 7 months ago)
- [ICCV 2023 Oral] "FateZero: Fusing Attentions for Zero-shot Text-based Video Editing"☆1,110Updated last year
- Unofficial Implementation of DragGAN - "Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold" (DragGAN 全功…) (☆4,995 · Updated last year)
- [CVPR 2024 Highlight][VideoChatGPT] ChatGPT with video understanding! And many more supported LMs such as MiniGPT-4, StableLM, and MOSS. (☆3,057 · Updated 2 months ago)
- Implementation of DragGAN: Interactive Point-based Manipulation on the Generative Image Manifold (☆2,163 · Updated last year)
- Image to prompt with BLIP and CLIP (☆2,693 · Updated 5 months ago)
- Official implementation of AnimateDiff. (☆10,548 · Updated 3 months ago)
- Open-source and strong foundation image recognition models. (☆2,860 · Updated 3 months ago)
- Text-to-3D & Image-to-3D & Mesh Exportation with NeRF + Diffusion. (☆8,284 · Updated 10 months ago)
- Nightly release of ControlNet 1.1 (☆4,721 · Updated 3 months ago)
- Text To Video Synthesis Colab (☆1,457 · Updated 7 months ago)
- Open-Set Grounded Text-to-Image Generation (☆2,008 · Updated 8 months ago)
- Official implementation of "Composer: Creative and Controllable Image Synthesis with Composable Conditions" (☆1,537 · Updated 10 months ago)
- Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and … (☆15,104 · Updated 2 months ago)
- [NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once" (☆4,377 · Updated 2 months ago)
- The image prompt adapter is designed to enable a pretrained text-to-image diffusion model to generate images with an image prompt. (☆5,231 · Updated 4 months ago)
- Segment Anything for Stable Diffusion WebUI (☆3,399 · Updated 6 months ago)
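The low-rank adaptation (LoRA) entry in the list above refers to the general trick of freezing a pretrained model's weights and training only a small low-rank update. The snippet below is a minimal, library-free sketch of that idea applied to a single linear layer, such as an attention projection inside a diffusion UNet; the `LoRALinear` class and the layer sizes are illustrative placeholders, not code from any of the listed repositories.

```python
# Minimal illustration of LoRA: freeze a pretrained linear layer and learn only a
# low-rank update B @ A. The class name and dimensions are illustrative placeholders.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                       # original weights stay frozen
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))  # zero-init: no change at start
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen projection plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

# Example: wrap a 320x320 projection (a typical attention width in a Stable Diffusion UNet).
proj = LoRALinear(nn.Linear(320, 320), rank=4)
trainable = sum(p.numel() for p in proj.parameters() if p.requires_grad)
total = sum(p.numel() for p in proj.parameters())
print(f"trainable {trainable} / total {total}")  # 2,560 trainable vs. 105,280 total
```

During fine-tuning only `lora_a` and `lora_b` receive gradients, so checkpoints stay small and the base model can be restored by simply dropping the adapter.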