AILab-CVC / Animate-A-Story
Retrieval-Augmented Video Generation for Telling a Story
☆252 Updated last year
Alternatives and similar repositories for Animate-A-Story:
Users interested in Animate-A-Story are comparing it to the libraries listed below.
- [ICLR 2024] Code for FreeNoise based on VideoCrafter☆398 Updated 7 months ago
- [SIGGRAPH Asia 2023] An interactive story visualization tool that supports multiple characters☆261 Updated 10 months ago
- [IEEE TVCG 2024] Customized Video Generation Using Textual and Structural Guidance☆188 Updated 11 months ago
- The official implementation for "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising"☆295 Updated last year
- [SIGGRAPH Asia 2024 (Journal Track)] StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter☆215 Updated 7 months ago
- Official implementation for "ControlVideo: Adding Conditional Control for One Shot Text-to-Video Editing"☆225 Updated last year
- Make-A-Protagonist: Generic Video Editing with An Ensemble of Experts☆320 Updated last year
- Official Implementation of "Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models"☆381 Updated last year
- Image-to-video (I2V) version of AnimateDiff☆183 Updated 11 months ago
- Implementation of HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models☆167 Updated last year
- Zero-Shot Video Editing Using Off-The-Shelf Image Diffusion Models☆350 Updated last year
- [CVPR 2024] VideoBooth: Diffusion-based Video Generation with Image Prompts☆284 Updated 8 months ago
- [IJCV'24] AutoStory: Generating Diverse Storytelling Images with Minimal Human Effort☆146 Updated 2 months ago
- Implementation of DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing☆227 Updated last year
- Subject-Diffusion: Open-Domain Personalized Text-to-Image Generation without Test-time Fine-tuning☆289 Updated 7 months ago
- ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation (TMLR 2024)☆237 Updated 7 months ago
- I2V-Adapter: A General Image-to-Video Adapter for Video Diffusion Models☆206 Updated last year
- Code for the paper "Text2Performer: Text-Driven Human Video Generation"☆325 Updated last year
- [SIGGRAPH 2024] Motion I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling☆146 Updated 4 months ago
- [NeurIPS'23] "MagicBrush: A Manually Annotated Dataset for Instruction-Guided Image Editing"☆330 Updated 8 months ago
- Official implementation for the paper "LivePhoto: Real Image Animation with Text-guided Motion Control"☆186 Updated last year
- [ECCV 2024] FreeInit: Bridging Initialization Gap in Video Diffusion Models☆507 Updated last year
- Implementation of Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators☆85 Updated last year
- ☆143 Updated 7 months ago
- Official Implementation for "ConceptLab: Creative Generation using Diffusion Prior Constraints"☆249 Updated last year
- Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image Diffusion Models (ICLR 2024)☆135 Updated 8 months ago
- Video-P2P: Video Editing with Cross-attention Control☆393 Updated 6 months ago
- Official PyTorch implementation for the paper "AnimateZero: Video Diffusion Models are Zero-Shot Image Animators"☆349 Updated last year
- AnimateDiff with training support☆120 Updated 11 months ago
- [ICLR 2024] GitHub repo for "HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion"☆495 Updated last year