AILab-CVC / TaleCrafter
[SIGGRAPH Asia 2023] An interactive story visualization tool that supports multiple characters
☆269 · Updated last year
Alternatives and similar repositories for TaleCrafter
Users interested in TaleCrafter are comparing it to the libraries listed below.
- Retrieval-Augmented Video Generation for Telling a Story ☆259 · Updated last year
- [IEEE TVCG 2024] Customized Video Generation Using Textual and Structural Guidance ☆195 · Updated last year
- Official PyTorch implementation for the paper "AnimateZero: Video Diffusion Models are Zero-Shot Image Animators" ☆352 · Updated 2 years ago
- Implementation of HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models ☆175 · Updated 2 years ago
- The official implementation for "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising" ☆304 · Updated 2 months ago
- Unofficial implementation of the paper "The Chosen One: Consistent Characters in Text-to-Image Diffusion Models" ☆269 · Updated last year
- [TOG 2024] StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter ☆264 · Updated 8 months ago
- Official implementation for the paper "LivePhoto: Real Image Animation with Text-guided Motion Control" ☆194 · Updated 3 weeks ago
- [ICLR 2024] Code for FreeNoise based on VideoCrafter ☆424 · Updated 4 months ago
- I2V-Adapter: A General Image-to-Video Adapter for Video Diffusion Models ☆205 · Updated 2 years ago
- Make-A-Protagonist: Generic Video Editing with An Ensemble of Experts ☆323 · Updated 2 years ago
- An unofficial PyTorch implementation of StyleDrop: Text-to-Image Generation in Any Style ☆225 · Updated 2 years ago
- Official implementation of "Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models" ☆401 · Updated 2 years ago
- [ECCV 2024] FreeInit: Bridging Initialization Gap in Video Diffusion Models ☆538 · Updated last year
- Official implementation for "ControlVideo: Adding Conditional Control for One Shot Text-to-Video Editing" ☆231 · Updated 2 years ago
- Implementation of the DiffusionOverDiffusion architecture presented in NUWA-XL, in the form of a ControlNet-like module on top of ModelScope text2… ☆86 · Updated 2 years ago
- AnimateDiff I2V version ☆186 · Updated last year
- Subject-Diffusion: Open Domain Personalized Text-to-Image Generation without Test-time Fine-tuning ☆314 · Updated last year
- Implementation of DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing ☆226 · Updated 2 years ago
- ☆143 · Updated last year
- [TMM 2025] StableIdentity: Inserting Anybody into Anywhere at First Sight 🔥 ☆260 · Updated last year
- Code for Text2Performer. Paper: Text2Performer: Text-Driven Human Video Generation ☆328 · Updated 2 years ago
- Official implementation for "ConceptLab: Creative Generation using Diffusion Prior Constraints" ☆254 · Updated 2 years ago
- A simple MagicAnimate pipeline, including DensePose inference ☆37 · Updated 2 years ago
- Put Your Face Everywhere in Seconds ☆313 · Updated 2 years ago
- Official implementation of VideoDirectorGPT: Consistent Multi-scene Video Generation via LLM-Guided Planning (COLM 2024) ☆177 · Updated last year
- Official implementation of "Inserting Anybody in Diffusion Models via Celeb Basis" ☆257 · Updated 2 years ago
- Official PyTorch code for the paper "ViCo: Detail-Preserving Visual Condition for Personalized Text-to-Image Generation" ☆244 · Updated last year
- AnimateDiff with training support ☆125 · Updated last year
- ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation [TMLR 2024] ☆256 · Updated last year