transpchan / Live3D-v3
General Prior for Anime - 1
☆45 · Updated 2 years ago
Alternatives and similar repositories for Live3D-v3
Users interested in Live3D-v3 are comparing it to the libraries listed below.
- A tool for converting MikuMikuDance models (.pmx) with motions (.vmd) to UltraDensePose sequences. ☆52 · Updated 2 years ago
- MMD2depth uses MikuMikuDance models in Stable Diffusion 2.0 depth2img ☆29 · Updated 2 years ago
- Stabilizing SD-generated video ☆73 · Updated 2 years ago
- Generate images from an initial frame and text ☆37 · Updated last year
- ☆98 · Updated 2 years ago
- An example pipeline to use InstructPix2Pix and the associated fine-tuned motion module ☆31 · Updated last year
- API for the AnimeRun dataset ☆87 · Updated last year
- Implementation of AnimateDiff. ☆32 · Updated 2 years ago
- Talking head animation ☆27 · Updated last year
- Official code for "Joint Geometric-Semantic Driven Character Line Drawing Generation" ☆10 · Updated 2 years ago
- iPOKE: Poking a Still Image for Controlled Stochastic Video Synthesis ☆52 · Updated 2 years ago
- Text-Guided Generation of Full-Body Image with Preserved Reference Face for Customized Animation ☆24 · Updated last year
- Stylizing Video by Example (Jamriska et al., 2019) ☆47 · Updated last year
- Video-Driven Portrait Animation 🤢😷 ☆18 · Updated last year
- Official repo for "MoCha: Towards Movie-Grade Talking Character Synthesis" ☆38 · Updated last month
- ☆20 · Updated 2 years ago
- Implementation of GigaGAN in PyTorch ☆57 · Updated 2 years ago
- AI video temporal coherence Lab ☆56 · Updated 2 years ago
- An open-source implementation of Microsoft's VALL-E X zero-shot TTS model. A demo is available at https://plachtaa.github.io ☆16 · Updated last year
- ☆52 · Updated 2 years ago
- RIFE with IFUNet, FusionNet, and RefineNet ☆12 · Updated 3 years ago
- Train LoRA with Stable Diffusion models using Microsoft's official implementation. ☆32 · Updated 2 years ago
- FlexiFilm: Long Video Generation with Flexible Conditions ☆31 · Updated last year
- ControlNet control-image preprocessing library ☆15 · Updated 2 years ago
- ☆8 · Updated last year
- Uses a denoising diffusion probabilistic model for the anime colorization task. ☆50 · Updated 3 years ago
- ☆14 · Updated 2 years ago
- AnimateDiff implementation; includes a ControlNet pipeline. ☆19 · Updated last year
- Majesty Diffusion by @Dango233 and @apolinario (@multimodalart) ☆25 · Updated 2 years ago
- ☆27 · Updated 2 years ago