bycloudai / animegan2-pytorch-Windows
PyTorch implementation of AnimeGANv2
☆40 · Updated 4 years ago
Alternatives and similar repositories for animegan2-pytorch-Windows
Users interested in animegan2-pytorch-Windows are comparing it to the repositories listed below.
- [CVPR 2020] 3D Photography using Context-aware Layered Depth Inpainting ☆52 · Updated 3 years ago
- ☆69 · Updated 4 years ago
- ☆14 · Updated 3 years ago
- A fork implementation of the SIGGRAPH 2020 paper Interactive Video Stylization Using Few-Shot Patch-Based Training ☆106 · Updated 3 years ago
- Official PyTorch repo for JoJoGAN: One Shot Face Stylization with VIDEO and TRAINING ☆17 · Updated 3 years ago
- A sketch extractor for anime/illustration. ☆21 · Updated 4 years ago
- Official code for CVPR 2022 paper: Depth-Aware Generative Adversarial Network for Talking Head Video Generation ☆23 · Updated 3 years ago
- "Interactive Video Stylization Using Few-Shot Patch-Based Training" by O. Texler et al. in PyTorch Lightning ☆70 · Updated 3 years ago
- FILM: Frame Interpolation for Large Motion, in arXiv 2022. ☆29 · Updated 3 years ago
- Converts Hugging Face Diffusers Stable Diffusion models to Stable Diffusion ckpt files usable in most open-source tools ☆53 · Updated 2 years ago
- A wrapper of rembg for auto1111's Stable Diffusion GUI. It can do clothing segmentation, background removal, and background mask… ☆80 · Updated 2 years ago
- Export Blender camera animations to Deforum Diffusion notebook format. ☆64 · Updated last year
- Home of the Chunkmogrify project ☆16 · Updated 4 years ago
- StyleFlow: Attribute-conditioned Exploration of StyleGAN-generated Images using Conditional Continuous Normalizing Flows ☆51 · Updated 4 years ago
- Code for Pose-Controllable Talking Face Generation by Implicitly Modularized Audio-Visual Representation (CVPR 2021) ☆26 · Updated 4 years ago
- ☆46 · Updated 3 years ago
- Resources for creating infinite-zoom videos using Stable Diffusion; supports multiple prompts and is easy to use. ☆88 · Updated 2 years ago
- Normal Maps for Stable Diffusion WebUI ☆81 · Updated last year
- An extension to allow managing custom depth inputs to Stable Diffusion depth2img models for the stable-diffusion-webui repo. ☆72 · Updated 2 years ago
- a1111 SD WebUI extension version of ZoeDepth ☆66 · Updated 2 years ago
- [CVPR 2020] 3D Photography using Context-aware Layered Depth Inpainting ☆27 · Updated last year
- [CVPR 2022] Thin-Plate Spline Motion Model for Image Animation. ☆78 · Updated 3 years ago
- 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch ☆26 · Updated 3 years ago
- Asymmetric Tiling for stable-diffusion-webui ☆213 · Updated 3 years ago
- EbSynth is hard to use... Lots of turning videos into image sequences, resizing style images to fit the original frames, renaming the st… ☆40 · Updated 2 years ago
- A simple program to create easy animated effects from an image and convert them into a set number of exported frames. ☆39 · Updated 2 years ago
- ☆94 · Updated 3 years ago
- ☆13 · Updated 2 years ago
- Refactor of the Deforum Stable Diffusion notebook (featuring video_init) https://colab.research.google.com/github/deforum/stable-diffusio… ☆107 · Updated 3 years ago
- Automatic1111 Stable Diffusion WebUI extension; increases consistency between images by generating in the same latent space. ☆82 · Updated 2 years ago