CharlieDreemur / AI-Video-Converter
AI Video Converter Based on ControlNet
☆74, updated 2 years ago
Alternatives and similar repositories for AI-Video-Converter
Users that are interested in AI-Video-Converter are comparing it to the libraries listed below
- ☆82, updated 2 years ago
- Wav2Lip UHQ Improvement with ControlNet 1.1 (☆74, updated 2 years ago)
- ☆114, updated last year
- just a little project for fast face swapping using one picture (☆132, updated 11 months ago)
- Orchestrating AI for stunning lip-synced videos. Effortless workflow, exceptional results, all in one place. (☆75, updated last month)
- Prompt Generator for Stable Diffusion/Midjourney on GPT-2 models (☆95, updated 2 years ago)
- ☆118, updated last year
- Forked version of AnimateDiff that attempts to add init images. If you are looking for the original repo, please go to https://github.com/guoyww/a… (☆151, updated 2 years ago)
- ☆135, updated 2 years ago
- Proof of concept for control landmarks in diffusion models! (☆89, updated 2 years ago)
- for those who want some speed (☆105, updated last year)
- This is an implementation of iperov's DeepFaceLab and DeepFaceLive in Stable Diffusion Web UI 1111 by AUTOMATIC1111. (☆109, updated 9 months ago)
- Unofficial pytorch implementation of TryOnDiffusion (☆63, updated 2 years ago)
- Fast running Live Portrait with TensorRT and ONNX models (☆167, updated last year)
- This is a HeadSwap project, not only face swap (☆34, updated 2 years ago)
- a CLI utility/library for AnimateDiff stable diffusion generation (☆262, updated this week)
- This is a wrapper of rem_bg for auto1111's stable diffusion gui. It can do clothing segmentation, background removal, and background mask… (☆80, updated last year)
- ☆82, updated 11 months ago
- MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model (☆21, updated last year)
- A colab notebook for video super resolution using GFPGAN (☆36, updated 2 years ago)
- ☆62, updated last year
- ☆53, updated last year
- Translate a video to some AI generated stuff, extension script for AUTOMATIC1111/stable-diffusion-webui. (☆53, updated 2 years ago)
- Easily run text-to-video diffusion with customized video length, fps, and dimensions on 4GB video cards or on CPU. (☆109, updated 10 months ago)
- ☆55, updated last year
- Image swapping, realtime swapping (☆34, updated 2 years ago)
- ☆43, updated last year
- ControlAnimate Library (☆48, updated last year)
- An implementation of Ebsynth for video stylization, and the original ebsynth for image stylization, as an importable python library! (☆119, updated last year)
- An experiment to use Stable Diffusion for the cloth virtual try-on task. Repo contains a modified Dreambooth training script (☆76, updated 2 years ago)