camenduru / one-shot-talking-face-colab
☆147 · Updated last year

Alternatives and similar repositories for one-shot-talking-face-colab:
Users interested in one-shot-talking-face-colab are comparing it to the libraries listed below.
- ☆40 · Updated last year
- ☆52 · Updated 2 years ago
- ☆54 · Updated last year
- Wav2Lip UHQ improvement with ControlNet 1.1 ☆73 · Updated last year
- A fork implementation of the SIGGRAPH 2020 paper "Interactive Video Stylization Using Few-Shot Patch-Based Training" ☆106 · Updated 2 years ago
- ☆44 · Updated last year
- A Gradio WebUI working with the Diffusers format of Stable Diffusion ☆80 · Updated 2 years ago
- ☆29 · Updated 2 years ago
- ☆62 · Updated last year
- Towards Robust Blind Face Restoration with Codebook Lookup Transformer ☆28 · Updated last year
- Code for "One-shot Talking Face Generation from Single-speaker Audio-Visual Correlation Learning" (AAAI 2022) ☆357 · Updated 2 years ago
- ☆97 · Updated last year
- Adaptation of Hugging Face's DreamBooth training script to support depth2img ☆101 · Updated 2 years ago
- ☆115 · Updated last year
- Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor… ☆33 · Updated last year
- ☆100 · Updated 2 years ago
- ☆43 · Updated last year
- AI video temporal coherence lab ☆56 · Updated 2 years ago
- Fork of ControlNet for 2 input channels ☆60 · Updated last year
- Let us control diffusion models! ☆36 · Updated last year
- ☆35 · Updated last year
- ☆35 · Updated last year
- ☆182 · Updated last year
- ☆53 · Updated last year
- ☆63 · Updated last year
- ☆83 · Updated 9 months ago
- 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch ☆26 · Updated 2 years ago
- ☆78 · Updated last year
- Alternative to Flawless AI's TrueSync: makes lips in a video match provided audio using Wav2Lip and GFPGAN ☆122 · Updated 8 months ago
- Faster LCM, a script that transfers image styles at 45 fps on an RTX 4090 and 33 fps on an A100 ☆95 · Updated last year