yoyo-nb / Thin-Plate-Spline-Motion-Model
[CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.
☆3,587 · Updated last year
Alternatives and similar repositories for Thin-Plate-Spline-Motion-Model
Users interested in Thin-Plate-Spline-Motion-Model are comparing it to the repositories listed below
- Wav2Lip UHQ extension for Automatic1111 ☆1,410 · Updated last year
- Official code for the CVPR 2022 paper: Depth-Aware Generative Adversarial Network for Talking Head Video Generation ☆995 · Updated last year
- Code for the Motion Representations for Articulated Animation paper ☆1,267 · Updated 3 months ago
- Live Speech Portraits: Real-Time Photorealistic Talking-Head Animation (SIGGRAPH Asia 2021) ☆1,278 · Updated 2 years ago
- GeneFace: Generalized and High-Fidelity 3D Talking Face Synthesis; ICLR 2023; official code ☆2,632 · Updated 10 months ago
- FILM: Frame Interpolation for Large Motion, in ECCV 2022 ☆3,047 · Updated last year
- [CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation ☆13,193 · Updated last year
- High-quality lip sync ☆1,134 · Updated last year
- Code repository for the CVPR 2023 paper "PanoHead: Geometry-Aware 3D Full-Head Synthesis in 360°" ☆1,958 · Updated last year
- An all-in-one solution for adding temporal stability to a Stable Diffusion render via an Automatic1111 extension ☆1,966 · Updated last year
- ☆1,802 · Updated last month
- ☆1,017 · Updated last year
- Image to prompt with BLIP and CLIP ☆2,894 · Updated last year
- This project builds on SadTalker to implement Wav2Lip lip synthesis for video: lip movements are generated from driving audio with a video file as input, and configurable enhancement of the synthesized lip (face) region improves the clarity of the generated lips. The DAIN deep-learning frame-interpolation algorithm adds frames to the generated video, filling in the lip-motion transitions between frames so that the synthesized lip… ☆1,977 · Updated 2 years ago
- A new one-shot face-swap approach for image and video domains ☆1,486 · Updated 6 months ago
- The source code of "DINet: Deformation Inpainting Network for Realistic Face Visually Dubbing on High Resolution Video" ☆1,091 · Updated last year
- Real-time Neural Radiance Talking Portrait Synthesis via Audio-spatial Decomposition ☆921 · Updated last year
- Auto1111 extension implementing text2video diffusion models (like ModelScope or VideoCrafter) using only Auto1111 webui dependencies ☆1,321 · Updated last year
- FaceXlib aims at providing ready-to-use face-related functions based on current SOTA open-source methods ☆949 · Updated last year
- Official implementation of "DreamPose: Fashion Image-to-Video Synthesis via Stable Diffusion" ☆1,008 · Updated last year
- 📖 A curated list of resources dedicated to talking faces ☆1,513 · Updated 8 months ago
- "Effective Whole-body Pose Estimation with Two-stages Distillation" (ICCV 2023, CV4Metaverse Workshop) ☆2,548 · Updated last year
- Official PyTorch implementation of "Text2LIVE: Text-Driven Layered Image and Video Editing" (ECCV 2022 Oral) ☆892 · Updated 2 years ago
- T2I-Adapter ☆3,739 · Updated last year
- AnimateDiff for AUTOMATIC1111 Stable Diffusion WebUI ☆3,371 · Updated 11 months ago
- An extension of the Wav2Lip repository for processing high-quality videos ☆540 · Updated 2 years ago
- The Mov2mov plugin for Automatic1111/stable-diffusion-webui ☆2,203 · Updated 7 months ago
- Official implementation of AnimateDiff ☆11,758 · Updated last year
- ☆1,903 · Updated last month
- An arbitrary face-swapping framework for images and videos with one single trained model! ☆5,027 · Updated last year