de-id / diffusers-papers
Diffusion Models papers (☆20, updated last year)
Alternatives and similar repositories for diffusers-papers
Users interested in diffusers-papers are comparing it to the repositories listed below:
- This seminar will focus on the latest developments in the field of diffusion models, particularly video diffusion models. Topics will inc… (☆14, updated 7 months ago)
- Use D-ID's live streaming API to stream a talking presenter (☆198, updated last week)
- [ECCV 2022] Official PyTorch implementation of the paper "Graph Neural Network for Cell Tracking in Microscopy Videos" (☆68, updated 2 years ago)
- PyTorch implementation for the paper "Emotionally Enhanced Talking Face Generation" (ICCVW'23 and ACM-MMW'23) (☆367, updated 4 months ago)
- [ECCV 2022] StyleHEAT: A framework for high-resolution editable talking face generation (☆651, updated 2 years ago)
- Avatar generation for characters and game assets using deep fakes (☆220, updated 9 months ago)
- The official code of the ICCV 2023 work "Implicit Identity Representation Conditioned Memory Compensation Network for Talking Head video G…" (☆252, updated last year)
- lipsync is a simple and updated Python library for lip synchronization, based on Wav2Lip. It synchronizes lips in videos and images based… (☆124, updated 4 months ago)
- Official PyTorch implementation of "Neural Head Avatars from Monocular RGB Videos" (☆553, updated 2 years ago)
- Official PyTorch implementation of the paper "HyperLips: Hyper Control Lips with High Resolution Decoder for Talking Face Generation" (☆208, updated last year)
- ☆32, updated 3 months ago
- Summary of publicly available resources such as code, datasets, and scientific papers for the FLAME 3D head model (☆526, updated last week)
- [CVPR 2023] OTAvatar: One-shot Talking Face Avatar with Controllable Tri-plane Rendering (☆323, updated last year)
- ☆123, updated last year
- ☆11, updated 2 years ago
- DiffPoseTalk: Speech-Driven Stylistic 3D Facial Animation and Head Pose Generation via Diffusion Models (☆290, updated 2 months ago)
- Speech to Facial Animation using GANs (☆40, updated 3 years ago)
- ☆52, updated last year
- Wav2Lip-Emotion extends Wav2Lip to modify facial expressions of emotions via L1 reconstruction and pre-trained emotion objectives. We als… (☆96, updated 3 years ago)
- ☆162, updated last year
- [CVPR 2023] The implementation of "DiffTalk: Crafting Diffusion Models for Generalized Audio-Driven Portraits Animation" (☆465, updated 10 months ago)
- [CVPR 2024] FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models (☆221, updated last year)
- PyTorch implementation of the paper "One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing" (☆837, updated 3 years ago)
- Alternative to Flawless AI's TrueSync: makes lips in video match provided audio using Wav2Lip and GFPGAN (☆124, updated 10 months ago)
- Code for the SIGGRAPH 2023 conference paper "StyleAvatar: Real-time Photo-realistic Portrait Avatar from a Single Video" (☆468, updated last year)
- PyTorch implementation of the WACV 2023 paper "Cross-identity Video Motion Retargeting with Joint Transformation and Synthesis" (☆148, updated last year)
- Code for the paper "Audio-Driven Emotional Video Portraits" (☆307, updated 3 years ago)
- Updated fork of wav2lip-hq allowing use of current ESRGAN models (☆54, updated last year)
- Papers about face reenactment and talking face generation (☆451, updated last year)
- 3D face model that can generate high-quality mesh and texture (☆283, updated 10 months ago)