thuhcsi / S2G-MDDiffusion
☆116 · Updated last year
Alternatives and similar repositories for S2G-MDDiffusion
Users that are interested in S2G-MDDiffusion are comparing it to the libraries listed below
- [CVPR'24] DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven Holistic 3D Expression and Gesture Generation ☆169 · Updated last year
- [Accepted by TPAMI] Human Motion Video Generation: A Survey (https://www.techrxiv.org/users/836049/articles/1228135-human-motion-video-gen… ☆218 · Updated this week
- This is the official inference code of PD-FGC ☆93 · Updated last year
- ☆56 · Updated 2 weeks ago
- ☆198 · Updated 10 months ago
- DiffSpeaker: Speech-Driven 3D Facial Animation with Diffusion Transformer ☆162 · Updated last year
- [ECCV 2024] Dyadic Interaction Modeling for Social Behavior Generation ☆58 · Updated 3 months ago
- Using Claude Opus to reverse engineer code from MegaPortraits: One-shot Megapixel Neural Head Avatars ☆93 · Updated 8 months ago
- This is the official source for our ACM MM 2023 paper "SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend 3D Talking … ☆137 · Updated last year
- Latent Diffusion Transformer for Talking Video Synthesis ☆59 · Updated 8 months ago
- [ICASSP'25] DEGSTalk: Decomposed Per-Embedding Gaussian Fields for Hair-Preserving Talking Face Synthesis ☆48 · Updated 6 months ago
- ☆91 · Updated last week
- ☆44 · Updated 3 weeks ago
- ☆161 · Updated 2 years ago
- [CVPR 2024] FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models ☆225 · Updated last year
- ☆20 · Updated 9 months ago
- Official code for the ICCV 2023 paper "Efficient Emotional Adaptation for Audio-Driven Talking-Head Generation" ☆294 · Updated 2 months ago
- [CVPR'25] InsTaG: Learning Personalized 3D Talking Head from Few-Second Video ☆134 · Updated 2 weeks ago
- [NeurIPS 2024] The official code of MambaTalk: Efficient Holistic Gesture Synthesis with Selective State Space Models ☆64 · Updated last week
- [ICCV 2023] The official repo for the paper "LivelySpeaker: Towards Semantic-aware Co-Speech Gesture Generation" ☆85 · Updated last year
- Towards Variable and Coordinated Holistic Co-Speech Motion Generation, CVPR 2024 ☆57 · Updated last year
- This is the repository for EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation ☆128 · Updated last month
- Preprocessing Scripts for Talking Face Generation ☆90 · Updated 6 months ago
- [CVPR 2024] AMUSE: Emotional Speech-driven 3D Body Animation via Disentangled Latent Diffusion ☆127 · Updated 11 months ago
- The dataset and code for "Flow-guided One-shot Talking Face Generation with a High-resolution Audio-visual Dataset" ☆102 · Updated last year
- This is the official repository for TalkSHOW: Generating Holistic 3D Human Motion from Speech [CVPR 2023] ☆346 · Updated last year
- Official implementation of the CVPR 2024 paper "FSRT: Facial Scene Representation Transformer for Face Reenactment from Factorized Appear… ☆113 · Updated 10 months ago
- [WACV 2024] "CVTHead: One-shot Controllable Head Avatar with Vertex-feature Transformer" ☆124 · Updated last year
- Official code release of "DEEPTalk: Dynamic Emotion Embedding for Probabilistic Speech-Driven 3D Face Animation" [AAAI 2025] ☆48 · Updated 5 months ago
- Evaluation code for "Efficient Emotional Adaptation for Audio-Driven Talking-Head Generation" ☆17 · Updated last year