Andreas-UI / ME-GraphAU-Video
ME-GraphAU on Video
☆11 · Updated last year
Alternatives and similar repositories for ME-GraphAU-Video
Users interested in ME-GraphAU-Video are comparing it to the libraries listed below.
- Official pytorch implementation for Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion (CVPR 2022) ☆124 · Updated last year
- [ICCV 2021] The official repo for the paper "Speech Drives Templates: Co-Speech Gesture Synthesis with Learned Templates". ☆97 · Updated 2 years ago
- CVPR 2022: Cross-Modal Perceptionist: Can Face Geometry be Gleaned from Voices? ☆129 · Updated 11 months ago
- ☆104 · Updated 2 years ago
- Official Pytorch Implementation of SPECTRE: Visual Speech-Aware Perceptual 3D Facial Expression Reconstruction from Videos ☆288 · Updated 8 months ago
- MEAD: A Large-scale Audio-visual Dataset for Emotional Talking-face Generation [ECCV2020] ☆285 · Updated last year
- [CVPR 2022] Code for "Learning Hierarchical Cross-Modal Association for Co-Speech Gesture Generation" ☆143 · Updated 2 years ago
- the dataset and code for "Flow-guided One-shot Talking Face Generation with a High-resolution Audio-visual Dataset" ☆105 · Updated last year
- EmoStyle project page ☆46 · Updated 3 months ago
- ☆173 · Updated last year
- Code for "Audio-Driven Co-Speech Gesture Video Generation" (NeurIPS 2022, Spotlight Presentation). ☆87 · Updated 3 years ago
- DiffSpeaker: Speech-Driven 3D Facial Animation with Diffusion Transformer ☆164 · Updated last year
- ☆21 · Updated last year
- 4D Facial Expression Diffusion Model ☆72 · Updated last year
- The official pytorch code for TalkingStyle: Personalized Speech-Driven Facial Animation with Style Preservation ☆28 · Updated last year
- Official Pytorch Implementation of SMIRK: 3D Facial Expressions through Analysis-by-Neural-Synthesis (CVPR 2024) ☆336 · Updated last year
- Code for paper 'Audio-Driven Emotional Video Portraits'. ☆312 · Updated 3 years ago
- AgentAvatar: Disentangling Planning, Driving and Rendering for Photorealistic Avatar Agents ☆11 · Updated 2 years ago
- Towards robust facial action units detection ☆24 · Updated last year
- This dataset contains 3D reconstructions of the MEAD dataset. ☆18 · Updated 2 years ago
- Official code release of "DEEPTalk: Dynamic Emotion Embedding for Probabilistic Speech-Driven 3D Face Animation" [AAAI2025] ☆59 · Updated 9 months ago
- This is the official inference code of PD-FGC ☆97 · Updated 2 years ago
- [CVPR'2023] Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation ☆257 · Updated 2 years ago
- A warping based image translation model focusing on upper body synthesis. ☆36 · Updated 3 years ago
- Code for paper 'EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model' ☆199 · Updated 2 years ago
- A novel approach for personalized speech-driven 3D facial animation ☆55 · Updated last year
- This repository contains scripts to build Youtube Gesture Dataset. ☆129 · Updated 2 years ago
- PyTorch implementation of "Towards Accurate Facial Motion Retargeting with Identity-Consistent and Expression-Exclusive Constraints" (AAA… ☆98 · Updated 3 years ago
- [CVPR'24] DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven Holistic 3D Expression and Gesture Generation ☆191 · Updated last year
- [ICCV-2023] The official repo for the paper "LivelySpeaker: Towards Semantic-aware Co-Speech Gesture Generation". ☆86 · Updated last year