haonanhe / MEAD-3D
This dataset contains 3D reconstructions of the MEAD dataset.
☆14 · Updated last year
Alternatives and similar repositories for MEAD-3D:
Users interested in MEAD-3D are comparing it to the repositories listed below.
- ☆50 · Updated last year
- A novel approach for personalized speech-driven 3D facial animation ☆45 · Updated 9 months ago
- Official implementation of SingingHead: A Large-scale 4D Dataset for Singing Head Animation. ☆55 · Updated 4 months ago
- ☆42 · Updated last year
- Data and PyTorch implementation of IEEE TMM "EmotionGesture: Audio-Driven Diverse Emotional Co-Speech 3D Gesture Generation" ☆23 · Updated 10 months ago
- [INTERSPEECH'24] Official repository for "MultiTalk: Enhancing 3D Talking Head Generation Across Languages with Multilingual Video Datase… ☆86 · Updated 3 months ago
- This is the repository for EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation ☆123 · Updated last year
- Official code release of "DEEPTalk: Dynamic Emotion Embedding for Probabilistic Speech-Driven 3D Face Animation" [AAAI 2025] ☆31 · Updated this week
- ☆74 · Updated last year
- This is the official inference code of PD-FGC ☆84 · Updated last year
- This is the official source for our ACM MM 2023 paper "SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend 3D Talking … ☆136 · Updated last year
- [CVPR 2022] Code for "Learning Hierarchical Cross-Modal Association for Co-Speech Gesture Generation" ☆134 · Updated last year
- This is the official repository for DiffSpeaker: Speech-Driven 3D Facial Animation with Diffusion Transformer ☆154 · Updated 10 months ago
- ☆18 · Updated last month
- Source code for "Expressive Speech-driven Facial Animation with Controllable Emotions" ☆35 · Updated last year
- ☆35 · Updated last year
- UnifiedGesture: A Unified Gesture Synthesis Model for Multiple Skeletons (ACM MM 2023 Oral) ☆52 · Updated last year
- The official PyTorch code for TalkingStyle: Personalized Speech-Driven Facial Animation with Style Preservation ☆17 · Updated 7 months ago
- ☆98 · Updated last year
- 4D Facial Expression Diffusion Model ☆68 · Updated last year
- Code for the CVPR 2024 paper "ConvoFusion: Multi-Modal Conversational Diffusion for Co-Speech Gesture Synthesis" ☆27 · Updated last month
- QPGesture: Quantization-Based and Phase-Guided Motion Matching for Natural Speech-Driven Gesture Generation (CVPR 2023 Highlight) ☆84 · Updated last year
- Official PyTorch implementation of "Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion" (CVPR 2022) ☆110 · Updated 5 months ago
- ☆98 · Updated last year
- FLAME head tracker for single-image or multi-view image reconstruction and video-based tracking. ☆47 · Updated last month
- ☆32 · Updated 7 months ago
- ☆73 · Updated last week
- ☆25 · Updated last year
- This is the official implementation for the IVA'20 Best Paper Award paper "Let's Face It: Probabilistic Multi-modal Interlocutor-aware Gener… ☆16 · Updated 2 years ago