SpringHuo / MAVD
MAVD is the Mandarin Audio-Visual dataset with Depth information. It contains a rich variety of modalities, including audio, RGB images, and depth images.
☆17 · Updated 11 months ago
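As a rough sketch of how such paired multimodal samples might be consumed, the snippet below loads the audio track, RGB frames, and depth maps of a single utterance. The directory layout and file names (`audio.wav`, `frames/`, `depth/`) are assumptions made for illustration only, not MAVD's documented structure.

```python
# Minimal loading sketch, assuming a hypothetical per-sample layout:
#   sample_dir/audio.wav, sample_dir/frames/*.png, sample_dir/depth/*.png
# The real MAVD file organization may differ.
from pathlib import Path

import numpy as np
import soundfile as sf   # pip install soundfile
from PIL import Image    # pip install pillow


def load_sample(sample_dir: str):
    """Return (audio, sample_rate, rgb_frames, depth_frames) for one utterance."""
    root = Path(sample_dir)

    # Audio waveform and its sampling rate.
    audio, sr = sf.read(root / "audio.wav")

    # Time-ordered RGB frames.
    rgb = [np.asarray(Image.open(p)) for p in sorted((root / "frames").glob("*.png"))]

    # Depth maps assumed to be aligned one-to-one with the RGB frames.
    depth = [np.asarray(Image.open(p)) for p in sorted((root / "depth").glob("*.png"))]

    return audio, sr, rgb, depth
```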
Alternatives and similar repositories for MAVD:
Users who are interested in MAVD are comparing it to the repositories listed below
- Talking Head from Speech Audio using a Pre-trained Image Generator ☆23 · Updated 10 months ago
- ☆22 · Updated last year
- A novel approach for personalized speech-driven 3D facial animation ☆47 · Updated 11 months ago
- Official repository for the paper VocaLiST: An Audio-Visual Synchronisation Model for Lips and Voices ☆64 · Updated 11 months ago
- Project page for "Improving Few-shot Learning for Talking Face System with TTS Data Augmentation" (ICASSP 2023) ☆85 · Updated last year
- Official repository for the paper Multimodal Transformer Distillation for Audio-Visual Synchronization (ICASSP 2024) ☆24 · Updated 11 months ago
- Official inference code of PD-FGC ☆84 · Updated last year
- [AAAI 2025] VQTalker: Towards Multilingual Talking Avatars through Facial Motion Tokenization ☆47 · Updated 3 months ago
- [INTERSPEECH'24] Official repository for "MultiTalk: Enhancing 3D Talking Head Generation Across Languages with Multilingual Video Datase… ☆92 · Updated 4 months ago
- ☆23 · Updated 2 weeks ago
- Official source code for the paper "EmoDubber: Towards High Quality and Emotion Controllable Movie Dubbing" ☆11 · Updated 2 months ago
- [AAAI 2024] stle2talker - Official PyTorch Implementation ☆38 · Updated last year
- NeurIPS 2022 ☆38 · Updated 2 years ago
- Source code for "Expressive Speech-driven Facial Animation with controllable emotions" ☆37 · Updated last year
- [ICCV 2023] Speech2Lip: High-fidelity Speech to Lip Generation by Learning from a Short Video ☆67 · Updated last year
- ☆74 · Updated last year
- Unofficial LivePortrait Training Script [🚧 Under Construction] ☆27 · Updated 2 months ago
- Data and PyTorch implementation of IEEE TMM "EmotionGesture: Audio-Driven Diverse Emotional Co-Speech 3D Gesture Generation" ☆24 · Updated last year
- ☆97 · Updated last year
- ☆97 · Updated 8 months ago
- Evaluation code for "Efficient Emotional Adaptation for Audio-Driven Talking-Head Generation" ☆16 · Updated last year
- Code for "SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend 3D Talking Faces" (ACM MM 2023) ☆30 · Updated last year
- ☆16 · Updated 6 months ago
- ☆100 · Updated last year
- [ICIAP 2023] Learning Landmarks Motion from Speech for Speaker-Agnostic 3D Talking Heads Generation ☆62 · Updated last year
- Official implementation of SingingHead: A Large-scale 4D Dataset for Singing Head Animation ☆56 · Updated 5 months ago
- The dataset and code for "Flow-guided One-shot Talking Face Generation with a High-resolution Audio-visual Dataset" ☆96 · Updated 10 months ago
- SyncTalkFace: Talking Face Generation for Precise Lip-syncing via Audio-Lip Memory ☆33 · Updated 2 years ago
- [AAAI 2025] Official code release of "DEEPTalk: Dynamic Emotion Embedding for Probabilistic Speech-Driven 3D Face Animation" ☆35 · Updated last month
- [ICASSP'25] DEGSTalk: Decomposed Per-Embedding Gaussian Fields for Hair-Preserving Talking Face Synthesis ☆41 · Updated 2 months ago