SpringHuo / MAVD
MAVD is a Mandarin Audio-Visual dataset with Depth information. It offers a rich variety of modalities, including audio, RGB images, and depth images.
☆20 · Updated last year
Alternatives and similar repositories for MAVD
Users interested in MAVD are comparing it to the repositories listed below.
- ☆24 · Updated 2 years ago
- [AAAI 2025] VQTalker: Towards Multilingual Talking Avatars through Facial Motion Tokenization ☆50 · Updated 8 months ago
- ☆14 · Updated 6 months ago
- A novel approach for personalized speech-driven 3D facial animation ☆52 · Updated last year
- Official repository for the paper VocaLiST: An Audio-Visual Synchronisation Model for Lips and Voices ☆67 · Updated last year
- Project page for "Improving Few-shot Learning for Talking Face System with TTS Data Augmentation" (ICASSP 2023) ☆86 · Updated last year
- Talking Head from Speech Audio using a Pre-trained Image Generator ☆23 · Updated last year
- [ICCV 2023] Speech2Lip: High-fidelity Speech to Lip Generation by Learning from a Short Video ☆73 · Updated last year
- Official repository for the paper Multimodal Transformer Distillation for Audio-Visual Synchronization (ICASSP 2024) ☆25 · Updated last year
- Official inference code for PD-FGC ☆94 · Updated last year
- Source code for "Expressive Speech-driven Facial Animation with Controllable Emotions" ☆39 · Updated last year
- Repository for EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation ☆128 · Updated 2 months ago
- ☆101 · Updated last year
- Official project repo for the paper "Speech Driven Video Editing via an Audio-Conditioned Diffusion Model" ☆230 · Updated 2 years ago
- ☆28 · Updated last month
- [AAAI 2024] Style2Talker - Official PyTorch Implementation ☆46 · Updated 3 weeks ago
- Official code release of "DEEPTalk: Dynamic Emotion Embedding for Probabilistic Speech-Driven 3D Face Animation" [AAAI 2025] ☆50 · Updated 6 months ago
- Official source for the ACM MM 2023 paper "SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend 3D Talking …" ☆137 · Updated last year
- Dataset and code for "Flow-guided One-shot Talking Face Generation with a High-resolution Audio-visual Dataset" ☆102 · Updated last year
- MEAD: A Large-scale Audio-visual Dataset for Emotional Talking-face Generation [ECCV 2020] ☆269 · Updated last year
- ☆103 · Updated 2 years ago
- Unofficial LivePortrait Training Script [🚧 Under Construction] ☆34 · Updated 7 months ago
- PyTorch implementation of "Lip to Speech Synthesis in the Wild with Multi-task Learning" (ICASSP 2023) ☆71 · Updated last year
- [ICASSP 2025] DEGSTalk: Decomposed Per-Embedding Gaussian Fields for Hair-Preserving Talking Face Synthesis ☆49 · Updated 7 months ago
- ☆73 · Updated 2 years ago
- Project of "Adaptive Affine Transformation: A Simple and Effective Operation for Spatial Misaligned Image Generation" ☆64 · Updated 2 years ago
- Code for the paper "EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model" ☆197 · Updated 2 years ago
- ☆46 · Updated last month
- [ICIAP 2023] Learning Landmarks Motion from Speech for Speaker-Agnostic 3D Talking Heads Generation ☆62 · Updated last year
- NeurIPS 2022 ☆38 · Updated 2 years ago