SpringHuo / MAVDLinks
MAVD is a Mandarin Audio-Visual dataset with Depth information. It contains a rich variety of modalities, including audio, RGB images, and depth images.
☆18 · Updated last year
Alternatives and similar repositories for MAVD
Users interested in MAVD are comparing it to the libraries listed below.
- ☆13 · Updated 4 months ago
- A novel approach for personalized speech-driven 3D facial animation ☆51 · Updated last year
- ☆23 · Updated 2 years ago
- Project page for "Improving Few-shot Learning for Talking Face System with TTS Data Augmentation" (ICASSP 2023) ☆86 · Updated last year
- Talking Head from Speech Audio using a Pre-trained Image Generator ☆23 · Updated last year
- [AAAI 2025] VQTalker: Towards Multilingual Talking Avatars through Facial Motion Tokenization ☆50 · Updated 7 months ago
- Official repository for the paper "VocaLiST: An Audio-Visual Synchronisation Model for Lips and Voices" ☆67 · Updated last year
- Official repository for the paper "Multimodal Transformer Distillation for Audio-Visual Synchronization" (ICASSP 2024) ☆25 · Updated last year
- Source code for "Expressive Speech-driven Facial Animation with Controllable Emotions" ☆38 · Updated last year
- ☆101 · Updated last year
- ☆27 · Updated 2 weeks ago
- Official code release of "DEEPTalk: Dynamic Emotion Embedding for Probabilistic Speech-Driven 3D Face Animation" [AAAI 2025] ☆47 · Updated 5 months ago
- PyTorch implementation of "Lip to Speech Synthesis in the Wild with Multi-task Learning" (ICASSP 2023) ☆69 · Updated last year
- [ICCV 2023] Speech2Lip: High-fidelity Speech to Lip Generation by Learning from a Short Video ☆73 · Updated last year
- Repository for "EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation" ☆127 · Updated 3 weeks ago
- [AAAI 2024] Style2Talker - Official PyTorch implementation ☆43 · Updated last year
- ☆100 · Updated 2 years ago
- [ICIAP 2023] Learning Landmarks Motion from Speech for Speaker-Agnostic 3D Talking Heads Generation ☆62 · Updated last year
- Official inference code for PD-FGC ☆89 · Updated last year
- ☆43 · Updated last week
- Official source for the ACM MM 2023 paper "SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend 3D Talking …" ☆137 · Updated last year
- Unofficial LivePortrait training script [🚧 Under Construction] ☆33 · Updated 5 months ago
- Dataset and code for "Flow-guided One-shot Talking Face Generation with a High-resolution Audio-visual Dataset" ☆101 · Updated last year
- [INTERSPEECH 2024] Official repository for "Enhancing Speech-Driven 3D Facial Animation with Audio-Visual Guidance from Lip Reading Expert" ☆16 · Updated 3 weeks ago
- [ICASSP 2025] DEGSTalk: Decomposed Per-Embedding Gaussian Fields for Hair-Preserving Talking Face Synthesis ☆48 · Updated 6 months ago
- Data and PyTorch implementation of the IEEE TMM paper "EmotionGesture: Audio-Driven Diverse Emotional Co-Speech 3D Gesture Generation" ☆26 · Updated last year
- Preprocessing scripts for talking face generation ☆90 · Updated 5 months ago
- Official project repo for the paper "Speech Driven Video Editing via an Audio-Conditioned Diffusion Model" ☆229 · Updated 2 years ago
- DiffSpeaker: Speech-Driven 3D Facial Animation with Diffusion Transformer ☆161 · Updated last year
- ☆73 · Updated 2 years ago