GalaxyCong / EmoDubber
Official source code for the paper: EmoDubber: Towards High Quality and Emotion Controllable Movie Dubbing.
☆17 · Updated this week
Alternatives and similar repositories for EmoDubber
Users interested in EmoDubber are comparing it to the libraries listed below.
- The official implementation of V-AURA: Temporally Aligned Audio for Video with Autoregression (ICASSP 2025) ☆27 · Updated 5 months ago
- [AAAI 2024] V2A-Mapper: A Lightweight Solution for Vision-to-Audio Generation by Connecting Foundation Models ☆25 · Updated last year
- [ACL 2024] This is the PyTorch code for our paper "StyleDubber: Towards Multi-Scale Style Learning for Movie Dubbing" ☆83 · Updated 6 months ago
- Implementation of Frieren: Efficient Video-to-Audio Generation Network with Rectified Flow Matching (NeurIPS'24) ☆40 · Updated 2 months ago
- [CVPR 2024] AV2AV: Direct Audio-Visual Speech to Audio-Visual Speech Translation with Unified Audio-Visual Speech Representation ☆36 · Updated 9 months ago
- ☆36 · Updated 2 months ago
- [Interspeech 2023] Intelligible Lip-to-Speech Synthesis with Speech Units ☆39 · Updated 7 months ago
- Emotion Rendering for Conversational Speech Synthesis with Heterogeneous Graph-Based Context Modeling (Accepted by AAAI'2024) ☆56 · Updated 11 months ago
- The official repo for Both Ears Wide Open: Towards Language-Driven Spatial Audio Generation ☆38 · Updated 3 weeks ago
- PyTorch implementation for "V2C: Visual Voice Cloning" ☆32 · Updated 2 years ago
- [AAAI 2025] VQTalker: Towards Multilingual Talking Avatars through Facial Motion Tokenization ☆49 · Updated 5 months ago
- [CVPR 2023] Official code for the paper: Learning to Dub Movies via Hierarchical Prosody Models ☆106 · Updated 11 months ago
- Source code for "Synchformer: Efficient Synchronization from Sparse Cues" (ICASSP 2024) ☆62 · Updated 4 months ago
- ☆55 · Updated 2 years ago
- The official repository of the SpeechCraft dataset, a large-scale expressive bilingual speech dataset with natural language descriptions ☆133 · Updated last month
- ☆52 · Updated 7 months ago
- UMETTS: A Unified Framework for Emotional Text-to-Speech Synthesis with Multimodal Prompts ☆30 · Updated 5 months ago
- VoxInstruct: Expressive Human Instruction-to-Speech Generation with Unified Multilingual Codec Language Modelling ☆79 · Updated 6 months ago
- Official code for the CVPR'24 paper Diff-BGM ☆63 · Updated 7 months ago
- Official repository for the paper "Multimodal Transformer Distillation for Audio-Visual Synchronization" (ICASSP 2024) ☆24 · Updated last year
- Official repository of the IJCAI 2024 paper "BATON: Aligning Text-to-Audio Model with Human Preference Feedback" ☆27 · Updated 3 months ago
- Generative Expressive Conversational Speech Synthesis (Accepted by MM'2024) ☆71 · Updated 7 months ago
- MAVD is a Mandarin Audio-Visual dataset with Depth information, offering a rich variety of modal data, including audio, RGB ima… ☆17 · Updated last year
- Make-An-Audio-3: Transforming Text/Video into Audio via Flow-based Large Diffusion Transformers ☆98 · Updated 2 weeks ago
- Implementation of Multi-Source Music Generation with Latent Diffusion ☆24 · Updated 8 months ago
- Project page for "Improving Few-shot Learning for Talking Face System with TTS Data Augmentation" (ICASSP 2023) ☆86 · Updated last year
- LUCY: Linguistic Understanding and Control Yielding Early Stage of Her ☆41 · Updated last month
- Code for "Audio-Visual Target Speaker Extraction with Selective Auditory Attention" (TASLP) ☆19 · Updated 3 months ago
- Official PyTorch implementation of "Conditional Generation of Audio from Video via Foley Analogies" ☆86 · Updated last year