GalaxyCong / EmoDubber
Official source code for the paper "EmoDubber: Towards High Quality and Emotion Controllable Movie Dubbing".
☆18 · Updated 3 weeks ago
Alternatives and similar repositories for EmoDubber
Users who are interested in EmoDubber are comparing it to the repositories listed below.
- [CVPR 2024] AV2AV: Direct Audio-Visual Speech to Audio-Visual Speech Translation with Unified Audio-Visual Speech Representation · ☆37 · Updated 9 months ago
- [Interspeech 2023] Intelligible Lip-to-Speech Synthesis with Speech Units · ☆40 · Updated 8 months ago
- ☆37 · Updated 2 months ago
- ☆55 · Updated 2 years ago
- Emotion Rendering for Conversational Speech Synthesis with Heterogeneous Graph-Based Context Modeling (Accepted by AAAI 2024) · ☆57 · Updated last year
- [ACL 2024] PyTorch code for the paper "StyleDubber: Towards Multi-Scale Style Learning for Movie Dubbing" · ☆85 · Updated 7 months ago
- Implementation of Frieren: Efficient Video-to-Audio Generation Network with Rectified Flow Matching (NeurIPS 2024) · ☆43 · Updated 2 months ago
- Project page for "Improving Few-shot Learning for Talking Face System with TTS Data Augmentation" (ICASSP 2023) · ☆86 · Updated last year
- Official repository for the paper "Multimodal Transformer Distillation for Audio-Visual Synchronization" (ICASSP 2024) · ☆24 · Updated last year
- Official repo for "Both Ears Wide Open: Towards Language-Driven Spatial Audio Generation" · ☆40 · Updated last month
- [AAAI 2025] VQTalker: Towards Multilingual Talking Avatars through Facial Motion Tokenization · ☆49 · Updated 6 months ago
- [CVPR 2023] Official code for the paper "Learning to Dub Movies via Hierarchical Prosody Models" · ☆106 · Updated last year
- Source code for "Synchformer: Efficient Synchronization from Sparse Cues" (ICASSP 2024) · ☆64 · Updated 4 months ago
- PyTorch implementation for "V2C: Visual Voice Cloning" · ☆32 · Updated 2 years ago
- Official implementation of "V-AURA: Temporally Aligned Audio for Video with Autoregression" (ICASSP 2025) · ☆27 · Updated 5 months ago
- [AAAI 2024] V2A-Mapper: A Lightweight Solution for Vision-to-Audio Generation by Connecting Foundation Models · ☆25 · Updated last year
- ☆54 · Updated 8 months ago
- UMETTS: A Unified Framework for Emotional Text-to-Speech Synthesis with Multimodal Prompts · ☆31 · Updated 2 weeks ago
- Kling-Foley: Multimodal Diffusion Transformer for High-Quality Video-to-Audio Generation · ☆29 · Updated this week
- ☆42 · Updated 5 months ago
- Official repository of the SpeechCraft dataset, a large-scale expressive bilingual speech dataset with natural language descriptions · ☆154 · Updated 2 months ago
- Implementation of Multi-Source Music Generation with Latent Diffusion · ☆24 · Updated 9 months ago
- VoxInstruct: Expressive Human Instruction-to-Speech Generation with Unified Multilingual Codec Language Modelling · ☆82 · Updated 7 months ago
- [Official Implementation] Acoustic Autoregressive Modeling 🔥 · ☆70 · Updated 10 months ago
- Official code for the CVPR 2024 paper "Diff-BGM" · ☆64 · Updated 8 months ago
- Generative Expressive Conversational Speech Synthesis (Accepted by MM 2024) · ☆59 · Updated 7 months ago
- Official repository of the IJCAI 2024 paper "BATON: Aligning Text-to-Audio Model with Human Preference Feedback" · ☆28 · Updated 3 months ago
- Make-An-Audio-3: Transforming Text/Video into Audio via Flow-based Large Diffusion Transformers · ☆100 · Updated last month
- Official code for "EmoVoice: LLM-based Emotional Text-To-Speech Model with Freestyle Text Prompting" · ☆46 · Updated last month
- ☆61 · Updated 2 weeks ago