GalaxyCong / EmoDubber
Official source code for the paper: EmoDubber: Towards High Quality and Emotion Controllable Movie Dubbing.
☆32 · Updated 6 months ago
Alternatives and similar repositories for EmoDubber
Users interested in EmoDubber are comparing it to the repositories listed below.
- [CVPR 2023] Official code for the paper: Learning to Dub Movies via Hierarchical Prosody Models. ☆110 · Updated last year
- [CVPR 2024] AV2AV: Direct Audio-Visual Speech to Audio-Visual Speech Translation with Unified Audio-Visual Speech Representation ☆43 · Updated last year
- [ACL 2024] PyTorch code for the paper "StyleDubber: Towards Multi-Scale Style Learning for Movie Dubbing" ☆94 · Updated last year
- PyTorch implementation of “V2C: Visual Voice Cloning” ☆32 · Updated 2 years ago
- ☆41 · Updated 8 months ago
- [Interspeech 2023] Intelligible Lip-to-Speech Synthesis with Speech Units ☆47 · Updated last year
- [AAAI 2024] V2A-Mapper: A Lightweight Solution for Vision-to-Audio Generation by Connecting Foundation Models ☆26 · Updated 2 years ago
- The official implementation of V-AURA: Temporally Aligned Audio for Video with Autoregression (ICASSP 2025) (Oral) ☆31 · Updated 11 months ago
- ☆62 · Updated 5 months ago
- ☆44 · Updated 8 months ago
- Official code for the CVPR'24 paper Diff-BGM ☆72 · Updated last year
- Official Repository of IJCAI 2024 Paper: "BATON: Aligning Text-to-Audio Model with Human Preference Feedback" ☆31 · Updated 9 months ago
- Make-An-Audio-3: Transforming Text/Video into Audio via Flow-based Large Diffusion Transformers ☆113 · Updated 6 months ago
- Official PyTorch implementation of "Conditional Generation of Audio from Video via Foley Analogies". ☆91 · Updated 2 years ago
- ☆48 · Updated 3 years ago
- Audio-Visual Corruption Modeling of our paper "Watch or Listen: Robust Audio-Visual Speech Recognition with Visual Corruption Modeling an… ☆35 · Updated 2 years ago
- INTERSPEECH 2023: Target Active Speaker Detection with Audio-visual Cues ☆55 · Updated 2 years ago
- Source code for "Synchformer: Efficient Synchronization from Sparse Cues" (ICASSP 2024) ☆97 · Updated 2 months ago
- ☆59 · Updated 2 years ago
- ☆23 · Updated last year
- Official code for "EmoVoice: LLM-based Emotional Text-To-Speech Model with Freestyle Text Prompting" ☆101 · Updated last month
- [ASRU 2025] Omni-R1: Do You Really Need Audio to Fine-Tune Your Audio LLM? ☆36 · Updated 3 weeks ago
- Implementation of Frieren: Efficient Video-to-Audio Generation Network with Rectified Flow Matching (NeurIPS'24) ☆55 · Updated 8 months ago
- LUCY: Linguistic Understanding and Control Yielding Early Stage of Her ☆56 · Updated 8 months ago
- ☆24 · Updated last year
- A Multi-Task Evaluation Benchmark for Audio-Visual Representation Models (ICASSP 2024) ☆58 · Updated last year
- Emotion Rendering for Conversational Speech Synthesis with Heterogeneous Graph-Based Context Modeling (Accepted by AAAI'2024) ☆59 · Updated last year
- [NeurIPS 2024] Code, Dataset, Samples for the VATT paper “Tell What You Hear From What You See - Video to Audio Generation Through Text” ☆34 · Updated 4 months ago
- Ego4DSounds: A diverse egocentric dataset with high action-audio correspondence ☆18 · Updated last year
- ☆61 · Updated 5 months ago