GalaxyCong / EmoDubber
Official source code for the paper "EmoDubber: Towards High Quality and Emotion Controllable Movie Dubbing".
☆25 · Updated 2 months ago
Alternatives and similar repositories for EmoDubber
Users interested in EmoDubber are comparing it to the repositories listed below.
- [CVPR 2024] AV2AV: Direct Audio-Visual Speech to Audio-Visual Speech Translation with Unified Audio-Visual Speech Representation ☆37 · Updated 11 months ago
- [CVPR 2023] Official code for the paper "Learning to Dub Movies via Hierarchical Prosody Models" ☆108 · Updated last year
- PyTorch implementation of "V2C: Visual Voice Cloning" ☆32 · Updated 2 years ago
- [ACL 2024] PyTorch code for the paper "StyleDubber: Towards Multi-Scale Style Learning for Movie Dubbing" ☆89 · Updated 8 months ago
- [Interspeech 2023] Intelligible Lip-to-Speech Synthesis with Speech Units ☆41 · Updated 9 months ago
- Official implementation of "V-AURA: Temporally Aligned Audio for Video with Autoregression" (ICASSP 2025) ☆28 · Updated 7 months ago
- Source code for "Synchformer: Efficient Synchronization from Sparse Cues" (ICASSP 2024) ☆79 · Updated 6 months ago
- Official code for the CVPR'24 paper Diff-BGM ☆67 · Updated 10 months ago
- [AAAI 2024] V2A-Mapper: A Lightweight Solution for Vision-to-Audio Generation by Connecting Foundation Models ☆25 · Updated last year
- ☆37 · Updated 4 months ago
- Official PyTorch implementation of "Conditional Generation of Audio from Video via Foley Analogies" ☆88 · Updated last year
- Implementation of "Frieren: Efficient Video-to-Audio Generation Network with Rectified Flow Matching" (NeurIPS'24) ☆48 · Updated 4 months ago
- Official PyTorch implementation of AudioToken: Adaptation of Text-Conditioned Diffusion Models for Audio-to-Image … ☆85 · Updated last year
- Source code for "Sparse in Space and Time: Audio-visual Synchronisation with Trainable Selectors" (BMVC 2022 Spotlight) ☆51 · Updated last year
- Diff-Foley: Synchronized Video-to-Audio Synthesis with Latent Diffusion Models ☆192 · Updated last year
- Action2Sound: Ambient-Aware Generation of Action Sounds from Egocentric Videos ☆22 · Updated 10 months ago
- Make-An-Audio-3: Transforming Text/Video into Audio via Flow-based Large Diffusion Transformers ☆105 · Updated 2 months ago
- ☆57 · Updated 2 years ago
- [ICCV 2023] Video Background Music Generation: Dataset, Method and Evaluation ☆76 · Updated last year
- ☆60 · Updated last month
- Official repo for "Both Ears Wide Open: Towards Language-Driven Spatial Audio Generation" ☆43 · Updated last month
- ☆55 · Updated 10 months ago
- [AAAI 2025] VQTalker: Towards Multilingual Talking Avatars through Facial Motion Tokenization ☆50 · Updated 7 months ago
- Official repository of the IJCAI 2024 paper "BATON: Aligning Text-to-Audio Model with Human Preference Feedback" ☆29 · Updated 5 months ago
- ☆18 · Updated 2 years ago
- Ego4DSounds: A diverse egocentric dataset with high action-audio correspondence ☆18 · Updated last year
- Project page for "Improving Few-shot Learning for Talking Face System with TTS Data Augmentation" (ICASSP 2023) ☆86 · Updated last year
- Official repository for the paper "Multimodal Transformer Distillation for Audio-Visual Synchronization" (ICASSP 2024) ☆25 · Updated last year
- ☆13 · Updated 5 months ago
- ☆87 · Updated 2 months ago