KAIST-AILab / SyncVSR
[Interspeech 2024] SyncVSR: Data-Efficient Visual Speech Recognition with End-to-End Crossmodal Audio Token Synchronization
☆54 · Updated 3 months ago
Alternatives and similar repositories for SyncVSR
Users who are interested in SyncVSR are comparing it to the repositories listed below.
- [Interspeech 2023] Intelligible Lip-to-Speech Synthesis with Speech Units ☆40 · Updated 8 months ago
- PyTorch implementation of "Lip to Speech Synthesis with Visual Context Attentional GAN" (NeurIPS 2021) ☆24 · Updated last year
- PyTorch implementation of "Lip to Speech Synthesis in the Wild with Multi-task Learning" (ICASSP 2023) ☆69 · Updated last year
- [INTERSPEECH 2022] This dataset is designed for multi-modal speaker diarization and lip-speech synchronization in the wild. ☆52 · Updated last year
- ☆55 · Updated 2 years ago
- Official implementation of RAVEn (ICLR 2023) and BRAVEn (ICASSP 2024) ☆66 · Updated 4 months ago
- Official implementation of USR (NeurIPS 2024) ☆30 · Updated 6 months ago
- [CVPR 2024] AV2AV: Direct Audio-Visual Speech to Audio-Visual Speech Translation with Unified Audio-Visual Speech Representation ☆37 · Updated 9 months ago
- A pipeline to read lips and generate speech for the read content, i.e. Lip to Speech Synthesis ☆85 · Updated 3 years ago
- Whisper-Flamingo [Interspeech 2024] and mWhisper-Flamingo [IEEE SPL 2025] for Audio-Visual Speech Recognition and Translation ☆169 · Updated last month
- Zero-Shot Emotion Style Transfer ☆47 · Updated 2 months ago
- ViSpeR: Multilingual Audio-Visual Speech Recognition ☆39 · Updated 2 months ago
- Official repository for the paper "Multimodal Transformer Distillation for Audio-Visual Synchronization" (ICASSP 2024) ☆24 · Updated last year
- Official PyTorch implementation for "MMS-LLaMA: Efficient LLM-based Audio-Visual Speech Recognition with Minimal Multimodal Speech Tokens… ☆31 · Updated 2 weeks ago
- BLSP-Emo: Towards Empathetic Large Speech-Language Models ☆46 · Updated last year
- PyTorch implementation for "V2C: Visual Voice Cloning" ☆32 · Updated 2 years ago
- ☆33 · Updated 2 months ago
- ☆45 · Updated 2 years ago
- A PyTorch implementation of Lip2Wav ☆50 · Updated 2 years ago
- Official source code for the paper "EmoDubber: Towards High Quality and Emotion Controllable Movie Dubbing" ☆18 · Updated 3 weeks ago
- [ACL 2024] PyTorch code for the paper "StyleDubber: Towards Multi-Scale Style Learning for Movie Dubbing" ☆84 · Updated 7 months ago
- [INTERSPEECH 2023] Target Active Speaker Detection with Audio-visual Cues ☆52 · Updated 2 years ago
- ☆66 · Updated 9 months ago
- Official implementation of Fast-HuBERT: An Efficient Training Framework for Self-Supervised Speech Representation Learning ☆93 · Updated 7 months ago
- [CVPR 2023] Official code for the paper "Learning to Dub Movies via Hierarchical Prosody Models" ☆106 · Updated last year
- Speaker-label information for the LRW dataset, the outcome of the paper "Speaker-adaptive Lip Reading with User-dependent Paddi… ☆10 · Updated last year
- [TAFFC 2025] The official implementation of EmoSphere++: Emotion-Controllable Zero-Shot Text-to-Speech via Emotion-Adaptive Spherical Vec… ☆94 · Updated 2 months ago
- ☆23 · Updated last year
- X-E-Speech: Joint Training Framework of Non-Autoregressive Cross-lingual Emotional Text-to-Speech and Voice Conversion ☆92 · Updated last year
- Official release of the StyleTalk dataset ☆66 · Updated 11 months ago