joannahong / Lip2Wav-pytorch
A PyTorch implementation of Lip2Wav
☆51 · Updated 2 years ago
Alternatives and similar repositories for Lip2Wav-pytorch
Users interested in Lip2Wav-pytorch are comparing it to the repositories listed below.
- PyTorch implementation of "Lip to Speech Synthesis with Visual Context Attentional GAN" (NeurIPS2021)☆25Updated last year
- A pipeline to read lips and generate speech for the read content, i.e Lip to Speech Synthesis.☆86Updated 3 years ago
- Official PyTorch implementation of paper Leveraging Unimodal Self Supervised Learning for Multimodal Audio-Visual Speech Recognition (ACL…☆65Updated 3 years ago
- ☆56Updated 2 years ago
- ☆45Updated 2 years ago
- The speaker-labeled information of LRW dataset, which is the outcome of the paper "Speaker-adaptive Lip Reading with User-dependent Paddi…☆10Updated last year
- Tools for downloading VoxCeleb2 dataset☆30Updated last year
- Official implementation of RAVEn (ICLR 2023) and BRAVEn (ICASSP 2024)☆67Updated 4 months ago
- An unofficial (PyTorch) implementation for the paper Deep Lip Reading: A comparison of models and an online application.☆10Updated 5 years ago
- PyTorch implementation of "Lip to Speech Synthesis in the Wild with Multi-task Learning" (ICASSP2023)☆69Updated last year
- INTERSPEECH2023: Target Active Speaker Detection with Audio-visual Cues☆52Updated 2 years ago
- Look Who’s Talking: Active Speaker Detection in the Wild☆72Updated last year
- Pytorch implementation for “V2C: Visual Voice Cloning”☆32Updated 2 years ago
- ☆23Updated last year
- ☆17Updated 7 months ago
- This is the implementation of the paper "Emotion Intensity and its Control for Emotional Voice Conversion".☆92Updated 3 years ago
- [CVPR 2023] Official code for paper: Learning to Dub Movies via Hierarchical Prosody Models.☆106Updated last year
- CVC: Contrastive Learning for Non-parallel Voice Conversion (INTERSPEECH 2021, in PyTorch)☆57Updated 2 years ago
- [INTERSPEECH 2022] This dataset is designed for multi-modal speaker diarization and lip-speech synchronization in the wild.☆52Updated last year
- This is the code for controllable EVC framework for seen and unseen emotion generation.☆44Updated 3 years ago
- Audio-Visual Corruption Modeling of our paper "Watch or Listen: Robust Audio-Visual Speech Recognition with Visual Corruption Modeling an…☆34Updated 2 years ago
- [Interspeech 2023] Intelligible Lip-to-Speech Synthesis with Speech Units☆40Updated 8 months ago
- Official repository for the paper VocaLiST: An Audio-Visual Synchronisation Model for Lips and Voices☆67Updated last year
- Disentangled Speech Embeddings using Cross-Modal Self-Supervision☆160Updated 5 years ago
- Source code for "Sparse in Space and Time: Audio-visual Synchronisation with Trainable Selectors." (Spotlight at the BMVC 2022)☆51Updated last year
- [INTERSPEECH'2022] Accurate Emotion Strength Assessment for Seen and Unseen Speech Based on Data-Driven Deep Learning☆82Updated 2 years ago
- This is the official implementation of the paper AGAIN-VC: A One-shot Voice Conversion using Activation Guidance and Adaptive Instance No…☆115Updated 4 years ago
- Official implementation of SpeechSplit2☆133Updated 2 years ago
- Project page for "Improving Few-shot Learning for Talking Face System with TTS Data Augmentation" for ICASSP2023☆86Updated last year
- [CVPR 2024] AV2AV: Direct Audio-Visual Speech to Audio-Visual Speech Translation with Unified Audio-Visual Speech Representation☆37Updated 10 months ago