mpc001 / Visual_Speech_Recognition_for_Multiple_Languages
Visual Speech Recognition for Multiple Languages
☆458 · Updated 2 years ago
Alternatives and similar repositories for Visual_Speech_Recognition_for_Multiple_Languages
Users who are interested in Visual_Speech_Recognition_for_Multiple_Languages are comparing it to the libraries listed below.
- ICASSP'22 Training Strategies for Improved Lip-Reading; ICASSP'21 Towards Practical Lipreading with Distilled and Efficient Models; ICASS… ☆431 · Updated 2 years ago
- A PyTorch implementation of the Deep Audio-Visual Speech Recognition paper. ☆239 · Updated last year
- ACM MM 2021: 'Is Someone Speaking? Exploring Long-term Temporal Features for Audio-visual Active Speaker Detection' ☆441 · Updated 2 years ago
- Auto-AVSR: Lip-Reading Sentences Project ☆401 · Updated last year
- A self-supervised learning framework for audio-visual speech ☆968 · Updated 2 years ago
- Out of time: automated lip sync in the wild ☆868 · Updated 2 years ago
- Official Implementation of Visual Transformer Pooling for Lip reading ☆39 · Updated 3 years ago
- The PyTorch Code and Model In "Learn an Effective Lip Reading Model without Pains" (https://arxiv.org/abs/2011.07557), which reaches the… ☆165 · Updated 4 months ago
- Audio-Visual Speech Separation with Cross-Modal Consistency ☆245 · Updated 2 years ago
- The repository for IEEE CVPR 2023 (A Light Weight Model for Active Speaker Detection) ☆165 · Updated 10 months ago
- A pipeline to read lips and generate speech for the read content, i.e. Lip to Speech Synthesis. ☆93 · Updated 6 months ago
- The state-of-the-art PyTorch implementation of the method described in the paper "LipNet: End-to-End Sentence-level Lipreading" (https://arxi… ☆235 · Updated 3 years ago
- Code and models for evaluating a state-of-the-art lip reading network ☆197 · Updated 2 years ago
- Crowd Sourced Emotional Multimodal Actors Dataset (CREMA-D) ☆496 · Updated 10 months ago
- [CVPR] MARLIN: Masked Autoencoder for facial video Representation LearnINg ☆261 · Updated 10 months ago
- [Interspeech 2024] SyncVSR: Data-Efficient Visual Speech Recognition with End-to-End Crossmodal Audio Token Synchronization ☆60 · Updated 10 months ago
- In defence of metric learning for speaker recognition ☆1,157 · Updated last year
- MEAD: A Large-scale Audio-visual Dataset for Emotional Talking-face Generation [ECCV2020] ☆287 · Updated last year
- Disentangled Speech Embeddings using Cross-Modal Self-Supervision ☆166 · Updated 5 years ago
- [ACL 2024] Official PyTorch code for extracting features and training downstream models with emotion2vec: Self-Supervised Pre-Training fo… ☆1,042 · Updated last year
- Phoneme Recognition using pre-trained models Wav2vec2, HuBERT and WavLM. Throughout this project, we compared specifically three differen… ☆257 · Updated 3 years ago
- [WACV 2023] Audio-Visual Efficient Conformer (AVEC) for Robust Speech Recognition ☆100 · Updated 2 years ago
- ☆427 · Updated 2 years ago
- A PyTorch implementation of Lip2Wav ☆51 · Updated 3 years ago
- This is the GitHub page for publicly available emotional speech data. ☆379 · Updated 4 years ago
- A collection of datasets for the purpose of emotion recognition/detection in speech. ☆397 · Updated last year
- Official code for the paper "Visual Speech Enhancement Without A Real Visual Stream" published at WACV 2021 ☆108 · Updated last year
- PyTorch Implementation for Paper "Emotionally Enhanced Talking Face Generation" (ICCVW'23 and ACM-MMW'23) ☆379 · Updated last year
- ☆48 · Updated 2 years ago
- PyTorch implementation of "Lip to Speech Synthesis in the Wild with Multi-task Learning" (ICASSP2023) ☆70 · Updated last year