artem179 / WLAS
A PyTorch implementation of the 'Watch, Listen, Attend and Spell' (WLAS) network, which learns to transcribe videos of mouth motion into characters.
☆11 Updated 7 years ago
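The repository's own code is not shown here, so the following is only a minimal sketch of a WLAS-style model as described above: a "Watch" encoder over video features, a "Listen" encoder over audio features, and an "Attend and Spell" character decoder that attends over both streams. Module names, feature dimensions, and the simple dot-product attention are illustrative assumptions, not the project's actual implementation.

```python
# Hypothetical WLAS-style sketch (not the repository's code): video encoder,
# audio encoder, and a character decoder with attention over both streams.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DotAttention(nn.Module):
    """Dot-product attention of the decoder state over encoder outputs."""
    def forward(self, query, keys):
        # query: (B, H), keys: (B, T, H)
        scores = torch.bmm(keys, query.unsqueeze(2)).squeeze(2)      # (B, T)
        weights = F.softmax(scores, dim=1)
        return torch.bmm(weights.unsqueeze(1), keys).squeeze(1)      # (B, H)


class WLAS(nn.Module):
    def __init__(self, video_dim=512, audio_dim=40, hidden=256, vocab=40):
        super().__init__()
        # "Watch": encodes per-frame visual features of the mouth region.
        self.watch = nn.LSTM(video_dim, hidden, batch_first=True)
        # "Listen": encodes audio frames (e.g. filterbank features).
        self.listen = nn.LSTM(audio_dim, hidden, batch_first=True)
        # "Attend and Spell": character decoder attending over both encoders.
        self.embed = nn.Embedding(vocab, hidden)
        self.decoder = nn.LSTMCell(3 * hidden, hidden)
        self.attn_video = DotAttention()
        self.attn_audio = DotAttention()
        self.out = nn.Linear(hidden, vocab)
        self.hidden = hidden

    def forward(self, video, audio, chars):
        # video: (B, Tv, video_dim), audio: (B, Ta, audio_dim), chars: (B, L)
        v_out, _ = self.watch(video)
        a_out, _ = self.listen(audio)
        B, L = chars.shape
        h = video.new_zeros(B, self.hidden)
        c = torch.zeros_like(h)
        logits = []
        for t in range(L):  # teacher forcing over the target characters
            ctx_v = self.attn_video(h, v_out)
            ctx_a = self.attn_audio(h, a_out)
            x = torch.cat([self.embed(chars[:, t]), ctx_v, ctx_a], dim=1)
            h, c = self.decoder(x, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)  # (B, L, vocab)


# Usage: train with cross-entropy against the next character.
model = WLAS()
video = torch.randn(2, 75, 512)    # e.g. 75 frames of visual CNN features
audio = torch.randn(2, 300, 40)    # e.g. 300 filterbank frames
chars = torch.randint(0, 40, (2, 20))
print(model(video, audio, chars).shape)  # torch.Size([2, 20, 40])
```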
Alternatives and similar repositories for WLAS
Users interested in WLAS are comparing it to the repositories listed below.
- End-to-End Multiview Lip Reading ☆10 Updated 7 years ago
- Audio-Visual Speech Recognition using Deep Learning ☆60 Updated 6 years ago
- Results for VoiceGAN voice transformation; audio samples are in the folders A-AB-ABA/B-BA-BAB ☆50 Updated 6 years ago
- Google Summer of Code 2017 Project: Development of Speech Recognition Module for Red Hen Lab ☆46 Updated 7 years ago
- An attention-based open-source end-to-end speech synthesis framework: no CNN, no RNN, no MFCC ☆85 Updated 4 years ago
- Python toolkit for Visual Speech Recognition ☆37 Updated 4 years ago
- Mapping features using Deep Neural Networks (DNNs) with application to Voice Conversion (VC). The implementations are on top of Theano Py… ☆33 Updated 6 years ago
- A PyTorch implementation of Tacotron2, an end-to-end text-to-speech (TTS) system described in "Natural TTS Synthesis By Conditioning Waven… ☆52 Updated 6 years ago
- Code and instructions for replicating the experiments in the paper "Unified Hypersphere Embedding for Speaker Recognition" ☆31 Updated 5 years ago
- PyTorch implementation of lyre.ai's char2wav model ☆32 Updated 8 years ago
- Time-Delay NN implemented in PyTorch ☆81 Updated 8 years ago
- Cross-lingual Voice Conversion ☆97 Updated 7 years ago
- A program for automatic speaker identification using deep learning techniques ☆84 Updated 8 years ago
- Dialect identification using a Siamese network ☆15 Updated 7 years ago
- Multiobjective Optimization Training of PLDA for Speaker Verification