ajinkyaT / Lip_Reading_in_the_Wild_AVSR
Audio-Visual Speech Recognition using Deep Learning
☆60 · Updated 6 years ago
Alternatives and similar repositories for Lip_Reading_in_the_Wild_AVSR:
Users interested in Lip_Reading_in_the_Wild_AVSR are comparing it to the libraries listed below
- Audio-Visual Speech Recognition using Sequence to Sequence Models ☆82 · Updated 4 years ago
- Torch code for using Residual Networks with LSTMs for Lipreading ☆99 · Updated 6 years ago
- Lip Reading in the Wild using ResNet and LSTMs in PyTorch ☆59 · Updated 6 years ago
- Supporting code for "Emotion Recognition in Speech using Cross-Modal Transfer in the Wild" ☆101 · Updated 5 years ago
- Python toolkit for Visual Speech Recognition ☆37 · Updated 4 years ago
- PyTorch code for End-to-End Audiovisual Speech Recognition ☆175 · Updated 2 years ago
- My experiments in lip reading using deep learning with the LRW dataset ☆51 · Updated 4 years ago
- Processing and extraction of face and mouth image files from the TCDTIMIT database ☆45 · Updated 4 years ago
- Face Landmark-based Speaker-Independent Audio-Visual Speech Enhancement in Multi-Talker Environments ☆107 · Updated last year
- The method proposed in "LRW-1000: A Naturally-Distributed Large-Scale Benchmark for Lip Reading in the Wild" ☆25 · Updated 6 years ago
- DenseNet3D model from "LRW-1000: A Naturally-Distributed Large-Scale Benchmark for Lip Reading in the Wild", https://arxiv.org/abs/1810.069… ☆118 · Updated 4 years ago
- "LipNet: End-to-End Sentence-level Lipreading" in PyTorch ☆68 · Updated 5 years ago
- ☆64 · Updated 6 years ago
- Core functions and models for speech separation ☆155 · Updated 3 years ago
- Implementation of the paper "Improved End-to-End Speech Emotion Recognition Using Self Attention Mechanism and Multitask Learning" From I… ☆57 · Updated 4 years ago
- CN-Celeb, a large-scale Chinese celebrity dataset published by the Center for Speech and Language Technology (CSLT) at Tsinghua University ☆72 · Updated 5 years ago
- AVSpeech downloader ☆67 · Updated 6 years ago
- Keras version of SyncNet, by Joon Son Chung and Andrew Zisserman ☆51 · Updated 6 years ago
- Multimodal speech recognition using lip reading (with CNNs) and audio (with LSTMs); sensor fusion is done with an attention network ☆68 · Updated 2 years ago
- Adversarial Auto-encoders for Speech-Based Emotion Recognition