matthijsvk / multimodalSR
Multimodal speech recognition using lipreading (with CNNs) and audio (using LSTMs). Sensor fusion is done with an attention network.
☆69 · Updated 2 years ago
Alternatives and similar repositories for multimodalSR:
Users interested in multimodalSR are comparing it to the repositories listed below.
- Implementation of the paper "Improved End-to-End Speech Emotion Recognition Using Self Attention Mechanism and Multitask Learning" From I… ☆57 · Updated 4 years ago
- Baseline scripts of the 8th Audio/Visual Emotion Challenge (AVEC 2018) ☆58 · Updated 6 years ago
- Audio-Visual Speech Recognition using Sequence to Sequence Models ☆82 · Updated 4 years ago
- Supporting code for "Emotion Recognition in Speech using Cross-Modal Transfer in the Wild" ☆101 · Updated 5 years ago
- Adversarial Unsupervised Domain Adaptation for Acoustic Scene Classification ☆35 · Updated 6 years ago
- ☆110 · Updated 2 years ago
- Audio-Visual Speech Recognition using Deep Learning ☆60 · Updated 6 years ago
- A PyTorch implementation of "Automatic Speech Emotion Recognition Using Recurrent Neural Networks with Local Attention" ☆41 · Updated 6 years ago
- The details that matter: Frequency resolution of spectrograms in acoustic scene classification - paper replication data ☆39 · Updated 7 years ago
- Work inspired by the Microsoft Research project on SER using ELM ☆19 · Updated 6 years ago
- Repository for Weak Label Learning for Audio Events - A Closer Look. Uses the AudioSet subset data provided for reproducibility. ☆32 · Updated last year
- TensorFlow implementation of "Attentive Modality Hopping for Speech Emotion Recognition," ICASSP-20 ☆32 · Updated 4 years ago
- Implementation of the paper "Attentive Statistics Pooling for Deep Speaker Embedding" in PyTorch ☆43 · Updated 4 years ago
- [ICASSP19] An Interaction-aware Attention Network for Speech Emotion Recognition in Spoken Dialogs ☆35 · Updated 4 years ago
- End-to-End Multiview Lip Reading ☆10 · Updated 7 years ago
- This code implements a basic MLP for speech recognition. The MLP is trained with PyTorch, while feature extraction, alignments, and dec… ☆38 · Updated 7 years ago
- Classify emotions from variable-length speech segments ☆11 · Updated 7 years ago
- Code for Yun Wang's PhD thesis: Polyphonic Sound Event Detection with Weak Labeling ☆166 · Updated 2 years ago
- Face Landmark-based Speaker-Independent Audio-Visual Speech Enhancement in Multi-Talker Environments ☆107 · Updated last year
- Adversarial Auto-encoders for Speech-Based Emotion Recognition ☆14 · Updated 6 years ago
- Generalized cross-modal NNs; new audiovisual benchmark (IEEE TNNLS 2019) ☆26 · Updated 5 years ago
- Feature extraction of the speech signal is the initial stage of any speech recognition system. ☆92 · Updated 4 years ago
- ☆58 · Updated 7 years ago
- ☆60 · Updated 4 years ago
- SE-ResNet + AM-Softmax for Speaker Verification ☆47 · Updated 6 years ago
- DCASE 2018 Baseline systems ☆129 · Updated 5 years ago
- Processing and extraction of face and mouth image files from the TCDTIMIT database ☆45 · Updated 4 years ago
- Code for our paper "Acoustic Features Fusion using Attentive Multi-channel Deep Architecture" in Keras and TensorFlow ☆26 · Updated 6 years ago
- ☆99 · Updated 7 years ago
- openXBOW - the Passau Open-Source Crossmodal Bag-of-Words Toolkit ☆81 · Updated 4 years ago