matthijsvk / multimodalSR
Multimodal speech recognition using lipreading (with CNNs) and audio (using LSTMs). Sensor fusion is done with an attention network.
☆69 · Updated 2 years ago
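The architecture described above (per-frame CNN lipreading features, LSTM audio features, attention-weighted fusion) can be sketched roughly as follows. This is a minimal illustrative PyTorch sketch, not the repository's actual code: all class names, layer sizes, and input dimensions are assumptions.

```python
# Hypothetical sketch of attention-based sensor fusion: mouth-crop frames go
# through a small CNN, audio frames through an LSTM, and an attention network
# weights the two modality embeddings before classification. Dimensions are
# illustrative only.
import torch
import torch.nn as nn


class AttentionFusionASR(nn.Module):
    def __init__(self, n_classes=39, feat_dim=64):
        super().__init__()
        # Lipreading branch: CNN over (B*T, 1, 24, 24) mouth crops
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(16, feat_dim),
        )
        # Audio branch: LSTM over (B, T, 26) filterbank-style feature frames
        self.lstm = nn.LSTM(26, feat_dim, batch_first=True)
        # Attention network: scores each modality embedding, softmax-normalized
        self.attn = nn.Linear(feat_dim, 1)
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, video, audio):
        b, t = video.shape[:2]
        v = self.cnn(video.reshape(b * t, *video.shape[2:])).reshape(b, t, -1)
        v = v.mean(dim=1)                       # pool video features over time
        a, _ = self.lstm(audio)
        a = a[:, -1]                            # last audio hidden state
        m = torch.stack([v, a], dim=1)          # (B, 2, feat_dim)
        w = torch.softmax(self.attn(m), dim=1)  # (B, 2, 1) modality weights
        fused = (w * m).sum(dim=1)              # attention-weighted fusion
        return self.classifier(fused), w.squeeze(-1)


model = AttentionFusionASR()
video = torch.randn(4, 10, 1, 24, 24)  # batch of 10-frame mouth-crop clips
audio = torch.randn(4, 10, 26)         # matching audio feature frames
logits, weights = model(video, audio)
print(logits.shape, weights.shape)     # torch.Size([4, 39]) torch.Size([4, 2])
```

The learned weights let the model lean on video in noisy audio and on audio when the mouth region is occluded, which is the usual motivation for attention-based fusion over simple feature concatenation.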
Alternatives and similar repositories for multimodalSR
Users interested in multimodalSR are comparing it to the repositories listed below:
- TensorFlow implementation of "Attentive Modality Hopping for Speech Emotion Recognition," ICASSP-20 ☆32 · Updated 4 years ago
- Supporting code for "Emotion Recognition in Speech using Cross-Modal Transfer in the Wild" ☆103 · Updated 5 years ago
- [ICASSP19] An Interaction-aware Attention Network for Speech Emotion Recognition in Spoken Dialogs ☆35 · Updated 5 years ago
- Implementation of the paper "Improved End-to-End Speech Emotion Recognition Using Self Attention Mechanism and Multitask Learning" From I… ☆57 · Updated 4 years ago
- The details that matter: Frequency resolution of spectrograms in acoustic scene classification - paper replication data ☆39 · Updated 7 years ago
- A PyTorch implementation of "Automatic Speech Emotion Recognition Using Recurrent Neural Networks with Local Attention" ☆41 · Updated 6 years ago
- ☆110 · Updated 2 years ago
- Audio-Visual Speech Recognition using Deep Learning ☆60 · Updated 6 years ago
- Baseline scripts of the 8th Audio/Visual Emotion Challenge (AVEC 2018) ☆59 · Updated 6 years ago
- Audio-Visual Speech Recognition using Sequence to Sequence Models ☆82 · Updated 4 years ago
- Speech Emotion Recognition Using Deep Convolutional Neural Network and Discriminant Temporal Pyramid Matching ☆52 · Updated 7 years ago
- Inspired by the SER-using-ELM project at Microsoft Research ☆19 · Updated 6 years ago
- Classify the emotions from variable-length speech segments ☆11 · Updated 7 years ago
- Adversarial Auto-encoders for Speech-Based Emotion Recognition ☆14 · Updated 6 years ago
- openXBOW - the Passau Open-Source Crossmodal Bag-of-Words Toolkit ☆82 · Updated 4 years ago
- Feature extraction of the speech signal is the initial stage of any speech recognition system. ☆93 · Updated 4 years ago
- Code repository for "Speech Emotion Recognition Using Voiced Speech and Attention Model," submitted to ICSigSys 2019 ☆13 · Updated 5 years ago
- This code implements a basic MLP for speech recognition. The MLP is trained with PyTorch, while feature extraction, alignments, and dec… ☆38 · Updated 7 years ago
- CTC for emotion recognition ☆60 · Updated 8 years ago
- Adversarial Unsupervised Domain Adaptation for Acoustic Scene Classification ☆35 · Updated 6 years ago
- Generalized cross-modal NNs; new audiovisual benchmark (IEEE TNNLS 2019) ☆27 · Updated 5 years ago
- Processing and extraction of face and mouth image files from the TCDTIMIT database ☆45 · Updated 4 years ago
- Face Landmark-based Speaker-Independent Audio-Visual Speech Enhancement in Multi-Talker Environments ☆108 · Updated last year
- ☆59 · Updated 7 years ago
- Multi-modal Speech Emotion Recognition on the IEMOCAP dataset ☆89 · Updated last year
- Code for our paper "Acoustic Features Fusion using Attentive Multi-channel Deep Architecture" in Keras and TensorFlow ☆26 · Updated 6 years ago
- Neural network based similarity scoring for diarization (PyTorch implementation of "LSTM based Similarity Measurement with Spectral Clust… ☆44 · Updated 4 years ago
- Live demo for speech emotion recognition using Keras and TensorFlow models ☆39 · Updated 10 months ago
- Python toolkit for Visual Speech Recognition ☆37 · Updated 5 years ago
- Repository for "Weak Label Learning for Audio Events - A Closer Look." Uses the AudioSet subset data provided for reproducibility. ☆32 · Updated last year