matthijsvk / multimodalSR
Multimodal speech recognition using lipreading (with CNNs) and audio (using LSTMs). Sensor fusion is done with an attention network.
☆69 · Updated 2 years ago
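For orientation, below is a minimal, hypothetical PyTorch sketch of the architecture the description outlines: a CNN encodes lip-region frames, an LSTM encodes audio features, and an attention network weights the two modality embeddings before classification. This is not the repository's actual code; all layer sizes, class names, and input shapes are illustrative assumptions.

```python
# A minimal sketch (not multimodalSR's actual code) of CNN lipreading +
# LSTM audio encoding with attention-based sensor fusion. All dimensions
# and names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusionASR(nn.Module):
    def __init__(self, n_classes=39, dim=256):
        super().__init__()
        # Visual stream: a small CNN applied to each lip-region frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )
        # Audio stream: an LSTM over acoustic feature frames (e.g. 13 MFCCs).
        self.lstm = nn.LSTM(input_size=13, hidden_size=dim, batch_first=True)
        # Attention network: scores each modality embedding; softmax over the two.
        self.attn = nn.Linear(dim, 1)
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, frames, audio):
        # frames: (B, T_v, 1, H, W) lip crops; audio: (B, T_a, 13) MFCC frames.
        b, t = frames.shape[:2]
        vis = self.cnn(frames.flatten(0, 1)).view(b, t, -1).mean(dim=1)  # (B, dim)
        _, (h, _) = self.lstm(audio)
        aud = h[-1]                                                      # (B, dim)
        stacked = torch.stack([vis, aud], dim=1)                         # (B, 2, dim)
        weights = F.softmax(self.attn(stacked), dim=1)                   # (B, 2, 1)
        fused = (weights * stacked).sum(dim=1)                           # (B, dim)
        return self.classifier(fused)

# Smoke test with dummy shapes.
model = AttentionFusionASR()
logits = model(torch.randn(2, 8, 1, 48, 48), torch.randn(2, 100, 13))
print(logits.shape)  # torch.Size([2, 39])
```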
Alternatives and similar repositories for multimodalSR
Users who are interested in multimodalSR are comparing it to the repositories listed below
- A PyTorch implementation of "Automatic Speech Emotion Recognition Using Recurrent Neural Networks with Local Attention" ☆41 · Updated 6 years ago
- Implementation of the paper "Improved End-to-End Speech Emotion Recognition Using Self Attention Mechanism and Multitask Learning" from I… ☆57 · Updated 4 years ago
- Supporting code for "Emotion Recognition in Speech using Cross-Modal Transfer in the Wild" ☆102 · Updated 5 years ago
- TensorFlow implementation of "Attentive Modality Hopping for Speech Emotion Recognition," ICASSP-20 ☆32 · Updated 4 years ago
- Audio-Visual Speech Recognition using Deep Learning ☆60 · Updated 6 years ago
- Audio-Visual Speech Recognition using Sequence to Sequence Models ☆82 · Updated 4 years ago
- Work inspired by the Microsoft Research project on speech emotion recognition (SER) using extreme learning machines (ELM) ☆19 · Updated 6 years ago
- Feature extraction of the speech signal is the initial stage of any speech recognition system. ☆92 · Updated 4 years ago
- [ICASSP19] An Interaction-aware Attention Network for Speech Emotion Recognition in Spoken Dialogs ☆36 · Updated 5 years ago
- ☆59 · Updated 7 years ago
- Generalized cross-modal NNs; new audiovisual benchmark (IEEE TNNLS 2019)