tzirakis / Multimodal-Emotion-Recognition
This repository contains the code for the paper `End-to-End Multimodal Emotion Recognition using Deep Neural Networks`.
★239 · Updated 4 years ago
Alternatives and similar repositories for Multimodal-Emotion-Recognition:
Users interested in Multimodal-Emotion-Recognition are comparing it to the libraries listed below.
- ★108 · Updated 2 years ago
- Multi-modal Emotion detection from IEMOCAP on Speech, Text, Motion-Capture Data using Neural Nets. ★162 · Updated 4 years ago
- A reading list focused on Multimodal Emotion Recognition (MER). ★120 · Updated 4 years ago
- Human Emotion Understanding using multimodal dataset. ★93 · Updated 4 years ago
- Baseline scripts of the 8th Audio/Visual Emotion Challenge (AVEC 2018). ★57 · Updated 6 years ago
- Lightweight and Interpretable ML Model for Speech Emotion Recognition and Ambiguity Resolution (trained on the IEMOCAP dataset). ★410 · Updated last year
- TensorFlow implementation of "Multimodal Speech Emotion Recognition using Audio and Text," IEEE SLT-18. ★269 · Updated 7 months ago
- Attention-based multimodal fusion for sentiment analysis. ★333 · Updated 9 months ago
- A short tutorial for using the CMU-MultimodalSDK. ★81 · Updated 5 years ago
- Implementation for the paper "Self-attention fusion for audiovisual emotion recognition with incomplete data". ★120 · Updated 4 months ago
- ★89 · Updated 6 years ago
- Repository for the OMG Emotion Challenge. ★89 · Updated last month
- Baseline scripts for the Audio/Visual Emotion Challenge 2019. ★77 · Updated 2 years ago
- Multi-modal Speech Emotion Recognition on the IEMOCAP dataset. ★87 · Updated last year
- A repository for emotion recognition from speech, text, and mocap data from the IEMOCAP dataset. ★13 · Updated 6 years ago
- A Transformer-based joint encoding for Emotion Recognition and Sentiment Analysis. ★120 · Updated 2 years ago
- Speech emotion recognition using a convolutional recurrent network based on IEMOCAP. ★391 · Updated 5 years ago
- Bidirectional LSTM network for speech emotion recognition. ★263 · Updated 5 years ago
- Multimodal Emotion Recognition in video using feature-level fusion of audio and visual modalities. ★14 · Updated 6 years ago
- Code for the IEEE Access (2020) paper "Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion". ★116 · Updated 3 years ago
- Code for the paper "A proposal for Multimodal Emotion Recognition using aural transformers and Action Units on RAVDESS …". ★100 · Updated 10 months ago
- Supporting code for "Emotion Recognition in Speech using Cross-Modal Transfer in the Wild". ★102 · Updated 5 years ago
- Code repository for the NAACL 2021 paper "Multimodal End-to-End Sparse Model for Emotion Recognition". ★98 · Updated last year
- Modality-Transferable-MER, a multimodal emotion recognition model with zero-shot and few-shot abilities. ★61 · Updated 3 years ago
- A survey of deep multimodal emotion recognition. ★53 · Updated 2 years ago
- Code for the Emotion Recognition in the Wild (EmotiW) challenge. ★38 · Updated 6 years ago
- Multimodal preprocessing on the IEMOCAP dataset. ★11 · Updated 6 years ago
- Reproducing the baselines of the 2nd Multimodal Sentiment Analysis Challenge (MuSe 2021). ★39 · Updated 3 years ago
- PyTorch implementation of Tensor Fusion Networks for multimodal sentiment analysis. ★180 · Updated 4 years ago
- Scripts used in the research described in the paper "Multimodal Emotion Recognition with High-level Speech and Text Features", accepted in… ★51 · Updated 3 years ago