tzirakis / Multimodal-Emotion-Recognition
This repository contains the code for the paper "End-to-End Multimodal Emotion Recognition using Deep Neural Networks".
⭐245 · Updated 4 years ago
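The paper's approach is end-to-end: convolutional networks operate directly on the raw speech waveform and on face frames, and a recurrent layer models the temporal dynamics of the fused features. As a rough orientation, here is a minimal PyTorch sketch of that general shape; all layer sizes, kernel widths, and the two-output (arousal/valence) head are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of an end-to-end audio+visual emotion model (PyTorch).
# All dimensions below are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn

class MultimodalEmotionNet(nn.Module):
    def __init__(self, num_outputs=2):  # e.g. arousal and valence
        super().__init__()
        # Audio branch: 1D convolutions over the raw waveform of each frame.
        self.audio_net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=8, stride=2), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=8, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # -> (batch*time, 64, 1)
        )
        # Visual branch: 2D convolutions over face crops.
        self.visual_net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # -> (batch*time, 64, 1, 1)
        )
        # Temporal model over the fused per-frame features.
        self.rnn = nn.LSTM(input_size=128, hidden_size=64, num_layers=2,
                           batch_first=True)
        self.head = nn.Linear(64, num_outputs)

    def forward(self, audio, video):
        # audio: (batch, time, samples_per_frame); video: (batch, time, 3, H, W)
        b, t = audio.shape[:2]
        a = self.audio_net(audio.reshape(b * t, 1, -1)).reshape(b, t, 64)
        v = self.visual_net(video.reshape(b * t, *video.shape[2:])).reshape(b, t, 64)
        fused = torch.cat([a, v], dim=-1)       # feature-level fusion
        out, _ = self.rnn(fused)                # temporal context
        return self.head(out)                   # per-frame predictions

model = MultimodalEmotionNet()
preds = model(torch.randn(2, 10, 1600), torch.randn(2, 10, 3, 64, 64))
print(preds.shape)  # torch.Size([2, 10, 2])
```

Concatenating the per-frame audio and visual features before the LSTM is the simplest fusion choice; the repositories below explore many variations on it.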
Alternatives and similar repositories for Multimodal-Emotion-Recognition:
Users interested in Multimodal-Emotion-Recognition are comparing it to the repositories listed below.
- A reading list focused on Multimodal Emotion Recognition (MER) ⭐121 · Updated 4 years ago
- Multi-modal Emotion detection from IEMOCAP on Speech, Text, Motion-Capture Data using Neural Nets. ⭐160 · Updated 4 years ago
- ⭐110 · Updated 2 years ago
- Human Emotion Understanding using a multimodal dataset. ⭐96 · Updated 4 years ago
- ⭐89 · Updated 6 years ago
- Baseline scripts of the 8th Audio/Visual Emotion Challenge (AVEC 2018) ⭐58 · Updated 6 years ago
- This repository provides an implementation of the paper "Self-attention fusion for audiovisual emotion recognition with incomplete data". ⭐128 · Updated 6 months ago
- This is a short tutorial for using the CMU-MultimodalSDK. ⭐82 · Updated 6 years ago
- TensorFlow implementation of "Multimodal Speech Emotion Recognition using Audio and Text," IEEE SLT-18 ⭐277 · Updated 9 months ago
- Repository for the OMG Emotion Challenge ⭐88 · Updated 3 months ago
- A Fully End2End Multimodal System for Fast Yet Effective Video Emotion Recognition ⭐37 · Updated 7 months ago
- Attention-based multimodal fusion for sentiment analysis (see the fusion sketch after this list). ⭐345 · Updated 11 months ago
- Multi-modal Speech Emotion Recognition on the IEMOCAP dataset ⭐89 · Updated last year
- Lightweight and Interpretable ML Model for Speech Emotion Recognition and Ambiguity Resolution (trained on IEMOCAP dataset) ⭐414 · Updated last year
- The code repository for NAACL 2021 paper "Multimodal End-to-End Sparse Model for Emotion Recognition". ⭐99 · Updated 2 years ago
- A repository for emotion recognition from speech, text and mocap data from IEMOCAP dataset ⭐13 · Updated 6 years ago
- The code for our IEEE Access (2020) paper "Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion". ⭐117 · Updated 3 years ago
- ⭐17 · Updated last year
- A survey of deep multimodal emotion recognition. ⭐52 · Updated 2 years ago
- Repository with the code of the paper: A proposal for Multimodal Emotion Recognition using aural transformers and Action Units on RAVDESS … ⭐105 · Updated last year
- Multimodal Emotion Recognition in a video using feature-level fusion of audio and visual modalities ⭐15 · Updated 6 years ago
- ⭐14 · Updated 6 years ago
- Baseline scripts for the Audio/Visual Emotion Challenge 2019 ⭐77 · Updated 3 years ago
- A Transformer-based joint-encoding for Emotion Recognition and Sentiment Analysis ⭐122 · Updated last month
- Predict valence and arousal from video data. Separate training of CNN and RNN. Feed RNN with simple feature vectors extracted from frames… ⭐41 · Updated 4 years ago
- [AAAI 2018] Memory Fusion Network for Multi-view Sequential Learning ⭐113 · Updated 4 years ago
- Scripts used in the research described in the paper "Multimodal Emotion Recognition with High-level Speech and Text Features" accepted in… ⭐52 · Updated 3 years ago
- Multimodal emotion recognition system combining an attention-based vision network with an audio network ⭐14 · Updated 4 years ago
- Modality-Transferable-MER, multimodal emotion recognition model with zero-shot and few-shot abilities. ⭐62 · Updated 3 years ago
- Multimodal (text, acoustic, visual) Sentiment Analysis and Emotion Recognition on CMU-MOSEI dataset. ⭐25 · Updated 4 years ago
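Several of the repositories above differ mainly in how they combine modality features; the attention-based fusion entry, for instance, learns a weight per modality rather than using plain concatenation. Below is a minimal, hypothetical sketch of that idea; the class name, dimensions, and scalar scoring layer are assumptions for illustration, not any listed repository's implementation.

```python
# Hypothetical sketch of attention-based modality fusion: each modality's
# feature vector gets a learned softmax weight before the weighted sum.
# Names and dimensions are illustrative assumptions only.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # one scalar score per modality

    def forward(self, modality_feats):
        # modality_feats: (batch, num_modalities, dim)
        scores = self.score(modality_feats)           # (batch, M, 1)
        weights = torch.softmax(scores, dim=1)        # attention over modalities
        return (weights * modality_feats).sum(dim=1)  # (batch, dim)

fusion = AttentionFusion(dim=64)
text, audio, visual = (torch.randn(8, 64) for _ in range(3))
fused = fusion(torch.stack([text, audio, visual], dim=1))
print(fused.shape)  # torch.Size([8, 64])
```

Because the softmax makes the weights input-dependent, the model can, for example, let acoustic features dominate when the text is uninformative.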