ankurbhatia24 / MULTIMODAL-EMOTION-RECOGNITION
Human Emotion Understanding using a multimodal dataset.
☆97 · Updated 4 years ago
Alternatives and similar repositories for MULTIMODAL-EMOTION-RECOGNITION:
Users interested in MULTIMODAL-EMOTION-RECOGNITION are comparing it to the repositories listed below.
- A reading list focused on Multimodal Emotion Recognition (MER). ☆121 · Updated 4 years ago
- Repository with the code of the paper: A proposal for Multimodal Emotion Recognition using aural transformers and Action Units on RAVDESS … ☆104 · Updated last year
- This repository contains the code for the paper `End-to-End Multimodal Emotion Recognition using Deep Neural Networks`. ☆245 · Updated 4 years ago
- Scripts used in the research described in the paper "Multimodal Emotion Recognition with High-level Speech and Text Features" accepted in… ☆53 · Updated 3 years ago
- This repository provides the implementation for the paper "Self-attention fusion for audiovisual emotion recognition with incomplete data". ☆133 · Updated 7 months ago
- A survey of deep multimodal emotion recognition. ☆52 · Updated 2 years ago
- A multimodal approach to emotion recognition using audio and text. ☆174 · Updated 4 years ago
- A Fully End2End Multimodal System for Fast Yet Effective Video Emotion Recognition. ☆38 · Updated 8 months ago
- ☆17 · Updated last year
- The code repository for the NAACL 2021 paper "Multimodal End-to-End Sparse Model for Emotion Recognition". ☆102 · Updated 2 years ago
- The code for our IEEE ACCESS (2020) paper "Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion". ☆119 · Updated 3 years ago
- IEEE T-BIOM: "Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention". ☆38 · Updated 4 months ago
- Lightweight and Interpretable ML Model for Speech Emotion Recognition and Ambiguity Resolution (trained on the IEMOCAP dataset). ☆415 · Updated last year
- Detecting depression levels in employees from videos of the DAIC-WOZ dataset using LSTMs and Facial Action Units as input. ☆27 · Updated 6 years ago
- MultiModal Sentiment Analysis architectures for CMU-MOSEI. ☆43 · Updated 2 years ago
- Multi-modal emotion detection from IEMOCAP on speech, text, and motion-capture data using neural nets. ☆162 · Updated 4 years ago
- Multimodal preprocessing on the IEMOCAP dataset. ☆11 · Updated 6 years ago
- TensorFlow implementation of "Multimodal Speech Emotion Recognition using Audio and Text", IEEE SLT-18. ☆278 · Updated 10 months ago
- Reproducing the baselines of the 2nd Multimodal Sentiment Analysis Challenge (MuSe 2021). ☆40 · Updated 3 years ago
- This repository provides the code for MMA-DFER, a multimodal (audiovisual) emotion recognition method. This is an official implementation … ☆30 · Updated 7 months ago
- This is a short tutorial for using the CMU-MultimodalSDK. ☆84 · Updated 6 years ago
- Modality-Transferable-MER, a multimodal emotion recognition model with zero-shot and few-shot abilities. ☆64 · Updated 4 years ago
- The code for our INTERSPEECH 2020 paper "Jointly Fine-Tuning 'BERT-like' Self Supervised Models to Improve Multimodal Speech Emotion R…" ☆120 · Updated 4 years ago
- A PyTorch implementation of emotion recognition from videos. ☆18 · Updated 4 years ago
- A Transformer-based joint-encoding for Emotion Recognition and Sentiment Analysis. ☆124 · Updated 2 months ago
- Baseline scripts for the Audio/Visual Emotion Challenge 2019. ☆79 · Updated 3 years ago
- ☆110 · Updated 2 years ago
- Automatic Depression Detection by Multi-model Ensemble, based on the DAIC-WOZ dataset. ☆33 · Updated 4 years ago
- Reproduction of DepAudioNet by Ma et al., "DepAudioNet: An Efficient Deep Model for Audio based Depression Classification" (https://dl.acm.…). ☆75 · Updated 3 years ago
- ☆88 · Updated 2 years ago