frankaging / Multimodal-Transformer
Attention-Based Multi-modal Emotion Recognition; Stanford Emotional Narratives Dataset
☆17 · Updated 5 years ago
Alternatives and similar repositories for Multimodal-Transformer:
Users interested in Multimodal-Transformer are comparing it to the libraries listed below.
- Accompanying code to reproduce the baselines of the International Multimodal Sentiment Analysis Challenge (MuSe 2020). ☆16 · Updated 2 years ago
- Multimodal emotion recognition in video using feature-level fusion of audio and visual modalities ☆14 · Updated 6 years ago
- Supplementary code for the K-EmoCon dataset ☆24 · Updated 3 years ago
- A survey of deep multimodal emotion recognition. ☆53 · Updated 2 years ago
- Generalized cross-modal NNs; new audiovisual benchmark (IEEE TNNLS 2019) ☆25 · Updated 4 years ago
- 🔆 📝 A reading list focused on Multimodal Emotion Recognition (MER) 👂👄 👀 💬 ☆120 · Updated 4 years ago
- Baseline scripts of the 8th Audio/Visual Emotion Challenge (AVEC 2018) ☆57 · Updated 6 years ago
- IEEE T-BIOM: "Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention" ☆34 · Updated last month
- IEEE Transactions on Affective Computing 2023 ☆16 · Updated 11 months ago
- FG2021: Cross Attentional AV Fusion for Dimensional Emotion Recognition ☆26 · Updated last month
- Video classification using the UCF101 dataset for action recognition. We extract SIFT, MFCC, and STIP features from the videos and encode … ☆28 · Updated 4 years ago
- Repository for the OMG Emotion Challenge ☆88 · Updated last month
- ☆11 · Updated 5 years ago
- A list of pain recognition databases that are publicly available for research ☆73 · Updated 3 years ago
- Code for the paper "Fusing Body Posture with Facial Expressions for Joint Recognition of Affect in Child-Robot Interaction" ☆20 · Updated 3 years ago
- Toolbox for Emotion Analysis using Physiological signals ☆58 · Updated 2 years ago
- Multimodal Fusion, Multimodal Sentiment Analysis ☆21 · Updated 4 years ago
- [AAAI2021] A repository of Contrastive Adversarial Learning for Person-independent FER ☆14 · Updated 3 years ago
- The code for our IEEE Access (2020) paper "Multimodal Emotion Recognition with Transformer-Based Self-Supervised Feature Fusion" ☆116 · Updated 3 years ago
- ABAW3 (CVPRW): A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition ☆38 · Updated last year
- PyTorch code for "M³T: Multi-Modal Multi-Task Learning for Continuous Valence-Arousal Estimation" ☆24 · Updated 4 years ago
- Tool for online Valence and Arousal annotation. ☆35 · Updated 4 years ago
- Using deep recurrent networks to recognize horses' pain expressions in video. ☆27 · Updated 2 years ago
- Reproducing the baselines of the 2nd Multimodal Sentiment Analysis Challenge (MuSe 2021) ☆39 · Updated 3 years ago
- Multimodal Deep Learning Framework for Mental Disorder Recognition @ FG'20 ☆38 · Updated 2 years ago
- ☆17 · Updated 2 years ago
- ☆15 · Updated 4 years ago
- A Pytorch implementation of emotion recognition from videos ☆16 · Updated 4 years ago
- [ICLR 2019] Learning Factorized Multimodal Representations ☆67 · Updated 4 years ago