MyungsuChae / IROS2018_ws
End-to-end multimodal emotion and gender recognition with dynamic weights of joint loss
☆10 · Updated 6 years ago
Related projects
Alternatives and complementary repositories for IROS2018_ws
- PyTorch code for "M³T: Multi-Modal Multi-Task Learning for Continuous Valence-Arousal Estimation" · ☆23 · Updated 4 years ago
- Generalized cross-modal NNs; new audiovisual benchmark (IEEE TNNLS 2019) · ☆25 · Updated 4 years ago
- (2020) Video Classification Neural Network · ☆30 · Updated 4 years ago
- Code for the paper "Audio-Visual Model Distillation Using Acoustic Images" · ☆20 · Updated last year
- FG2021: Cross Attentional AV Fusion for Dimensional Emotion Recognition · ☆26 · Updated last year
- Multimodal preprocessing on the IEMOCAP dataset · ☆12 · Updated 6 years ago
- [AAAI2021] A repository of Contrastive Adversarial Learning for Person-independent FER · ☆14 · Updated 2 years ago
- Convenience utilities for model validation · ☆23 · Updated 5 years ago
- Accompanying code to reproduce the baselines of the International Multimodal Sentiment Analysis Challenge (MuSe 2020) · ☆16 · Updated last year
- Repository for the OMG Emotion Challenge · ☆87 · Updated 5 years ago
- Unofficial implementation of EmoNet, "Estimation of continuous valence and arousal levels from faces in naturalistic conditions", published in Na… · ☆18 · Updated last year
- 🔆 📝 A reading list focused on Multimodal Emotion Recognition (MER) 👂👄 👀 💬 · ☆119 · Updated 4 years ago
- Multimodal sentiment analysis using hierarchical fusion with context modeling · ☆44 · Updated last year
- MIMAMO Net: Integrating Micro- and Macro-motion for Video Emotion Recognition · ☆58 · Updated 3 years ago
- Listen to Look: Action Recognition by Previewing Audio (CVPR 2020) · ☆127 · Updated 3 years ago
- Baseline scripts of the 8th Audio/Visual Emotion Challenge (AVEC 2018) · ☆57 · Updated 6 years ago
- Code for the Emotion Recognition in the Wild (EmotiW) challenge · ☆37 · Updated 5 years ago
- Supporting code for "Emotion Recognition in Speech using Cross-Modal Transfer in the Wild" · ☆101 · Updated 5 years ago
- Attention-based multimodal emotion recognition; Stanford Emotional Narratives Dataset · ☆17 · Updated 5 years ago
- Video classification, YouTube-8M, knowledge distillation, TensorFlow, NeXtVLAD · ☆26 · Updated 5 years ago
- PersEmoN: A Deep Network for Joint Analysis of Apparent Personality, Emotion and Their Relationship · ☆11 · Updated 4 years ago
- M-VAD Names Dataset; Multimedia Tools and Applications (2019) · ☆21 · Updated 5 years ago
- Submission to the Affective Behavior Analysis in-the-wild (ABAW) 2020 competition · ☆35 · Updated last year
- Multimodal speech recognition using lipreading (with CNNs) and audio (with LSTMs); sensor fusion is done with an attention network · ☆66 · Updated last year