MyungsuChae / IROS2018_ws
End-to-end multimodal emotion and gender recognition with dynamic weights of joint loss
☆9 · Updated 6 years ago
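The repository's description mentions "dynamic weights of joint loss" for the two tasks (emotion and gender recognition). As a rough illustration only, the sketch below shows one common way such dynamic task weighting is implemented, namely learnable uncertainty-based task weights in the style of Kendall et al.; the class name `DynamicJointLoss` and all tensor names are hypothetical, and this is not the repository's actual code.

```python
# Illustrative sketch only, not the repository's implementation: a joint
# emotion + gender loss where the per-task weights are learned (one
# log-variance per task) instead of being fixed hyperparameters.
import torch
import torch.nn as nn

class DynamicJointLoss(nn.Module):
    """Joint emotion + gender loss with learnable task weights (hypothetical)."""
    def __init__(self):
        super().__init__()
        # log(sigma^2) per task, optimized jointly with the network weights
        self.log_var_emotion = nn.Parameter(torch.zeros(1))
        self.log_var_gender = nn.Parameter(torch.zeros(1))
        self.ce = nn.CrossEntropyLoss()

    def forward(self, emotion_logits, emotion_labels, gender_logits, gender_labels):
        loss_e = self.ce(emotion_logits, emotion_labels)
        loss_g = self.ce(gender_logits, gender_labels)
        # Each task loss is scaled by exp(-log_var); the +log_var regularizer
        # keeps the learned weights from collapsing toward zero.
        weighted = (torch.exp(-self.log_var_emotion) * loss_e + self.log_var_emotion
                    + torch.exp(-self.log_var_gender) * loss_g + self.log_var_gender)
        return weighted.squeeze()

# Toy usage: 4 samples, 7 emotion classes, 2 gender classes
criterion = DynamicJointLoss()
e_logits, g_logits = torch.randn(4, 7), torch.randn(4, 2)
e_labels, g_labels = torch.randint(0, 7, (4,)), torch.randint(0, 2, (4,))
loss = criterion(e_logits, e_labels, g_logits, g_labels)
loss.backward()  # gradients reach the learnable log-variance parameters
```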
Alternatives and similar repositories for IROS2018_ws
Users interested in IROS2018_ws are comparing it to the repositories listed below
- Generalized cross-modal NNs; new audiovisual benchmark (IEEE TNNLS 2019) ☆27 · Updated 5 years ago
- PyTorch code for "M³T: Multi-Modal Multi-Task Learning for Continuous Valence-Arousal Estimation" ☆24 · Updated 5 years ago
- Baseline scripts of the 8th Audio/Visual Emotion Challenge (AVEC 2018) ☆59 · Updated 6 years ago
- ☆110 · Updated 2 years ago
- [ICCV'21] The Right to Talk: An Audio-Visual Transformer Approach ☆20 · Updated 3 years ago
- ☆89 · Updated 6 years ago
- Accompanying code to reproduce the baselines of the International Multimodal Sentiment Analysis Challenge (MuSe 2020). ☆16 · Updated 2 years ago
- [AAAI2021] A repository of Contrastive Adversarial Learning for Person-independent FER ☆16 · Updated 3 years ago
- PersEmoN: A Deep Network for Joint Analysis of Apparent Personality, Emotion and Their Relationship ☆12 · Updated 5 years ago
- Code for the paper: Audio-Visual Model Distillation Using Acoustic Images ☆21 · Updated 2 years ago
- Supporting code for "Emotion Recognition in Speech using Cross-Modal Transfer in the Wild" ☆103 · Updated 5 years ago
- The proposed method in LRW-1000: A Naturally-Distributed Large-Scale Benchmark for Lip Reading in the Wild ☆26 · Updated 6 years ago
- MIMAMO Net: Integrating Micro- and Macro-motion for Video Emotion Recognition ☆60 · Updated 4 years ago
- Repository for the OMG Emotion Challenge ☆92 · Updated 6 months ago
- Code for the Emotion Recognition in the Wild (EmotiW) challenge ☆38 · Updated 6 years ago
- Code for the Active Speakers in Context Paper (CVPR2020) ☆54 · Updated 4 years ago
- Adversarial Unsupervised Domain Adaptation for Acoustic Scene Classification ☆35 · Updated 6 years ago
- Convenience utilities for model validation ☆23 · Updated 6 years ago
- TF code for our CVPR2020 paper "Discriminative Multi-modality Speech Recognition" ☆26 · Updated 3 years ago
- Implementation of the paper "Improved End-to-End Speech Emotion Recognition Using Self Attention Mechanism and Multitask Learning" From I… ☆57 · Updated 4 years ago
- Video classification using the UCF101 dataset for action recognition. We extract SIFT, MFCC and STIP features from the videos, we encode … ☆28 · Updated 4 years ago
- Code for our paper "Acoustic Features Fusion using Attentive Multi-channel Deep Architecture" in Keras and tensorflow ☆26 · Updated 6 years ago
- Tool for online Valence and Arousal annotation. ☆35 · Updated 4 years ago
- Video classification, youtube8m, Knowledge distillation, Tensorflow, NeXtVLAD ☆27 · Updated 5 years ago
- Multimodal speech recognition using lipreading (with CNNs) and audio (using LSTMs). Sensor fusion is done with an attention network. ☆69 · Updated 2 years ago
- Multimodal preprocessing on IEMOCAP dataset ☆12 · Updated 7 years ago
- ☆11 · Updated 6 years ago
- Code for Group-Level Emotion Recognition Using Hybrid Deep Models Based on Faces, Scenes, Skeletons and Visual Attentions ☆17 · Updated 6 years ago
- Official implementation of FOP method as described in "Fusion and Orthogonal Projection for Improved Face-Voice Association" ☆19 · Updated last year
- A Pytorch implementation of 'AUTOMATIC SPEECH EMOTION RECOGNITION USING RECURRENT NEURAL NETWORKS WITH LOCAL ATTENTION' ☆41 · Updated 6 years ago