abedidev / ResNet-TCN
☆42 · Updated 3 years ago
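For orientation, the sketch below illustrates the general pattern the repo name suggests: a ResNet backbone extracting per-frame features that a temporal convolutional network (TCN) aggregates into a clip-level prediction (e.g., engagement regression on DAiSEE). The class name, layer sizes, and regression head are illustrative assumptions, not the repository's actual code.

```python
# Minimal sketch (assumed, not the repo's code): per-frame ResNet features -> 1D TCN -> clip-level score.
# Requires torch and torchvision >= 0.13 (for the `weights=` argument).
import torch
import torch.nn as nn
from torchvision.models import resnet18


class ResNetTCNSketch(nn.Module):
    def __init__(self, feat_dim=512, tcn_channels=(256, 128), kernel_size=3, num_outputs=1):
        super().__init__()
        backbone = resnet18(weights=None)      # per-frame CNN feature extractor
        backbone.fc = nn.Identity()            # keep the 512-d pooled features
        self.backbone = backbone

        layers, in_ch = [], feat_dim
        for i, out_ch in enumerate(tcn_channels):
            dilation = 2 ** i                  # exponentially growing temporal receptive field
            layers += [
                nn.Conv1d(in_ch, out_ch, kernel_size,
                          padding=(kernel_size - 1) * dilation, dilation=dilation),
                nn.ReLU(),
            ]
            in_ch = out_ch
        self.tcn = nn.Sequential(*layers)
        self.head = nn.Linear(in_ch, num_outputs)    # clip-level prediction (e.g., engagement level)

    def forward(self, clips):                  # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1))    # (b*t, 512)
        feats = feats.view(b, t, -1).transpose(1, 2)  # (b, 512, t) for Conv1d
        temporal = self.tcn(feats)[..., :t]    # trim the conv padding back to t steps
        return self.head(temporal.mean(dim=-1))       # average over time, then predict


# Example: a batch of 2 clips, 8 frames each, 112x112 RGB
scores = ResNetTCNSketch()(torch.randn(2, 8, 3, 112, 112))
print(scores.shape)  # torch.Size([2, 1])
```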
Alternatives and similar repositories for ResNet-TCN:
Users interested in ResNet-TCN are comparing it to the repositories listed below.
- Implementation of Engagement Recognition using the DAiSEE dataset ☆18 · Updated last year
- Deep Attentive Center Loss ☆62 · Updated 3 years ago
- Unofficial implementation of EmoNet: "Estimation of continuous valence and arousal levels from faces in naturalistic conditions", published in Na… ☆19 · Updated 2 years ago
- Speech Emotion Classification with a novel Parallel CNN-Transformer model built with PyTorch, plus thorough explanations of CNNs, Transform… ☆241 · Updated 4 years ago
- This repository provides the implementation for the paper "Self-attention fusion for audiovisual emotion recognition with incomplete data". ☆120 · Updated 4 months ago
- ☆11 · Updated 3 years ago
- Two-stage Temporal Modelling Framework for Video-based Depression Recognition using Graph Representation ☆21 · Updated last month
- Detecting depression levels in employees from videos of the DAIC-WOZ dataset using LSTMs and Facial Action Units as input. ☆25 · Updated 5 years ago
- IEEE T-BIOM: "Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention" ☆35 · Updated 2 months ago
- ☆28 · Updated 2 years ago
- ABAW3 (CVPRW): A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition ☆38 · Updated last year
- A PyTorch implementation of emotion recognition from videos ☆16 · Updated 4 years ago
- Code for the paper "POSTER: A Pyramid Cross-Fusion Transformer Network for Facial Expression Recognition" ☆49 · Updated last year
- Code for the paper "Fusing Body Posture with Facial Expressions for Joint Recognition of Affect in Child-Robot Interaction" ☆20 · Updated 3 years ago
- Automatic Recognition of Student Engagement using Deep Learning and Facial Expression ☆68 · Updated 3 years ago
- Skeleton-Based Emotion Recognition Based on Two-Stream Self-Attention Enhanced Spatial-Temporal Graph Convolutional Network ☆12 · Updated 4 years ago
- ☆65 · Updated 4 months ago
- The code for our IEEE Access (2020) paper "Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion". ☆116 · Updated 3 years ago
- This is the repository containing the solution for the FG-2020 ABAW Competition ☆115 · Updated 8 months ago
- Official source code for the paper "Reading Between the Frames: Multi-Modal Non-Verbal Depression Detection in Videos" ☆53 · Updated 8 months ago
- A Jupyter notebook showing how to fine-tune a vision transformer on a facial expression dataset (FER-2013) ☆30 · Updated 3 years ago
- A survey of deep multimodal emotion recognition. ☆53 · Updated 2 years ago
- [TIP'21] Learning Deep Global Multi-scale and Local Attention Features for Facial Expression Recognition in the Wild ☆86 · Updated last year
- Recognizing Micro-Expression in Video Clip with Adaptive Key-Frame Mining ☆26 · Updated 3 years ago
- Reproducing the baselines of the 2nd Multimodal Sentiment Analysis Challenge (MuSe 2021) ☆39 · Updated 3 years ago
- [AAAI 2020] Official implementation of VAANet for Emotion Recognition ☆77 · Updated last year
- A Fully End2End Multimodal System for Fast Yet Effective Video Emotion Recognition ☆32 · Updated 5 months ago
- [MM'21] Former-DFER: Dynamic Facial Expression Recognition Transformer ☆77 · Updated 2 years ago
- We achieved 2nd and 3rd places in ABAW3 and ABAW5, respectively. ☆27 · Updated 10 months ago
- The baseline model of the CMDC corpus ☆36 · Updated 2 years ago