abedidev / ResNet-TCN
☆45 · Updated 3 years ago
Alternatives and similar repositories for ResNet-TCN
Users interested in ResNet-TCN are comparing it to the repositories listed below.
- This repository provides an implementation of the paper "Self-attention fusion for audiovisual emotion recognition with incomplete data". ☆137 · Updated 8 months ago
- Implementation of Engagement Recognition using the DAiSEE dataset ☆19 · Updated 2 years ago
- ABAW3 (CVPRW): A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition ☆45 · Updated last year
- Speech Emotion Classification with novel Parallel CNN-Transformer model built with PyTorch, plus thorough explanations of CNNs, Transform… ☆250 · Updated 4 years ago
- ☆11 · Updated 4 years ago
- IEEE T-BIOM: "Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention" ☆38 · Updated 5 months ago
- Deep Attentive Center Loss ☆61 · Updated 3 months ago
- Repository with the code of the paper: A proposal for Multimodal Emotion Recognition using aural transformers and Action Units on RAVDESS … ☆104 · Updated last year
- [AAAI'21] Robust Lightweight Facial Expression Recognition Network with Label Distribution Training ☆204 · Updated last year
- Detecting depression levels in employees from videos of the DAIC-WOZ dataset using LSTMs and Facial Action Units as input. ☆27 · Updated 6 years ago
- The code for our IEEE Access (2020) paper "Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion". ☆120 · Updated 3 years ago
- Official source code for the paper: "Reading Between the Frames: Multi-Modal Non-Verbal Depression Detection in Videos" ☆61 · Updated last year
- [AAAI 2020] Official implementation of VAANet for Emotion Recognition ☆78 · Updated last year
- Code for the paper "Fusing Body Posture with Facial Expressions for Joint Recognition of Affect in Child-Robot Interaction" ☆21 · Updated 3 years ago
- A Fully End2End Multimodal System for Fast Yet Effective Video Emotion Recognition ☆39 · Updated 9 months ago
- We achieved the 2nd and 3rd places in ABAW3 and ABAW5, respectively. ☆27 · Updated last year
- Two-stage Temporal Modelling Framework for Video-based Depression Recognition using Graph Representation ☆24 · Updated 5 months ago
- ☆28 · Updated 2 years ago
- AVEC 2013 Continuous Audio/Visual Emotion and Depression Recognition Challenge ☆23 · Updated 12 years ago
- This repository provides the code for MMA-DFER, a multimodal (audiovisual) emotion recognition method. This is an official implementation … ☆32 · Updated 8 months ago
- A PyTorch implementation of emotion recognition from videos ☆18 · Updated 4 years ago
- MAE-DFER: Efficient Masked Autoencoder for Self-supervised Dynamic Facial Expression Recognition (ACM MM 2023) ☆113 · Updated 7 months ago
- ☆27 · Updated 3 years ago
- Multi-modal fusion framework based on a Transformer Encoder ☆14 · Updated 4 years ago
- Code for selecting an action based on multimodal inputs; in this case the inputs are voice and text. ☆73 · Updated 3 years ago
- A survey of deep multimodal emotion recognition. ☆52 · Updated 3 years ago
- ☆7 · Updated last year
- Code for: POSTER: A Pyramid Cross-Fusion Transformer Network for Facial Expression Recognition ☆56 · Updated last year
- [ACM MM'21] Former-DFER: Dynamic Facial Expression Recognition Transformer ☆80 · Updated 2 years ago
- Recognizing Micro-Expression in Video Clips with Adaptive Key-Frame Mining ☆26 · Updated 4 years ago