praveena2j / JointCrossAttentional-AV-Fusion
ABAW3 (CVPRW): A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition
☆45 · Updated last year
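As a rough orientation before the list of alternatives, the sketch below illustrates the general idea named in the repository title: joint cross-attention for audio-visual fusion, where attention weights are computed from the joint (concatenated) audio-visual representation and used to re-weight each modality before fusion. This is a minimal, assumed PyTorch sketch for illustration only, not the repository's actual code; the module name, dimensions, and layer choices are hypothetical.

```python
# Minimal sketch (assumed, not the repository's implementation) of joint
# cross-attention for audio-visual fusion: attention for each modality is
# derived from the joint A/V representation, then used to re-weight that
# modality before concatenation.
import torch
import torch.nn as nn


class JointCrossAttentionFusion(nn.Module):
    def __init__(self, dim_a: int = 128, dim_v: int = 128):
        super().__init__()
        joint_dim = dim_a + dim_v
        # Project the joint representation back to each modality's dimension.
        self.w_ja = nn.Linear(joint_dim, dim_a, bias=False)
        self.w_jv = nn.Linear(joint_dim, dim_v, bias=False)

    def forward(self, x_a: torch.Tensor, x_v: torch.Tensor) -> torch.Tensor:
        # x_a: (batch, seq, dim_a) audio features; x_v: (batch, seq, dim_v) visual features
        joint = torch.cat([x_a, x_v], dim=-1)          # (batch, seq, dim_a + dim_v)
        corr_a = torch.tanh(self.w_ja(joint))          # joint-to-audio correlation
        corr_v = torch.tanh(self.w_jv(joint))          # joint-to-visual correlation
        att_a = torch.softmax(corr_a, dim=1)           # attention over the time axis
        att_v = torch.softmax(corr_v, dim=1)
        # Re-weight each modality with its joint-attention map, then fuse.
        x_a_att = x_a + att_a * x_a
        x_v_att = x_v + att_v * x_v
        return torch.cat([x_a_att, x_v_att], dim=-1)   # fused (batch, seq, dim_a + dim_v)


# Example usage with hypothetical feature sizes:
# fusion = JointCrossAttentionFusion(dim_a=128, dim_v=512)
# fused = fusion(torch.randn(8, 32, 128), torch.randn(8, 32, 512))
```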
Alternatives and similar repositories for JointCrossAttentional-AV-Fusion
Users interested in JointCrossAttentional-AV-Fusion are comparing it to the repositories listed below.
- FG2021: Cross Attentional AV Fusion for Dimensional Emotion Recognition ☆29 · Updated 6 months ago
- IEEE T-BIOM: "Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention" ☆38 · Updated 6 months ago
- PyTorch implementation for Audio-Visual Domain Adaptation Feature Fusion for Speech Emotion Recognition ☆12 · Updated 3 years ago
- ☆14 · Updated 3 years ago
- A survey of deep multimodal emotion recognition. ☆52 · Updated 3 years ago
- This repository provides the code for MMA-DFER, a multimodal (audiovisual) emotion recognition method. This is an official implementation … ☆35 · Updated 8 months ago
- Official implementation of the paper "MSAF: Multimodal Split Attention Fusion" ☆81 · Updated 3 years ago
- [AAAI 2020] Official implementation of VAANet for Emotion Recognition ☆78 · Updated last year
- [CVPR 2023] Code for "Learning Emotion Representations from Verbal and Nonverbal Communication" ☆48 · Updated 3 months ago
- We achieved 2nd and 3rd place in ABAW3 and ABAW5, respectively. ☆27 · Updated last year
- ☆27 · Updated 3 years ago
- A Fully End2End Multimodal System for Fast Yet Effective Video Emotion Recognition ☆40 · Updated 9 months ago
- ABAW6 (CVPR-W): We achieved second place in the valence-arousal challenge of ABAW6 ☆22 · Updated last year
- ☆11 · Updated 4 years ago
- This repository provides the implementation of the paper "Self-attention fusion for audiovisual emotion recognition with incomplete data". ☆140 · Updated 8 months ago
- ☆19 · Updated 4 years ago
- PyTorch implementation for Tailor Versatile Multi-modal Learning for Multi-label Emotion Recognition ☆60 · Updated 2 years ago
- The code for our IEEE Access (2020) paper "Multimodal Emotion Recognition with Transformer-Based Self-Supervised Feature Fusion". ☆120 · Updated 3 years ago
- Modality-Invariant Temporal Representation Learning ☆18 · Updated 2 years ago
- MAE-DFER: Efficient Masked Autoencoder for Self-supervised Dynamic Facial Expression Recognition (ACM MM 2023) ☆115 · Updated 8 months ago
- Two-stage Temporal Modelling Framework for Video-based Depression Recognition using Graph Representation ☆25 · Updated 5 months ago
- ☆61 · Updated 10 months ago
- AVEC 2013 Continuous Audio/Visual Emotion and Depression Recognition Challenge ☆23 · Updated 12 years ago
- This repository contains the official implementation code of the paper Transformer-based Feature Reconstruction Network for Robust Multim… ☆36 · Updated 2 years ago
- ☆28 · Updated 2 years ago
- ☆11 · Updated 6 years ago
- An official implementation of "Decoupled Multimodal Distilling for Emotion Recognition" in PyTorch (CVPR 2023 highlight) ☆115 · Updated 2 years ago
- "Multimodal Emotion Recognition Based on Deep Temporal Features Using Cross-Modal Transformer and Self-Attention" (ICASSP'23) ☆19 · Updated 2 years ago
- Efficient Multimodal Transformer with Dual-Level Feature Restoration for Robust Multimodal Sentiment Analysis (TAC 2023) ☆61 · Updated 8 months ago
- [ACM ICMR'25] Official repository for "eMotions: A Large-Scale Dataset for Emotion Recognition in Short Videos" ☆33 · Updated 11 months ago