praveena2j / Joint-Cross-Attention-for-Audio-Visual-Fusion
IEEE T-BIOM : "Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention"
☆43 · Updated 9 months ago
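The listing centers on the joint cross-attention fusion approach named in the title above. For orientation, here is a minimal, hypothetical PyTorch sketch of cross-attending audio and visual feature sequences for valence-arousal prediction; the module name, dimensions, and mean-pooling fusion are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of audio-visual cross-attention fusion (illustrative only).
import torch
import torch.nn as nn

class JointCrossAttentionFusion(nn.Module):
    """Cross-attends audio and visual feature sequences, then fuses them."""
    def __init__(self, dim=128, num_heads=4):
        super().__init__()
        # Audio queries attend over visual keys/values, and vice versa.
        self.a2v = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.v2a = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.head = nn.Linear(2 * dim, 2)  # valence and arousal outputs

    def forward(self, audio, visual):
        # audio: (batch, T_a, dim), visual: (batch, T_v, dim)
        a_att, _ = self.a2v(audio, visual, visual)   # audio enriched by visual context
        v_att, _ = self.v2a(visual, audio, audio)    # visual enriched by audio context
        fused = torch.cat([a_att.mean(dim=1), v_att.mean(dim=1)], dim=-1)
        return self.head(fused)                      # (batch, 2) valence-arousal

# Usage with dummy features of different temporal lengths
model = JointCrossAttentionFusion()
out = model(torch.randn(8, 50, 128), torch.randn(8, 75, 128))
print(out.shape)  # torch.Size([8, 2])
```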
Alternatives and similar repositories for Joint-Cross-Attention-for-Audio-Visual-Fusion
Users interested in Joint-Cross-Attention-for-Audio-Visual-Fusion are comparing it to the repositories listed below
- A Fully End2End Multimodal System for Fast Yet Effective Video Emotion Recognition ☆40 · Updated last year
- This repository provides the implementation for the paper "Self-attention fusion for audiovisual emotion recognition with incomplete data". ☆145 · Updated last year
- ABAW6 (CVPR-W): We achieved second place in the valence-arousal challenge ☆28 · Updated last year
- ABAW3 (CVPRW): A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition ☆47 · Updated last year
- Efficient Multimodal Transformer with Dual-Level Feature Restoration for Robust Multimodal Sentiment Analysis (TAC 2023) ☆66 · Updated 2 weeks ago
- ☆14 · Updated 11 months ago
- The code repository for NAACL 2021 paper "Multimodal End-to-End Sparse Model for Emotion Recognition". ☆105 · Updated 2 years ago
- ☆63 · Updated last year
- [EMNLP2023] Conversation Understanding using Relational Temporal Graph Neural Networks with Auxiliary Cross-Modality Interaction ☆62 · Updated last year
- ☆28 · Updated last year
- [IEEE ICPRS 2024 Oral] TensorFlow code implementation of "MultiMAE-DER: Multimodal Masked Autoencoder for Dynamic Emotion Recognition" ☆19 · Updated last month
- A survey of deep multimodal emotion recognition. ☆54 · Updated 3 years ago
- MultiModal Sentiment Analysis architectures for CMU-MOSEI. ☆49 · Updated 2 years ago
- "MULTIMODAL EMOTION RECOGNITION BASED ON DEEP TEMPORAL FEATURES USING CROSS-MODAL TRANSFORMER AND SELF-ATTENTION" ICASSP'23 ☆21 · Updated 2 years ago
- FRAME-LEVEL EMOTIONAL STATE ALIGNMENT METHOD FOR SPEECH EMOTION RECOGNITION ☆23 · Updated 8 months ago
- ☆70 · Updated last year
- Repository with the code of the paper: A proposal for Multimodal Emotion Recognition using aural transformers and Action Units on RAVDESS … ☆107 · Updated last year
- This repository provides the code for MMA-DFER, a multimodal (audiovisual) emotion recognition method. This is an official implementation … ☆42 · Updated last year
- This repository contains the official implementation code of the paper Transformer-based Feature Reconstruction Network for Robust Multim… ☆39 · Updated 2 years ago
- The code for our IEEE ACCESS (2020) paper Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion. ☆121 · Updated 3 years ago
- AuxFormer: Robust Approach to Audiovisual Emotion Recognition ☆14 · Updated 2 years ago
- Code for the InterSpeech 2023 paper: MMER: Multimodal Multi-task learning for Speech Emotion Recognition ☆75 · Updated last year
- ☆14 · Updated 3 years ago
- Code for Speech Emotion Recognition with Co-Attention based Multi-level Acoustic Information ☆152 · Updated last year
- Multimodal emotion recognition combining speech and text, with large-model fine-tuning ☆21 · Updated last year
- We achieved the 2nd and 3rd places in ABAW3 and ABAW5, respectively. ☆30 · Updated last year
- MultiEMO: An Attention-Based Correlation-Aware Multimodal Fusion Framework for Emotion Recognition in Conversations (ACL 2023) ☆79 · Updated last year
- ☆33 · Updated last year
- Baseline scripts for the Audio/Visual Emotion Challenge 2019 ☆80 · Updated 3 years ago
- Scripts used in the research described in the paper "Multimodal Emotion Recognition with High-level Speech and Text Features" accepted in… ☆53 · Updated 4 years ago