bubaimaji / cmt-mser
"MULTIMODAL EMOTION RECOGNITION BASED ON DEEP TEMPORAL FEATURES USING CROSS-MODAL TRANSFORMER AND SELF-ATTENTION" ICASSP'23
☆18 · Updated last year
Alternatives and similar repositories for cmt-mser:
Users interested in cmt-mser are also comparing it to the repositories listed below.
- ☆13 · Updated 4 months ago
- IEEE T-BIOM: "Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention" ☆36 · Updated 2 months ago
- This repository contains the code for our ICASSP paper `Speech Emotion Recognition using Semantic Information` https://arxiv.org/pdf/2103… ☆24 · Updated 3 years ago
- The official code for the paper "Speech Emotion Recognition with Global-Aware Fusion on Multi-scale Feature Representation" published… ☆46 · Updated 2 years ago
- Implementation of the paper "Multimodal Transformer With Learnable Frontend and Self Attention for Emotion Recognition" submitted to ICAS… ☆24 · Updated 3 years ago
- SpeechFormer++ in PyTorch ☆47 · Updated last year
- ☆32 · Updated 5 months ago
- PyTorch implementation of Audio-Visual Domain Adaptation Feature Fusion for Speech Emotion Recognition ☆12 · Updated 2 years ago
- [ICASSP 2023] Mingling or Misalignment? Temporal Shift for Speech Emotion Recognition with Pre-trained Representations ☆36 · Updated last year
- AuxFormer: Robust Approach to Audiovisual Emotion Recognition ☆14 · Updated last year
- Frame-Level Emotional State Alignment Method for Speech Emotion Recognition ☆18 · Updated last month
- [IEEE TASLP 2023] The code for the paper "Multi-Source Discriminant Subspace Alignment for Cross-Domain Speech Emotion Recognition" ☆20 · Updated 4 months ago
- [ABAW6 (CVPR-W)] Second place in the valence-arousal challenge of ABAW6 ☆17 · Updated 9 months ago
- ☆41 · Updated 4 years ago
- Scripts used in the research described in the paper "Multimodal Emotion Recognition with High-level Speech and Text Features" accepted in… ☆52 · Updated 3 years ago
- A Fully End2End Multimodal System for Fast Yet Effective Video Emotion Recognition ☆33 · Updated 6 months ago
- The code for our IEEE Access (2020) paper "Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion" ☆115 · Updated 3 years ago
- ☆13 · Updated 8 months ago
- Code for "Speech Emotion Recognition with Co-Attention Based Multi-level Acoustic Information" ☆137 · Updated last year
- The code for "Multi-Scale Receptive Field Graph Model for Emotion Recognition in Conversations" ☆10 · Updated 2 years ago
- Multimodal emotion recognition combining speech and text, with large-model finetuning ☆14 · Updated last year
- Deformable Speech Transformer (DST) ☆28 · Updated 6 months ago
- Code for the Interspeech 2023 paper "MMER: Multimodal Multi-task Learning for Speech Emotion Recognition" ☆71 · Updated 11 months ago
- ☆22 · Updated last year
- Efficient Multimodal Transformer with Dual-Level Feature Restoration for Robust Multimodal Sentiment Analysis (TAC 2023) ☆57 · Updated 4 months ago
- The PyTorch code for the paper "CONSK-GCN: Conversational Semantic- and Knowledge-Oriented Graph Convolutional Network for Multimodal Emotio… ☆11 · Updated 2 years ago
- The code for EmoAudioNet, a deep neural network for speech classification (published at ICPR 2020) ☆11 · Updated 4 years ago
- Official implementation of the paper "SPEAKER VGG CCT: Cross-corpus Speech Emotion Recognition with Speaker Embedding and Vision Transfor… ☆20 · Updated 2 years ago
- ☆40 · Updated 2 years ago