razvan404 / multimodal-speech-emotion-recognition
A multimodal SER model trained to recognise emotions from speech (text + acoustic data). DeBERTaV3 and Wav2Vec2 are fine-tuned to extract features and classify emotions from the text and audio data respectively; their features and classification outputs are then passed through an MLP to achieve better results…
☆10 · Updated last year
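The late-fusion design described above can be sketched in PyTorch. The feature dimensions, class count, and the `FusionMLP` name below are illustrative assumptions for the sake of a runnable example, not details taken from the repository.

```python
import torch
import torch.nn as nn

class FusionMLP(nn.Module):
    """Hypothetical late-fusion head: concatenates text features (e.g. from
    DeBERTaV3), audio features (e.g. from Wav2Vec2), and each branch's class
    logits, then maps them to final emotion logits via a small MLP."""

    def __init__(self, text_dim=768, audio_dim=768, num_classes=4, hidden=256):
        super().__init__()
        # Fused input = both feature vectors + both branches' logits.
        in_dim = text_dim + audio_dim + 2 * num_classes
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, text_feat, audio_feat, text_logits, audio_logits):
        fused = torch.cat([text_feat, audio_feat, text_logits, audio_logits], dim=-1)
        return self.mlp(fused)

# Dummy tensors standing in for the two fine-tuned encoders' outputs.
model = FusionMLP()
out = model(torch.randn(1, 768), torch.randn(1, 768),
            torch.randn(1, 4), torch.randn(1, 4))
print(tuple(out.shape))  # (1, 4): one logit per emotion class
```

In a real pipeline the two feature vectors would come from the pooled hidden states of the fine-tuned encoders; feeding the per-branch logits in alongside the raw features lets the MLP weigh each modality's own prediction when resolving disagreements.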
Alternatives and similar repositories for multimodal-speech-emotion-recognition
Users interested in multimodal-speech-emotion-recognition are comparing it to the repositories listed below.
- Scripts used in the research described in the paper "Multimodal Emotion Recognition with High-level Speech and Text Features", accepted in… ☆53 · Updated 4 years ago
- Code repository for the NAACL 2021 paper "Multimodal End-to-End Sparse Model for Emotion Recognition". ☆107 · Updated 2 years ago
- Code for the InterSpeech 2023 paper "MMER: Multimodal Multi-task Learning for Speech Emotion Recognition". ☆76 · Updated last year
- CM-BERT: Cross-Modal BERT for Text-Audio Sentiment Analysis (MM 2020) ☆114 · Updated 5 years ago
- PyTorch code for the paper "CONSK-GCN: Conversational Semantic- and Knowledge-Oriented Graph Convolutional Network for Multimodal Emotio… ☆13 · Updated 3 years ago
- Source code for the ICASSP 2022 paper "MM-DFN: Multimodal Dynamic Fusion Network for Emotion Recognition in Conversations". ☆93 · Updated 2 years ago
- ☆12 · Updated 2 years ago
- Code for the Multi-Scale Receptive Field Graph Model for Emotion Recognition in Conversations ☆11 · Updated 2 years ago
- Code for our INTERSPEECH 2020 paper "Jointly Fine-Tuning 'BERT-like' Self-Supervised Models to Improve Multimodal Speech Emotion R… ☆119 · Updated 4 years ago
- Implementation of the paper "Multimodal Transformer With Learnable Frontend and Self Attention for Emotion Recognition", submitted to ICAS… ☆24 · Updated 4 years ago
- ☆68 · Updated last year
- Code for the paper "MIR-GAN: Refining Frame-Level Modality-Invariant Representations with Adversarial Network for Audio-Visual Speech Recogni… ☆16 · Updated 2 years ago
- Accompanying code to reproduce the baselines of the International Multimodal Sentiment Analysis Challenge (MuSe 2020). ☆16 · Updated 2 years ago
- [IEEE ICPRS 2024 Oral] TensorFlow implementation of "MultiMAE-DER: Multimodal Masked Autoencoder for Dynamic Emotion Recognition" ☆19 · Updated 3 months ago
- Official code for the paper "Speech Emotion Recognition with Global-Aware Fusion on Multi-scale Feature Representation", published… ☆48 · Updated 3 years ago
- Chinese BERT classification with TensorFlow 2.0 and audio classification with MFCC features ☆13 · Updated 4 years ago
- Multimodal (text, acoustic, visual) Sentiment Analysis and Emotion Recognition on the CMU-MOSEI dataset. ☆27 · Updated 5 years ago
- A Fully End-to-End Multimodal System for Fast Yet Effective Video Emotion Recognition ☆40 · Updated last year
- A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations (ACL 2023) ☆73 · Updated last year
- Multimodal datasets. ☆32 · Updated last year
- Code for "Speech Emotion Recognition with Co-Attention Based Multi-level Acoustic Information" ☆159 · Updated last year
- Source code for the paper "Multi-Task Learning for Depression Detection in Dialogs" (SIGDIAL 2022) ☆11 · Updated 10 months ago
- A multimodal approach to emotion recognition using audio and text. ☆184 · Updated 5 years ago
- Group Gated Fusion on Attention-Based Bidirectional Alignment for Multimodal Emotion Recognition ☆14 · Updated 3 years ago
- Frame-Level Emotional State Alignment Method for Speech Emotion Recognition ☆23 · Updated 10 months ago
- AuxFormer: Robust Approach to Audiovisual Emotion Recognition ☆14 · Updated 2 years ago
- Implementation of the paper "Self-attention Fusion for Audiovisual Emotion Recognition with Incomplete Data". ☆153 · Updated last year
- ☆94 · Updated 2 years ago
- Modulated Fusion Using Transformer for Linguistic-Acoustic Emotion Recognition ☆31 · Updated 4 years ago
- Multimodal Sentiment Analysis architectures for CMU-MOSEI. ☆50 · Updated 2 years ago