razvan404 / multimodal-speech-emotion-recognition
Multimodal SER model for recognising emotions from speech (text + acoustic data). The DeBERTaV3 model is fine-tuned on the transcripts and the Wav2Vec2 model on the audio, each extracting features and classifying emotions for its own modality; their features and per-modality predictions are then passed through an MLP to achieve better results (see the fusion sketch below)…
☆11 · Updated last year
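A minimal PyTorch sketch of the late-fusion idea described above, not the repository's actual code: the checkpoint names (`microsoft/deberta-v3-base`, `facebook/wav2vec2-base`), the number of emotion classes, mean pooling, and the fusion MLP sizes are all assumptions made for illustration.

```python
# Hypothetical sketch of the described late-fusion SER architecture;
# checkpoints, class count, pooling, and MLP sizes are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, Wav2Vec2Model

NUM_EMOTIONS = 4  # assumption; depends on the dataset used


class LateFusionSER(nn.Module):
    def __init__(self,
                 text_name: str = "microsoft/deberta-v3-base",
                 audio_name: str = "facebook/wav2vec2-base"):
        super().__init__()
        self.text_encoder = AutoModel.from_pretrained(text_name)
        self.audio_encoder = Wav2Vec2Model.from_pretrained(audio_name)
        t_dim = self.text_encoder.config.hidden_size   # 768 for deberta-v3-base
        a_dim = self.audio_encoder.config.hidden_size  # 768 for wav2vec2-base
        # Per-modality classification heads; their logits also feed the fusion MLP.
        self.text_head = nn.Linear(t_dim, NUM_EMOTIONS)
        self.audio_head = nn.Linear(a_dim, NUM_EMOTIONS)
        # Fusion MLP over [text features, audio features, text logits, audio logits].
        self.fusion = nn.Sequential(
            nn.Linear(t_dim + a_dim + 2 * NUM_EMOTIONS, 256),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(256, NUM_EMOTIONS),
        )

    def forward(self, input_ids, attention_mask, input_values):
        # Mean-pool token embeddings as the utterance-level text feature.
        text_hidden = self.text_encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        text_feat = text_hidden.mean(dim=1)
        # Mean-pool frame embeddings as the utterance-level acoustic feature.
        audio_hidden = self.audio_encoder(input_values=input_values).last_hidden_state
        audio_feat = audio_hidden.mean(dim=1)
        text_logits = self.text_head(text_feat)
        audio_logits = self.audio_head(audio_feat)
        fused = torch.cat([text_feat, audio_feat, text_logits, audio_logits], dim=-1)
        return self.fusion(fused), text_logits, audio_logits
```

In a setup like this, the per-modality logits can also be supervised directly (e.g. a weighted sum of the three cross-entropy losses), which is one common way to realise "classify per modality, then fuse"; whether the repository does exactly that is not stated here.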
Alternatives and similar repositories for multimodal-speech-emotion-recognition
Users interested in multimodal-speech-emotion-recognition are comparing it to the repositories listed below
- Scripts used in the research described in the paper "Multimodal Emotion Recognition with High-level Speech and Text Features" accepted in…☆53 · Updated 4 years ago
- The code repository for NAACL 2021 paper "Multimodal End-to-End Sparse Model for Emotion Recognition".☆107 · Updated 2 years ago
- Code for the InterSpeech 2023 paper: MMER: Multimodal Multi-task learning for Speech Emotion Recognition☆78 · Updated last year
- The code for our INTERSPEECH 2020 paper - Jointly Fine-Tuning "BERT-like" Self Supervised Models to Improve Multimodal Speech Emotion R…☆119 · Updated 4 years ago
- The code for Multi-Scale Receptive Field Graph Model for Emotion Recognition in Conversations☆11 · Updated 2 years ago
- The PyTorch code for paper: "CONSK-GCN: Conversational Semantic- and Knowledge-Oriented Graph Convolutional Network for Multimodal Emotio…☆13 · Updated 3 years ago
- Source code for ICASSP 2022 paper "MM-DFN: Multimodal Dynamic Fusion Network For Emotion Recognition in Conversations".☆92 · Updated 2 years ago
- Multimodal (text, acoustic, visual) Sentiment Analysis and Emotion Recognition on CMU-MOSEI dataset.☆28 · Updated 5 years ago
- Code for Speech Emotion Recognition with Co-Attention based Multi-level Acoustic Information☆162 · Updated 2 years ago
- ☆69 · Updated last year
- CM-BERT: Cross-Modal BERT for Text-Audio Sentiment Analysis (MM2020)☆115 · Updated 5 years ago
- ☆13 · Updated 2 years ago
- Implementation of the paper "Multimodal Transformer With Learnable Frontend and Self Attention for Emotion Recognition" submitted to ICAS…☆24 · Updated 4 years ago
- [IEEE ICPRS 2024 Oral] TensorFlow code implementation of "MultiMAE-DER: Multimodal Masked Autoencoder for Dynamic Emotion Recognition"☆19 · Updated 4 months ago
- This repository provides implementation for the paper "Self-attention fusion for audiovisual emotion recognition with incomplete data".☆155 · Updated last year
- ☆19 · Updated last year
- [EMNLP2023] Conversation Understanding using Relational Temporal Graph Neural Networks with Auxiliary Cross-Modality Interaction☆60 · Updated last year
- A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations (ACL 2023)☆73 · Updated last year
- MultiModal Sentiment Analysis architectures for CMU-MOSEI.☆54 · Updated 3 years ago
- Chinese BERT classification with tf2.0 and audio classification with MFCC☆13 · Updated 5 years ago
- A Fully End2End Multimodal System for Fast Yet Effective Video Emotion Recognition☆40 · Updated last year
- Source code for paper Multi-Task Learning for Depression Detection in Dialogs (SIGDial 2022)☆12 · Updated 11 months ago
- A multimodal approach to emotion recognition using audio and text.☆186 · Updated 5 years ago
- AuxFormer: Robust Approach to Audiovisual Emotion Recognition☆14 · Updated 2 years ago
- This is the official code for paper "Speech Emotion Recognition with Global-Aware Fusion on Multi-scale Feature Representation" published…☆48 · Updated 3 years ago
- Repository with the code of the paper: A proposal for Multimodal Emotion Recognition using aural transformers and Action Units on RAVDESS…☆113 · Updated last year
- Multimodal datasets.☆33 · Updated last year
- Multimodal Transformer for Korean Sentiment Analysis with Audio and Text Features☆28 · Updated 4 years ago
- TensorFlow implementation of "Attentive Modality Hopping for Speech Emotion Recognition," ICASSP-20☆33 · Updated 5 years ago
- ☆95 · Updated 3 years ago