Vincent-ZHQ / DMER
A survey of deep multimodal emotion recognition.
★54 · May 6, 2022 · Updated 3 years ago
Alternatives and similar repositories for DMER
Users interested in DMER are comparing it to the libraries listed below.
- A reading list focused on Multimodal Emotion Recognition (MER) ★128 · Oct 6, 2020 · Updated 5 years ago
- Modality-Transferable-MER: a multimodal emotion recognition model with zero-shot and few-shot abilities. ★66 · Apr 23, 2021 · Updated 4 years ago
- Attention Aggregation Network for Audio-Visual Emotion Recognition ★13 · Sep 25, 2023 · Updated 2 years ago
- ★10 · May 12, 2023 · Updated 2 years ago
- The code repository for the NAACL 2021 paper "Multimodal End-to-End Sparse Model for Emotion Recognition". ★108 · Feb 9, 2023 · Updated 3 years ago
- Score Normalization for NIST 2019 Speaker Recognition Evaluation ★10 · Nov 8, 2019 · Updated 6 years ago
- ★14 · Jan 17, 2023 · Updated 3 years ago
- J-Net aims at audio separation with a randomly weighted encoder. ★12 · Oct 23, 2019 · Updated 6 years ago
- Rainbow Keywords - Official PyTorch Implementation ★13 · Jun 27, 2024 · Updated last year
- We achieved the 2nd and 3rd places in ABAW3 and ABAW5, respectively. ★31 · Mar 7, 2024 · Updated last year
- ★28 · May 13, 2022 · Updated 3 years ago
- ICASSP 2023: "Recursive Joint Attention for Audio-Visual Fusion in Regression Based Emotion Recognition" ★14 · Nov 29, 2024 · Updated last year
- Multimodal preprocessing on the IEMOCAP dataset ★13 · Jun 8, 2018 · Updated 7 years ago
- This paper list is about multimodal sentiment analysis. ★32 · Jan 27, 2022 · Updated 4 years ago
- An optional way to extract audio features ★13 · Jun 10, 2017 · Updated 8 years ago
- Speech Emotion Recognition using transfer learning with wav2vec on IEMOCAP. ★17 · Aug 8, 2021 · Updated 4 years ago
- PyTorch implementation of Audio-Visual Domain Adaptation Feature Fusion for Speech Emotion Recognition ★12 · Mar 20, 2022 · Updated 3 years ago
- Baseline scripts for AVEC 2019, Depression Detection Sub-challenge ★16 · Jul 11, 2019 · Updated 6 years ago
- ★14 · Sep 24, 2021 · Updated 4 years ago
- This is a public repository for RATS Channel-A Speech Data, which is a chargeable noisy speech dataset under LDC. Here we release its Log… ★16 · Oct 22, 2022 · Updated 3 years ago
- [ACM MM 2023] Official PyTorch implementation of "Emo-DNA: Emotion Decoupling and Alignment Learning for Cross-Corpus Speech Emotion Reco… ★12 · Aug 4, 2023 · Updated 2 years ago
- Code for Speech Emotion Recognition with Co-Attention based Multi-level Acoustic Information ★164 · Nov 27, 2023 · Updated 2 years ago
- MMSA is a unified framework for Multimodal Sentiment Analysis. ★955 · Jan 15, 2025 · Updated last year
- ★20 · Apr 22, 2024 · Updated last year
- ★22 · Aug 26, 2021 · Updated 4 years ago
- CM-BERT: Cross-Modal BERT for Text-Audio Sentiment Analysis (MM 2020) ★115 · Oct 14, 2020 · Updated 5 years ago
- Detect emotion from audio signals of the IEMOCAP dataset using a multi-modal approach. Utilized acoustic features, mel-spectrogram and text as … ★41 · Mar 7, 2024 · Updated last year
- ★19 · Apr 28, 2023 · Updated 2 years ago
- This repository contains a short introduction to audio and speech processing -- from basics to applications. ★21 · Dec 20, 2023 · Updated 2 years ago
- [AAAI 2020] Official implementation of VAANet for Emotion Recognition ★83 · Oct 3, 2023 · Updated 2 years ago
- This paper presents our winning submission to Subtask 2 of SemEval 2024 Task 3 on multimodal emotion cause analysis in conversations. ★24 · Aug 2, 2024 · Updated last year
- Official implementation of the paper "SPEAKER VGG CCT: Cross-corpus Speech Emotion Recognition with Speaker Embedding and Vision Transfor… ★24 · Feb 17, 2023 · Updated 2 years ago
- A Large-scale, Multi-modal, Compound Affective Database for Dynamic Facial Expression Recognition in the Wild. ★62 · Dec 30, 2025 · Updated last month
- PHO-LID: A Unified Model to Incorporate Acoustic-Phonetic and Phonotactic Information for Language Identification ★21 · Aug 24, 2023 · Updated 2 years ago
- An awesome spoken LID repository. (Work in progress) ★109 · Apr 22, 2024 · Updated last year
- ★27 · Oct 7, 2021 · Updated 4 years ago
- ★26 · May 8, 2022 · Updated 3 years ago
- This repository contains various models targeting multimodal representation learning and multimodal fusion for downstream tasks such as mul… ★905 · Mar 15, 2023 · Updated 2 years ago
- ★28 · Nov 6, 2023 · Updated 2 years ago