EIHW / MuSe-2023
☆18 · Updated 2 years ago
Alternatives and similar repositories for MuSe-2023
Users interested in MuSe-2023 are comparing it to the repositories listed below.
- A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations (ACL 2023) · ☆68 · Updated 9 months ago
- Multimodal Emotion Recognition in Conversation Challenge (CCAC 2023) · ☆39 · Updated last year
- ☆65 · Updated last year
- Source code for the ICASSP 2022 paper "MM-DFN: Multimodal Dynamic Fusion Network for Emotion Recognition in Conversations" · ☆89 · Updated 2 years ago
- M3ED: Multi-modal Multi-scene Multi-label Emotional Dialogue Database (ACL 2022) · ☆112 · Updated 2 years ago
- ☆26 · Updated 3 years ago
- ☆24 · Updated 2 years ago
- Make Acoustic and Visual Cues Matter: CH-SIMS v2.0 Dataset and AV-Mixup Consistent Module · ☆80 · Updated 2 years ago
- The official implementation of InstructERC · ☆140 · Updated 2 months ago
- MultiEMO: An Attention-Based Correlation-Aware Multimodal Fusion Framework for Emotion Recognition in Conversations (ACL 2023) · ☆75 · Updated last year
- Code for the Findings of ACL 2022 paper "Sentiment Word Aware Multimodal Refinement for Multimodal Sentiment Analysis with ASR Errors" · ☆26 · Updated 3 years ago
- Toolkits for Multimodal Emotion Recognition · ☆237 · Updated 2 months ago
- ☆25 · Updated 2 years ago
- Conversation Understanding using Relational Temporal Graph Neural Networks with Auxiliary Cross-Modality Interaction (EMNLP 2023) · ☆63 · Updated last year
- ☆13 · Updated last year
- Code repository for the NAACL 2021 paper "Multimodal End-to-End Sparse Model for Emotion Recognition" · ☆104 · Updated 2 years ago
- ☆197 · Updated 2 years ago
- ☆28 · Updated 3 years ago
- ☆92 · Updated 2 years ago
- Official implementation of the paper "Transformer-based Feature Reconstruction Network for Robust Multimodal Sentiment Analysis" · ☆37 · Updated 2 years ago
- Efficient Multimodal Transformer with Dual-Level Feature Restoration for Robust Multimodal Sentiment Analysis (TAC 2023) · ☆62 · Updated 10 months ago
- Code for the Interspeech 2023 paper "MMER: Multimodal Multi-task Learning for Speech Emotion Recognition" · ☆75 · Updated last year
- CM-BERT: Cross-Modal BERT for Text-Audio Sentiment Analysis (ACM MM 2020) · ☆113 · Updated 4 years ago
- A Unimodal Valence-Arousal Driven Contrastive Learning Framework for Multimodal Multi-Label Emotion Recognition (ACM MM 2024 Oral) · ☆23 · Updated 9 months ago
- ☆33 · Updated 11 months ago
- Source code for the paper "Multi-Task Learning for Depression Detection in Dialogs" (SIGDIAL 2022) · ☆10 · Updated 6 months ago
- MIntRec: A New Dataset for Multimodal Intent Recognition (ACM MM 2022) · ☆100 · Updated 3 months ago
- Code for "Supervised Prototypical Contrastive Learning for Emotion Recognition in Conversation" (EMNLP 2022) · ☆77 · Updated 2 years ago
- PyTorch implementation of "Tailor Versatile Multi-modal Learning for Multi-label Emotion Recognition" · ☆60 · Updated 2 years ago
- ☆23 · Updated 5 months ago