Robin-WZQ / multimodal-emotion-recognition-DEMO
A demo for multimodal emotion recognition.
☆91 · Apr 2, 2024 · Updated last year
Alternatives and similar repositories for multimodal-emotion-recognition-DEMO
Users interested in multimodal-emotion-recognition-DEMO are also comparing it to the libraries listed below.
Sorting:
- Multimodal fusion sentiment analysis ☆139 · May 15, 2020 · Updated 5 years ago
- This repository provides an implementation of the paper "Self-attention fusion for audiovisual emotion recognition with incomplete data". ☆159 · Sep 16, 2024 · Updated last year
- The code repository for the NAACL 2021 paper "Multimodal End-to-End Sparse Model for Emotion Recognition". ☆108 · Feb 9, 2023 · Updated 3 years ago
- Official implementation of the paper "MSAF: Multimodal Split Attention Fusion" ☆81 · Jun 16, 2021 · Updated 4 years ago
- This repository describes the third-place implementation for the CCAC 2023 multimodal conversational emotion recognition evaluation ☆11 · Aug 11, 2024 · Updated last year
- A fine multimodality fusion network :) ☆11 · Aug 9, 2021 · Updated 4 years ago
- [AAAI 2020] Official implementation of VAANet for Emotion Recognition ☆83 · Oct 3, 2023 · Updated 2 years ago
- A PyTorch implementation of emotion recognition from videos ☆19 · Sep 15, 2020 · Updated 5 years ago
- This repository provides the code for MMA-DFER, a multimodal (audiovisual) emotion recognition method. This is an official implementation … ☆50 · Sep 16, 2024 · Updated last year
- This repository provides the ability to recognize emotion from video using audiovisual modalities. End-to-end multimodal emotion recognition code ☆11 · Mar 5, 2023 · Updated 2 years ago
- Multimodal (text, acoustic, visual) sentiment analysis and emotion recognition on the CMU-MOSEI dataset. ☆29 · Nov 8, 2020 · Updated 5 years ago
- Multimodal speech emotion recognition on the IEMOCAP dataset ☆95 · Jul 6, 2023 · Updated 2 years ago
- The code for our IEEE Access (2020) paper "Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion". ☆123 · Sep 20, 2021 · Updated 4 years ago
- UCAS human-computer interaction course project: multimodal emotion recognition ☆132 · Apr 26, 2022 · Updated 3 years ago
- AuxFormer: a robust approach to audiovisual emotion recognition ☆14 · Mar 14, 2023 · Updated 2 years ago
- Modality-Transferable-MER, a multimodal emotion recognition model with zero-shot and few-shot abilities. ☆66 · Apr 23, 2021 · Updated 4 years ago
- A real-time multimodal emotion recognition web app for text, sound, and video inputs ☆1,067 · Apr 29, 2021 · Updated 4 years ago
- Multimodal emotion recognition combining speech and text, with large-model fine-tuning ☆23 · Nov 19, 2023 · Updated 2 years ago
- Multimodal emotion recognition ☆24 · Aug 11, 2023 · Updated 2 years ago
- This repository contains the code for the paper "End-to-End Multimodal Emotion Recognition using Deep Neural Networks". ☆252 · Jan 22, 2021 · Updated 5 years ago
- Multimodal sentiment analysis: various fusion methods based on BERT + ResNet ☆353 · Nov 20, 2022 · Updated 3 years ago
- Code for the InterSpeech 2023 paper "MMER: Multimodal Multi-task Learning for Speech Emotion Recognition" ☆81 · Mar 12, 2024 · Updated last year
- [EMNLP 2023] Conversation Understanding using Relational Temporal Graph Neural Networks with Auxiliary Cross-Modality Interaction ☆62 · Jul 8, 2024 · Updated last year
- This project is the official implementation of "Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation" in PyTorch, wh… ☆12 · Nov 4, 2022 · Updated 3 years ago
- Source code for the ICASSP 2022 paper "MM-DFN: Multimodal Dynamic Fusion Network for Emotion Recognition in Conversations". ☆92 · Apr 21, 2023 · Updated 2 years ago
- M-SENA: an all-in-one platform for multimodal sentiment analysis ☆99 · Mar 30, 2022 · Updated 3 years ago
- A multimodal SER model meant to be trained to recognize emotions from speech (text + acoustic data). Fine-tuned the DeBERTaV3 model, resp… ☆11 · Jun 19, 2024 · Updated last year
- MMSA is a unified framework for multimodal sentiment analysis. ☆955 · Jan 15, 2025 · Updated last year
- This repository contains the companion code for multimodal sentiment analysis experiments. ☆42 · Sep 5, 2022 · Updated 3 years ago
- ☆10 · Oct 4, 2022 · Updated 3 years ago
- Human emotion understanding using a multimodal dataset. ☆110 · Jul 27, 2020 · Updated 5 years ago
- A compact and effective pretrained model for speech emotion recognition ☆53 · Jun 29, 2024 · Updated last year
- Attention-based multimodal fusion for sentiment analysis ☆366 · Apr 8, 2024 · Updated last year
- This project splits the RAVDESS dataset into 1-second speech clips and trains an openSMILE + CNN model to classify each clip into one of four emotions: happy, sad, angry, and neutral, reaching roughly 76% accuracy. ☆64 · Jun 16, 2021 · Updated 4 years ago
- A paper list for multimodal sentiment analysis ☆107 · Jan 14, 2021 · Updated 5 years ago
- Scripts used in the research described in the paper "Multimodal Emotion Recognition with High-level Speech and Text Features" accepted in… ☆52 · Sep 14, 2021 · Updated 4 years ago
- MultimodalSDK provides tools to easily apply machine learning algorithms to well-known affective computing datasets such as CMU-MOSI, CMU… ☆14 · Jan 18, 2018 · Updated 8 years ago
- ☆11 · Nov 18, 2021 · Updated 4 years ago
- Code for the paper "Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis" ☆240 · Jun 25, 2022 · Updated 3 years ago