cindyhu-cyber / Multimodal-Sentiment-Analysis
Multimodal fusion sentiment analysis
☆38 · Updated 4 years ago
Alternatives and similar repositories for Multimodal-Sentiment-Analysis
Users interested in Multimodal-Sentiment-Analysis are comparing it to the repositories listed below.
- Multimodal fusion sentiment analysis ☆135 · Updated 5 years ago
- Multimodal sentiment analysis: multiple fusion methods based on BERT and ResNet ☆332 · Updated 3 years ago
- Companion code for multimodal sentiment analysis experiments. ☆42 · Updated 3 years ago
- A demo for multimodal emotion recognition. ☆90 · Updated last year
- Implementation of multimodal sentiment analysis models ☆10 · Updated last year
- Multimodal emotion recognition ☆21 · Updated 2 years ago
- Describes the 3rd-place solution to the CCAC 2023 multimodal conversational emotion recognition shared task ☆11 · Updated last year
- Multimodal sentiment analysis ☆12 · Updated last year
- MMSA is a unified framework for Multimodal Sentiment Analysis. ☆916 · Updated 10 months ago
- A baseline for the Weibo User Depression Detection Dataset (WU3D) ☆35 · Updated 3 years ago
- ☆16 · Updated 2 years ago
- This repository recognizes emotion from video using audiovisual modalities; end-to-end multimodal emotion recognition code ☆10 · Updated 2 years ago
- Codes for AoM: Detecting Aspect-oriented Information for Multimodal Aspect-Based Sentiment Analysis ☆48 · Updated 2 years ago
- Make Acoustic and Visual Cues Matter: CH-SIMS v2.0 Dataset and AV-Mixup Consistent Module ☆90 · Updated 3 years ago
- The source code and manually annotated datasets for our paper "Joint Multimodal Sentiment Analysis Based on Information Relevance" ☆11 · Updated 2 years ago
- ☆198 · Updated 3 months ago
- A tool for extracting multimodal features from videos. ☆194 · Updated 2 years ago
- ☆79 · Updated last year
- Codes for the paper "Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis" ☆233 · Updated 3 years ago
- M-SENA: All-in-One Platform for Multimodal Sentiment Analysis ☆96 · Updated 3 years ago
- Data parser for the CMU-MultimodalSDK package, including parsing for the CMU-MOSEI, CMU-MOSI, and POM datasets ☆35 · Updated last year
- Multimodal Sentiment Analysis (Text and Audio) (PyTorch) ☆19 · Updated 3 years ago
- The baseline model of the CMDC corpus ☆51 · Updated 3 years ago
- Code for our paper "Hierarchical Interactive Multimodal Transformer for Aspect-Based Multimodal Sentiment Analysis" ☆20 · Updated 2 years ago
- [TOMM 2023] Emotion recognition methods using facial expressions, speech, audio, and multimodal data ☆18 · Updated 2 years ago
- A published large-scale dataset: the Weibo User Depression Detection Dataset. ☆100 · Updated last month
- Codebase for the EMNLP 2024 Findings paper "Knowledge-Guided Dynamic Modality Attention Fusion Framework for Multimodal Sentiment Analysis" ☆62 · Updated last year
- Multi-Modality Multi-Loss Fusion Network ☆137 · Updated last year
- MultiEMO: An Attention-Based Correlation-Aware Multimodal Fusion Framework for Emotion Recognition in Conversations (ACL 2023) ☆86 · Updated 2 years ago
- Implementation for the paper "Self-attention fusion for audiovisual emotion recognition with incomplete data" ☆153 · Updated last year