thuiar / MMSA
MMSA is a unified framework for Multimodal Sentiment Analysis.
☆698 · Updated last month
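For context, MMSA is distributed as a pip package and driven through a single training entry point. The sketch below is a minimal usage example, assuming the `MMSA` package from PyPI and its `MMSA_run` function as documented in the repository; the model name `'tfn'` and dataset name `'mosi'` are illustrative choices.

```python
# Minimal sketch: running a baseline model with the MMSA package.
# Assumes `pip install MMSA` and that the CMU-MOSI data has been downloaded
# and configured as described in the repository README.
from MMSA import MMSA_run

# Train the Tensor Fusion Network (TFN) baseline on CMU-MOSI with three seeds
# on GPU 0. Other registered models and datasets from the repository can be
# substituted for 'tfn' and 'mosi'.
MMSA_run('tfn', 'mosi', seeds=[1111, 1112, 1113], gpu_ids=[0])
```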
Related projects
Alternatives and complementary repositories for MMSA
- A tool for extracting multimodal features from videos. ☆141 · Updated last year
- Code for the paper "Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis" ☆191 · Updated 2 years ago
- ☆170 · Updated last year
- Paper list for Multimodal Sentiment Analysis ☆96 · Updated 3 years ago
- This repository contains various models targeting multimodal representation learning, multimodal fusion for downstream tasks such as mul… ☆762 · Updated last year
- M-SENA: All-in-One Platform for Multimodal Sentiment Analysis ☆82 · Updated 2 years ago
- Attention-based multimodal fusion for sentiment analysis ☆326 · Updated 7 months ago
- MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis ☆203 · Updated last year
- Make Acoustic and Visual Cues Matter: CH-SIMS v2.0 Dataset and AV-Mixup Consistent Module ☆58 · Updated 2 years ago
- ☆157 · Updated 4 years ago
- [ACL'19] [PyTorch] Multimodal Transformer ☆826 · Updated 2 years ago
- ☆182 · Updated 11 months ago
- Multi-Modality Multi-Loss Fusion Network ☆62 · Updated 3 months ago
- ☆16 · Updated last year
- This repository contains the official implementation code of the paper Improving Multimodal Fusion with Hierarchical Mutual Information M… ☆166 · Updated last year
- Multimodal sentiment analysis: multiple fusion methods based on BERT + ResNet ☆235 · Updated 2 years ago
- Data parser for the CMU-MultimodalSDK package including parsing for CMU-MOSEI, CMU-MOSI, and POM datasets ☆28 · Updated 3 months ago
- Multimodal fusion for sentiment analysis ☆114 · Updated 4 years ago
- ☆55 · Updated 4 months ago
- Code for the paper "VistaNet: Visual Aspect Attention Network for Multimodal Sentiment Analysis", AAAI'19 ☆92 · Updated last year
- ☆194 · Updated 2 years ago
- SAEval: A benchmark for sentiment analysis to evaluate the model's performance on various subtasks. ☆10 · Updated 6 months ago
- CM-BERT: Cross-Modal BERT for Text-Audio Sentiment Analysis (MM 2020) ☆108 · Updated 4 years ago
- [ACL 2024 SDT] OpenVNA is an open-source framework designed for analyzing the behavior of multimodal language understanding systems under… ☆15 · Updated 5 months ago
- Multimodal Sarcasm Detection Dataset ☆314 · Updated 3 months ago
- ☆33 · Updated 2 years ago
- Learning Language-guided Adaptive Hyper-modality Representation for Multimodal Sentiment Analysis ☆73 · Updated last month
- Toolkits for Multimodal Emotion Recognition ☆163 · Updated 5 months ago
- A Transformer-based joint-encoding for Emotion Recognition and Sentiment Analysis ☆119 · Updated 2 years ago