Attention-based multimodal fusion for sentiment analysis
☆367, updated Apr 8, 2024
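The repository above implements attention-based multimodal fusion: each modality (text, audio, video) is encoded as a feature vector, a relevance score is computed per modality, and the fused representation is the softmax-weighted sum. As a rough illustration of that general idea (not this repository's actual code; the function names, the dot-product scoring, and the fixed query vector are all simplifying assumptions), a minimal pure-Python sketch:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fuse(modalities, query):
    # modalities: list of equal-length feature vectors, e.g. [text, audio, video]
    # query: vector used to score each modality's relevance (dot product here;
    #        real models typically learn this scoring function).
    scores = [sum(q * f for q, f in zip(query, feats)) for feats in modalities]
    weights = softmax(scores)
    dim = len(modalities[0])
    # Fused vector = attention-weighted sum of the modality vectors.
    fused = [sum(w * feats[i] for w, feats in zip(weights, modalities))
             for i in range(dim)]
    return fused, weights

# Toy features: text aligns best with the query, so it gets the largest weight.
text  = [0.9, 0.1, 0.0]
audio = [0.2, 0.8, 0.1]
video = [0.1, 0.3, 0.7]
fused, weights = attention_fuse([text, audio, video], query=[1.0, 0.0, 0.0])
```

In practice the scoring function and query are learned parameters and the vectors come from per-modality encoders, but the weighted-sum structure is the common core of the attention-fusion approaches listed below.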
Alternatives and similar repositories for multimodal-sentiment-analysis
Users interested in multimodal-sentiment-analysis are comparing it to the libraries listed below.
- Context-Dependent Sentiment Analysis in User-Generated Videos (☆125, updated Mar 14, 2023)
- Multimodal sentiment analysis using hierarchical fusion with context modeling (☆44, updated Mar 14, 2023)
- This repository contains various models targeting multimodal representation learning and multimodal fusion for downstream tasks such as mul… (☆906, updated Mar 15, 2023)
- MMSA is a unified framework for Multimodal Sentiment Analysis. (☆964, updated Jan 15, 2025)
- [ACL'19] [PyTorch] Multimodal Transformer (☆961, updated Sep 12, 2022)
- PyTorch implementation of Tensor Fusion Networks for multimodal sentiment analysis. (☆195, updated Apr 5, 2020)
- Research on improving text sentiment analysis using facial features from video via machine learning. (☆32, updated Jan 12, 2018)
- Code for the paper "VistaNet: Visual Aspect Attention Network for Multimodal Sentiment Analysis", AAAI'19 (☆89, updated Apr 1, 2023)
- MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis (☆274, updated Mar 14, 2023)
- This repository contains the official implementation code of the paper Improving Multimodal Fusion with Hierarchical Mutual Information M… (☆196, updated Mar 14, 2023)
- This repository contains the implementation of the paper Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment An… (☆72, updated Apr 16, 2023)
- Codes for ACL 2018 Multimodal Language Workshop paper (☆10, updated May 24, 2018)
- Official PyTorch implementation of Multilogue-Net (Best paper runner-up at Challenge-HML @ ACL 2020) (☆58, updated Dec 8, 2022)
- This repo contains implementations of different architectures for emotion recognition in conversations. (☆1,500, updated Mar 10, 2024)
- ☆48, updated Feb 4, 2019
- [AAAI 2018] Memory Fusion Network for Multi-view Sequential Learning (☆114, updated Aug 4, 2020)
- DeepCU: Integrating Both Common and Unique Latent Information for Multimodal Sentiment Analysis, IJCAI-19 (☆19, updated Nov 21, 2019)
- Codes for the paper "Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis" (☆240, updated Jun 25, 2022)
- A Transformer-based joint-encoding for Emotion Recognition and Sentiment Analysis (☆129, updated Feb 25, 2025)
- This paper list is about multimodal sentiment analysis. (☆33, updated Jan 27, 2022)
- Multimodal fusion for sentiment analysis (☆140, updated May 15, 2020)
- TensorFlow implementation of "Multimodal Speech Emotion Recognition using Audio and Text," IEEE SLT-18 (☆297, updated Jun 17, 2024)
- Multimodal Sentiment Analysis architectures for CMU-MOSEI. (☆58, updated Dec 9, 2022)
- This repository contains the code for the paper "End-to-End Multimodal Emotion Recognition using Deep Neural Networks". (☆252, updated Jan 22, 2021)
- Fusion modality approaches for sentiment analysis and emotion recognition tasks. (☆12, updated Feb 5, 2021)
- [ICLR 2019] Learning Factorized Multimodal Representations (☆67, updated Aug 4, 2020)
- ☆67, updated Aug 15, 2019
- Lightweight and Interpretable ML Model for Speech Emotion Recognition and Ambiguity Resolution (trained on the IEMOCAP dataset) (☆444, updated Dec 21, 2023)
- This is a short tutorial for using the CMU-MultimodalSDK. (☆87, updated Mar 20, 2019)
- TensorFlow implementation of "Attentive Modality Hopping for Speech Emotion Recognition," ICASSP-20 (☆33, updated Aug 10, 2020)
- HGFM: A Hierarchical Grained and Feature Model for Acoustic Emotion Recognition (☆11, updated Oct 30, 2020)
- MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation (☆1,009, updated Mar 10, 2024)
- This is the repository for "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors", Liu and Shen et al., ACL 2018 (☆273, updated May 31, 2020)
- This repository presents the UR-FUNNY dataset, the first dataset for multimodal humor detection (☆152, updated Jan 13, 2021)
- ☆83, updated Aug 9, 2021
- Contextual Inter-modal Attention for Multi-modal Sentiment Analysis (☆12, updated Feb 24, 2021)
- Multi-task Learning for Multi-modal Emotion Recognition and Sentiment Analysis (☆13, updated Mar 17, 2021)
- A real-time Multimodal Emotion Recognition web app for text, sound and video inputs (☆1,070, updated Apr 29, 2021)
- Official implementation of the paper "MSAF: Multimodal Split Attention Fusion" (☆81, updated Jun 16, 2021)