MiloQ / MELD-Sentiment-Analysis
MultiModal Sentiment Analysis (Text and Audio) (PyTorch)
☆17Updated 3 years ago
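The repository itself provides PyTorch code for sentiment analysis from text and audio. As a rough orientation only, the sketch below shows one common way such a model is wired up: encode each modality separately, then fuse by concatenation (late fusion). It is not taken from this repository; the feature sizes (768-dim text embeddings, 74-dim acoustic features), the three-class output, and the fusion strategy are illustrative assumptions.

```python
# Minimal sketch of a text + audio late-fusion sentiment classifier in PyTorch.
# NOT the code from MiloQ/MELD-Sentiment-Analysis; dimensions and fusion choice
# are assumptions for illustration.
import torch
import torch.nn as nn

class TextAudioSentiment(nn.Module):
    def __init__(self, text_dim=768, audio_dim=74, hidden_dim=128, num_classes=3):
        super().__init__()
        # Project each modality's utterance-level features to a shared size.
        self.text_encoder = nn.Sequential(nn.Linear(text_dim, hidden_dim), nn.ReLU())
        self.audio_encoder = nn.Sequential(nn.Linear(audio_dim, hidden_dim), nn.ReLU())
        # Late fusion: concatenate the projected vectors, then classify.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, text_feats, audio_feats):
        fused = torch.cat(
            [self.text_encoder(text_feats), self.audio_encoder(audio_feats)], dim=-1
        )
        return self.classifier(fused)

# Random tensors stand in for BERT-style text and openSMILE/COVAREP-style audio features.
model = TextAudioSentiment()
logits = model(torch.randn(8, 768), torch.randn(8, 74))  # shape: (batch, num_classes)
```

Many of the repositories listed below differ mainly in how this fusion step is done (cross-modal attention, tensor fusion, modality-specific self-supervision) rather than in the per-modality encoders.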
Alternatives and similar repositories for MELD-Sentiment-Analysis
Users interested in MELD-Sentiment-Analysis are comparing it to the libraries listed below.
- Make Acoustic and Visual Cues Matter: CH-SIMS v2.0 Dataset and AV-Mixup Consistent Module☆80Updated 2 years ago
- ☆197Updated 2 years ago
- A Tool for extracting multimodal features from videos.☆176Updated 2 years ago
- Multi-Modality Multi-Loss Fusion Network☆127Updated last year
- Multimodal (text, acoustic, visual) Sentiment Analysis and Emotion Recognition on CMU-MOSEI dataset.☆27Updated 4 years ago
- ☆65Updated last year
- Codes for paper "Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis"☆218Updated 3 years ago
- MMSA is a unified framework for Multimodal Sentiment Analysis.☆841Updated 6 months ago
- M-SENA: All-in-One Platform for Multimodal Sentiment Analysis☆91Updated 3 years ago
- MultiEMO: An Attention-Based Correlation-Aware Multimodal Fusion Framework for Emotion Recognition in Conversations (ACL 2023)☆75Updated last year
- Efficient Multimodal Transformer with Dual-Level Feature Restoration for Robust Multimodal Sentiment Analysis (TAC 2023)☆62Updated 10 months ago
- ☆18Updated 2 years ago
- Codebase for EMNLP 2024 Findings Paper "Knowledge-Guided Dynamic Modality Attention Fusion Framework for Multimodal Sentiment Analysis"☆46Updated 8 months ago
- ☆33Updated 11 months ago
- This repository contains the official implementation code of the paper Transformer-based Feature Reconstruction Network for Robust Multim…☆37Updated 2 years ago
- Toolkits for Multimodal Emotion Recognition☆237Updated 2 months ago
- MultiModal Sentiment Analysis architectures for CMU-MOSEI.☆46Updated 2 years ago
- ☆17Updated 11 months ago
- Codes for AoM: Detecting Aspect-oriented Information for Multimodal Aspect-Based Sentiment Analysis☆43Updated 2 years ago
- CM-BERT: Cross-Modal BERT for Text-Audio Sentiment Analysis(MM2020)☆113Updated 4 years ago
- A demo for multi-modal emotion recognition.☆89Updated last year
- Data parser for the CMU-MultimodalSDK package including parsing for CMU-MOSEI, CMU-MOSI, and POM datasets☆34Updated 11 months ago
- [TMM2022] Source codes of CENet☆34Updated 2 years ago
- MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis☆247Updated 2 years ago
- ☆73Updated 8 months ago
- The official implementation of InstructERC☆141Updated 2 months ago
- Code for Findings of ACL 2021 paper: "A Text-Centered Shared-Private Framework via Cross-Modal Prediction for Multimodal Sentiment Analys…☆28Updated 2 years ago
- Source code for ICASSP 2022 paper "MM-DFN: Multimodal Dynamic Fusion Network For Emotion Recognition in Conversations".☆89Updated 2 years ago
- Learning Language-guided Adaptive Hyper-modality Representation for Multimodal Sentiment Analysis (ALMT)☆119Updated 4 months ago
- Code for MMLatch: Bottom-up Top-down Fusion for Multimodal Sentiment Analysis https://arxiv.org/abs/2201.09828 (to be presented in ICASSP…☆35Updated 2 years ago