A2Zadeh / CMU-MultimodalSDK
☆738 · Updated this week
Related projects:
- [ACL'19] [PyTorch] Multimodal Transformer ☆800 · Updated 2 years ago
- Attention-based multimodal fusion for sentiment analysis ☆323 · Updated 5 months ago
- MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation ☆787 · Updated 6 months ago
- This repository contains various models targeting multimodal representation learning and multimodal fusion for downstream tasks such as mul… ☆718 · Updated last year
- This is a short tutorial for using the CMU-MultimodalSDK. ☆77 · Updated 5 years ago
- PyTorch implementation of Tensor Fusion Networks for multimodal sentiment analysis. ☆171 · Updated 4 years ago
- MMSA is a unified framework for Multimodal Sentiment Analysis. ☆642 · Updated 8 months ago
- [AAAI 2018] Memory Fusion Network for Multi-view Sequential Learning ☆114 · Updated 4 years ago
- This repository contains the code for the paper `End-to-End Multimodal Emotion Recognition using Deep Neural Networks`. ☆231 · Updated 3 years ago
- A comprehensive reading list for Emotion Recognition in Conversations ☆251 · Updated 7 months ago
- A Transformer-based joint-encoding for Emotion Recognition and Sentiment Analysis ☆117 · Updated last year
- 🔆 📝 A reading list focused on Multimodal Emotion Recognition (MER) 👂👄 👀 💬 ☆120 · Updated 3 years ago
- Lightweight and Interpretable ML Model for Speech Emotion Recognition and Ambiguity Resolution (trained on IEMOCAP dataset) ☆396 · Updated 8 months ago
- ☆193 · Updated 2 years ago
- This repo contains implementations of different architectures for emotion recognition in conversations. ☆1,329 · Updated 6 months ago
- Context-Dependent Sentiment Analysis in User-Generated Videos ☆123 · Updated last year
- Multimodal Sarcasm Detection Dataset ☆302 · Updated 3 weeks ago
- Code for the paper "Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis" ☆181 · Updated 2 years ago
- Multi-modal emotion detection from IEMOCAP on speech, text, and motion-capture data using neural nets. ☆158 · Updated 3 years ago
- This is the repository for "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors", Liu and Shen et al., ACL 2018 ☆252 · Updated 4 years ago
- Fusion modality approaches for sentiment analysis and emotion recognition tasks. ☆12 · Updated 3 years ago
- MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis ☆187 · Updated last year
- [NeurIPS 2021] Multiscale Benchmarks for Multimodal Representation Learning ☆472 · Updated 7 months ago
- Paper list for Multimodal Sentiment Analysis ☆93 · Updated 3 years ago
- Official PyTorch implementation of Multilogue-Net (best paper runner-up at Challenge-HML @ ACL 2020) ☆59 · Updated last year
- Code for our IEEE Access (2020) paper "Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion". ☆109 · Updated 3 years ago
- TensorFlow implementation of "Multimodal Speech Emotion Recognition using Audio and Text," IEEE SLT-18 ☆254 · Updated 3 months ago
- PyTorch code for the EMNLP 2019 paper "LXMERT: Learning Cross-Modality Encoder Representations from Transformers". ☆925 · Updated last year
- BLOCK (AAAI 2019), with a multimodal fusion library for deep learning models ☆341 · Updated 4 years ago
- Human emotion understanding using a multimodal dataset. ☆81 · Updated 4 years ago