declare-lab / hfusion
Multimodal sentiment analysis using hierarchical fusion with context modeling
☆44 Updated last year
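Hierarchical fusion, as the description suggests, combines modalities in stages rather than all at once. A minimal NumPy sketch of the idea, assuming bimodal pairs are fused first and then merged into a trimodal representation — weight names, dimensions, and the `dense` helper are hypothetical illustrations, not taken from this repository:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # shared embedding size (hypothetical)

def dense(x, w):
    # Tiny stand-in for a learned projection with a tanh nonlinearity.
    return np.tanh(x @ w)

# Hypothetical weights: one per modality pair, one for the final merge.
W_ta, W_tv, W_av = (rng.standard_normal((2 * d, d)) for _ in range(3))
W_tri = rng.standard_normal((3 * d, d))

def hierarchical_fuse(t, a, v):
    # Stage 1: fuse modality pairs (text+audio, text+video, audio+video).
    ta = dense(np.concatenate([t, a]), W_ta)
    tv = dense(np.concatenate([t, v]), W_tv)
    av = dense(np.concatenate([a, v]), W_av)
    # Stage 2: merge the three bimodal vectors into one trimodal vector.
    return dense(np.concatenate([ta, tv, av]), W_tri)

fused = hierarchical_fuse(*(rng.standard_normal(d) for _ in range(3)))
print(fused.shape)  # (8,)
```

The staged design lets the model weight each modality pair separately before committing to a single trimodal representation.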
Alternatives and similar repositories for hfusion:
Users interested in hfusion also compare it to the repositories listed below
- Context-Dependent Sentiment Analysis in User-Generated Videos ☆123 Updated last year
- Research on improving text sentiment analysis using facial features from video with machine learning. ☆32 Updated 7 years ago
- Contextual inter-modal attention for multimodal sentiment analysis ☆44 Updated 3 years ago
- ☆62 Updated 5 years ago
- Implementation of the paper "Select-Additive Learning: Improving Cross-individual Generalization in Multimodal Sentiment Analysis" ☆21 Updated 7 years ago
- [ICASSP19] An Interaction-aware Attention Network for Speech Emotion Recognition in Spoken Dialogs ☆35 Updated 4 years ago
- Multi-modal Emotion detection from IEMOCAP on Speech, Text, Motion-Capture Data using Neural Nets. ☆161 Updated 4 years ago
- Official PyTorch implementation of Multilogue-Net (Best paper runner-up at Challenge-HML @ ACL 2020) ☆57 Updated 2 years ago
- [AAAI 2018] Memory Fusion Network for Multi-view Sequential Learning ☆114 Updated 4 years ago
- ☆11 Updated 7 years ago
- Attention-based multimodal fusion for sentiment analysis ☆337 Updated 10 months ago
- Implementation of the paper "Hierarchical GRU for Utterance-level Emotion Recognition" in NAACL-2019. ☆68 Updated 4 years ago
- This repository contains the code for the paper `End-to-End Multimodal Emotion Recognition using Deep Neural Networks`. ☆239 Updated 4 years ago
- A Transformer-based joint-encoding for Emotion Recognition and Sentiment Analysis ☆120 Updated 2 years ago
- Implementation of the paper "Real-Time Emotion Recognition via Attention Gated Hierarchical Memory Network" in AAAI-2020. ☆30 Updated 2 years ago
- The code for our INTERSPEECH 2020 paper - Jointly Fine-Tuning "BERT-like" Self Supervised Models to Improve Multimodal Speech Emotion R… ☆117 Updated 3 years ago
- Accompanying code to reproduce the baselines of the International Multimodal Sentiment Analysis Challenge (MuSe 2020). ☆16 Updated 2 years ago
- This is a short tutorial for using the CMU-MultimodalSDK. ☆81 Updated 5 years ago
- This paper list is about multimodal sentiment analysis. ☆31 Updated 3 years ago
- Code for detecting sentiment in videos using a Convolutional Neural Network and Multiple Kernel Learning. ☆27 Updated 7 years ago
- Multimodal Affective Analysis Using Hierarchical Attention Strategy ☆12 Updated 6 years ago
- Fusion Modality Approaches for sentiment analysis and emotion recognition task. ☆12 Updated 4 years ago
- Codes for ACL2018 Multimodal Language Workshop paper ☆11 Updated 6 years ago
- ☆199 Updated 3 years ago
- ☆48 Updated 6 years ago
- TensorFlow implementation of "Attentive Modality Hopping for Speech Emotion Recognition," ICASSP-20 ☆32 Updated 4 years ago
- PyTorch implementation of Tensor Fusion Networks for multimodal sentiment analysis. ☆183 Updated 4 years ago
- Modality-Transferable-MER, a multimodal emotion recognition model with zero-shot and few-shot abilities. ☆61 Updated 3 years ago
- Live demo for speech emotion recognition using Keras and TensorFlow models ☆39 Updated 6 months ago
- Baseline scripts of the 8th Audio/Visual Emotion Challenge (AVEC 2018) ☆57 Updated 6 years ago
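Several entries above (e.g. the Tensor Fusion Networks implementation) build on outer-product fusion. A minimal NumPy sketch of that idea, assuming the published TFN formulation where a constant 1 is appended to each modality embedding so unimodal and bimodal terms survive in the product; the dimensions here are purely illustrative:

```python
import numpy as np

def tensor_fusion(z_text, z_audio, z_video):
    # Append a constant 1 to each modality embedding so the outer
    # product keeps unimodal and bimodal terms alongside the full
    # trimodal interactions.
    zt = np.append(z_text, 1.0)
    za = np.append(z_audio, 1.0)
    zv = np.append(z_video, 1.0)
    # Three-way outer product -> (dt+1) x (da+1) x (dv+1) tensor,
    # flattened into a single fusion vector for a classifier head.
    return np.einsum('i,j,k->ijk', zt, za, zv).ravel()

fused = tensor_fusion(np.ones(3), np.ones(2), np.ones(4))
print(fused.shape)  # (60,) i.e. 4 * 3 * 5
```

The trade-off versus hierarchical or attention-based fusion is that the fused vector grows multiplicatively with the modality dimensions, which is why later work often factorizes or attends over this tensor instead.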