PALMJJ / Multimodal-short-video-classification
A multimodal short-video classification task that integrates the video, image, audio, and text modalities to classify short videos.
☆19 · Updated 5 years ago
Alternatives and similar repositories for Multimodal-short-video-classification
Users interested in Multimodal-short-video-classification are comparing it to the repositories listed below.
- Multimodal Fusion, Multimodal Sentiment Analysis ☆23 · Updated 5 years ago
- Multimodal classification solution for the SIGIR eCOM using co-attention and transformer language models ☆19 · Updated 5 years ago
- A PyTorch implementation of emotion recognition from videos ☆19 · Updated 5 years ago
- ☆27 · Updated 4 years ago
- Code for selecting an action based on multimodal inputs; in this case the inputs are voice and text. ☆73 · Updated 4 years ago
- Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition ☆31 · Updated 4 years ago
- Official implementation of the paper "MSAF: Multimodal Split Attention Fusion" ☆81 · Updated 4 years ago
- Code for the IEEE Access (2020) paper "Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion" ☆122 · Updated 4 years ago
- This repository contains the implementation of the paper "Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment An…" ☆71 · Updated 2 years ago
- [AAAI 2020] Official implementation of VAANet for Emotion Recognition ☆81 · Updated 2 years ago
- A Transformer-based joint-encoding for Emotion Recognition and Sentiment Analysis ☆125 · Updated 9 months ago
- PyTorch implementation of Tensor Fusion Networks for multimodal sentiment analysis ☆193 · Updated 5 years ago
- CM-BERT: Cross-Modal BERT for Text-Audio Sentiment Analysis (MM 2020) ☆114 · Updated 5 years ago
- Multi-modal Multi-label Emotion Recognition with Heterogeneous Hierarchical Message Passing ☆18 · Updated 3 years ago
- Repository for "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors", Liu and Shen et al., ACL 2018 ☆270 · Updated 5 years ago
- DeepCU: Integrating Both Common and Unique Latent Information for Multimodal Sentiment Analysis, IJCAI-19 ☆19 · Updated 6 years ago
- ☆213 · Updated 3 years ago
- Code for the paper "Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis" ☆231 · Updated 3 years ago
- Multi-modal analysis of sentiment and emotion in multi-speaker conversations ☆27 · Updated 2 years ago
- Implementation of the paper "Real-Time Emotion Recognition via Attention Gated Hierarchical Memory Network" (AAAI 2020) ☆31 · Updated 3 years ago
- A paper list on multimodal sentiment analysis ☆32 · Updated 3 years ago
- Code for "Modality to Modality Translation: An Adversarial Representation Learning and Graph Fusion Network for Multimodal Fusion" ☆48 · Updated 4 years ago
- Code for the paper "VistaNet: Visual Aspect Attention Network for Multimodal Sentiment Analysis", AAAI 2019 ☆90 · Updated 2 years ago
- ☆19 · Updated 3 years ago
- ☆16 · Updated 5 years ago
- Philo: uniting modalities ☆26 · Updated 8 months ago
- FG 2021: Cross Attentional AV Fusion for Dimensional Emotion Recognition ☆33 · Updated last year
- [AAAI 2018] Memory Fusion Network for Multi-view Sequential Learning ☆114 · Updated 5 years ago
- This repository contains the official implementation of the paper "Improving Multimodal Fusion with Hierarchical Mutual Information M…" ☆193 · Updated 2 years ago
- ☆21 · Updated 5 years ago