rpeloff / multimodal_one_shot_learning
Code recipe for "Multimodal One-Shot Learning of Speech and Images"
☆11 · Updated 7 years ago
Alternatives and similar repositories for multimodal_one_shot_learning
Users interested in multimodal_one_shot_learning are comparing it to the libraries listed below.
- Group Gated Fusion on Attention-based Bidirectional Alignment for Multimodal Emotion Recognition ☆14 · Updated 3 years ago
- ☆48 · Updated 6 years ago
- [ICLR 2019] Learning Factorized Multimodal Representations ☆67 · Updated 5 years ago
- ☆20 · Updated 3 years ago
- ☆67 · Updated 6 years ago
- Multimodal classification solution for the SIGIR eCOM using Co-attention and transformer language models ☆19 · Updated 5 years ago
- Multi-modal Multi-label Emotion Recognition with Heterogeneous Hierarchical Message Passing ☆18 · Updated 3 years ago
- Official implementation of the paper "MSAF: Multimodal Split Attention Fusion" ☆81 · Updated 4 years ago
- FG2021: Cross Attentional AV Fusion for Dimensional Emotion Recognition ☆33 · Updated last year
- PyTorch implementation of the CVPR 2021 paper "Distilling Audio-Visual Knowledge by Compositional Contrastive Learning" ☆89 · Updated 4 years ago
- PyTorch implementation of 'See, Hear, and Read: Deep Aligned Representations' ☆33 · Updated 7 years ago
- Accompanying code to reproduce the baselines of the International Multimodal Sentiment Analysis Challenge (MuSe 2020). ☆16 · Updated 3 years ago
- [AAAI 2018] Memory Fusion Network for Multi-view Sequential Learning ☆113 · Updated 5 years ago
- DeepCU: Integrating Both Common and Unique Latent Information for Multimodal Sentiment Analysis, IJCAI-19 ☆19 · Updated 6 years ago
- Philo: uniting modalities ☆26 · Updated 9 months ago
- Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition ☆31 · Updated 5 years ago
- Source code for training Gated Multimodal Units on the MM-IMDb dataset ☆100 · Updated 2 years ago
- Generalized cross-modal NNs; new audiovisual benchmark (IEEE TNNLS 2019) ☆30 · Updated 5 years ago
- Code for the AVLnet (Interspeech 2021) and Cascaded Multilingual (Interspeech 2021) papers. ☆53 · Updated 3 years ago
- Code for NAACL 2021 paper: MTAG: Modal-Temporal Attention Graph for Unaligned Human Multimodal Language Sequences ☆42 · Updated 2 years ago
- Implementation of the paper "Real-Time Emotion Recognition via Attention Gated Hierarchical Memory Network" in AAAI-2020. ☆31 · Updated 3 years ago
- Modality-Transferable-MER, multimodal emotion recognition model with zero-shot and few-shot abilities. ☆66 · Updated 4 years ago
- A survey of deep multimodal emotion recognition. ☆54 · Updated 3 years ago
- Code and dataset of "MEmoR: A Dataset for Multimodal Emotion Reasoning in Videos" in MM'20. ☆55 · Updated 2 years ago
- Multimodal Adversarial Network for Cross-modal Retrieval (PyTorch Code) ☆30 · Updated 5 years ago
- PyTorch implementation of Tensor Fusion Networks for multimodal sentiment analysis.