rpeloff / multimodal_one_shot_learning
Code recipe for "Multimodal One-Shot Learning of Speech and Images"
☆11 · Updated 7 years ago
Alternatives and similar repositories for multimodal_one_shot_learning
Users interested in multimodal_one_shot_learning are comparing it to the libraries listed below.
- [ICLR 2019] Learning Factorized Multimodal Representations ☆67 · Updated 5 years ago
- ☆48 · Updated 6 years ago
- Group Gated Fusion on Attention-based Bidirectional Alignment for Multimodal Emotion Recognition ☆14 · Updated 3 years ago
- ☆20 · Updated 3 years ago
- ☆67 · Updated 6 years ago
- Multimodal classification solution for the SIGIR eCOM using Co-attention and transformer language models ☆19 · Updated 5 years ago
- Multi-modal Multi-label Emotion Recognition with Heterogeneous Hierarchical Message Passing ☆18 · Updated 3 years ago
- Philo: uniting modalities ☆26 · Updated 10 months ago
- PyTorch implementation of the paper "Distilling Audio-Visual Knowledge by Compositional Contrastive Learning" (CVPR 2021) ☆89 · Updated 4 years ago
- Code for the NAACL 2021 paper: MTAG: Modal-Temporal Attention Graph for Unaligned Human Multimodal Language Sequences ☆42 · Updated 2 years ago
- [AAAI 2018] Memory Fusion Network for Multi-view Sequential Learning ☆113 · Updated 5 years ago
- Source code for training Gated Multimodal Units on the MM-IMDb dataset ☆100 · Updated 2 years ago
- Code for the AVLnet (Interspeech 2021) and Cascaded Multilingual (Interspeech 2021) papers. ☆53 · Updated 3 years ago
- DeepCU: Integrating Both Common and Unique Latent Information for Multimodal Sentiment Analysis, IJCAI-19 ☆19 · Updated 6 years ago
- Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition ☆31 · Updated 5 years ago
- This is the repository for "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors", Liu and Shen et al., ACL 2018 ☆270 · Updated 5 years ago
- PyTorch implementation of "See, Hear, and Read: Deep Aligned Representations" ☆33 · Updated 7 years ago
- Multi-modal analysis of sentiment and emotion in multi-speaker conversations. ☆27 · Updated 2 years ago
- Accompanying code to reproduce the baselines of the International Multimodal Sentiment Analysis Challenge (MuSe 2020). ☆16 · Updated 3 years ago
- Implementation of the paper "Real-Time Emotion Recognition via Attention Gated Hierarchical Memory Network" (AAAI 2020). ☆31 · Updated 3 years ago
- ☆12 · Updated 8 years ago
- Generalized cross-modal NNs; new audiovisual benchmark (IEEE TNNLS 2019) ☆30 · Updated 5 years ago
- Pre-training Cross-modal Transformer for Audio-and-Language Representations ☆38 · Updated 4 years ago
- Code for: Modality to Modality Translation: An Adversarial Representation Learning and Graph Fusion Network for Multimodal Fusion ☆48 · Updated 4 years ago
- Deep Multimodal Multilinear Fusion with High-order Polynomial Pooling ☆26 · Updated 6 years ago
- Official implementation of the paper "MSAF: Multimodal Split Attention Fusion" ☆81 · Updated 4 years ago
- PyTorch implementation of Tensor Fusion Networks for multimodal sentiment analysis. ☆195 · Updated 5 years ago
- Adversarial Unsupervised Domain Adaptation for Acoustic Scene Classification ☆37 · Updated 7 years ago
- FG2021: Cross Attentional AV Fusion for Dimensional Emotion Recognition ☆33 · Updated last year
- Implementation of "Audio Retrieval with Natural Language Queries", INTERSPEECH 2021, PyTorch ☆26 · Updated 2 years ago