yaohungt / Multimodal-Transformer
[ACL'19] [PyTorch] Multimodal Transformer
☆868 · Updated 2 years ago
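The Multimodal Transformer (MulT, ACL 2019) is built around directional crossmodal attention, in which the sequence of one modality queries another modality via multi-head attention. Below is a minimal PyTorch sketch of such a block for orientation only; the class name, dimensions, and layer layout are illustrative assumptions and do not mirror the repository's actual code.

```python
import torch
import torch.nn as nn

class CrossmodalAttentionBlock(nn.Module):
    """Illustrative sketch (not the repo's implementation): the target modality
    attends to the source modality via multi-head attention, followed by a
    residual connection and a position-wise feed-forward layer."""

    def __init__(self, embed_dim=40, num_heads=5, dropout=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, dropout=dropout)
        self.norm_q = nn.LayerNorm(embed_dim)
        self.norm_kv = nn.LayerNorm(embed_dim)
        self.norm_out = nn.LayerNorm(embed_dim)
        self.ffn = nn.Sequential(
            nn.Linear(embed_dim, 4 * embed_dim),
            nn.ReLU(),
            nn.Linear(4 * embed_dim, embed_dim),
        )

    def forward(self, target, source):
        # target: (T_tgt, batch, embed_dim); source: (T_src, batch, embed_dim)
        q = self.norm_q(target)
        kv = self.norm_kv(source)
        attended, _ = self.attn(q, kv, kv)       # target queries the source modality
        x = target + attended                    # residual connection
        return x + self.ffn(self.norm_out(x))    # position-wise feed-forward

# Example: a language sequence (length 50) attending to an audio sequence (length 375);
# sequence lengths need not be aligned across modalities.
lang = torch.randn(50, 8, 40)
audio = torch.randn(375, 8, 40)
out = CrossmodalAttentionBlock()(lang, audio)    # shape: (50, 8, 40)
```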
Alternatives and similar repositories for Multimodal-Transformer:
Users interested in Multimodal-Transformer are comparing it to the libraries listed below.
- This repository contains various models targeting multimodal representation learning, multimodal fusion for downstream tasks such as mul… ☆822 · Updated 2 years ago
- ☆199 · Updated 3 years ago
- [NeurIPS 2021] Multiscale Benchmarks for Multimodal Representation Learning ☆534 · Updated last year
- This is the repository for "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors", Liu and Shen, et al., ACL 2018 ☆262 · Updated 4 years ago
- Attention-based multimodal fusion for sentiment analysis ☆347 · Updated last year
- MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis ☆228 · Updated 2 years ago
- PyTorch implementation of Tensor Fusion Networks for multimodal sentiment analysis. ☆187 · Updated 5 years ago
- MMSA is a unified framework for Multimodal Sentiment Analysis. ☆776 · Updated 2 months ago
- Code for the paper "Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis" ☆207 · Updated 2 years ago
- This repository contains the official implementation code of the paper Improving Multimodal Fusion with Hierarchical Mutual Information M… ☆178 · Updated 2 years ago
- A curated list of Multimodal Related Research. ☆1,343 · Updated last year
- PyTorch code for EMNLP 2019 paper "LXMERT: Learning Cross-Modality Encoder Representations from Transformers". ☆947 · Updated 2 years ago
- Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" ☆1,447 · Updated last year
- [AAAI 2018] Memory Fusion Network for Multi-view Sequential Learning ☆114 · Updated 4 years ago
- Paper List for Multimodal Sentiment Analysis ☆99 · Updated 4 years ago
- A Transformer-based joint-encoding for Emotion Recognition and Sentiment Analysis ☆123 · Updated last month
- Multi Task Vision and Language ☆811 · Updated 3 years ago
- This is a short tutorial for using the CMU-MultimodalSDK. ☆84 · Updated 6 years ago
- ☆236 · Updated last year
- BLOCK (AAAI 2019), with a multimodal fusion library for deep learning models ☆350 · Updated 5 years ago
- Code for ICLR 2020 paper "VL-BERT: Pre-training of Generic Visual-Linguistic Representations". ☆740 · Updated last year
- Recent Advances in Vision and Language PreTrained Models (VL-PTMs) ☆1,152 · Updated 2 years ago
- ☆168 · Updated 5 years ago
- A Tool for extracting multimodal features from videos. ☆162 · Updated 2 years ago
- Official implementation of the paper "MSAF: Multimodal Split Attention Fusion" ☆80 · Updated 3 years ago
- Code for the paper "VisualBERT: A Simple and Performant Baseline for Vision and Language" ☆533 · Updated last year
- Deep Modular Co-Attention Networks for Visual Question Answering ☆451 · Updated 4 years ago
- Research code for ECCV 2020 paper "UNITER: UNiversal Image-TExt Representation Learning" ☆792 · Updated 3 years ago
- METER: A Multimodal End-to-end TransformER Framework ☆368 · Updated 2 years ago
- ☆184 · Updated last year