HimariO / HatefulMemesChallenge
⭐93 · Updated 2 years ago
Alternatives and similar repositories for HatefulMemesChallenge
Users who are interested in HatefulMemesChallenge are comparing it to the repositories listed below.
- 🥶 Vilio: State-of-the-art VL models in PyTorch & PaddlePaddle ⭐90 · Updated 2 years ago
- An implementation that downstreams pre-trained V+L models to VQA tasks. Now supports: VisualBERT, LXMERT, and UNITER ⭐164 · Updated 2 years ago
- ⭐66 · Updated last year
- ⭐131 · Updated 2 years ago
- [TACL 2021] Code and data for the framework in "Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-La… ⭐114 · Updated 3 years ago
- PyTorch code for EMNLP 2020 paper "Vokenization: Improving Language Understanding with Visual Supervision" ⭐189 · Updated 4 years ago
- ⭐45 · Updated last month
- Code for the paper "VisualBERT: A Simple and Performant Baseline for Vision and Language" ⭐536 · Updated 2 years ago
- ⭐104 · Updated 3 years ago
- [CVPR 2020] Transform and Tell: Entity-Aware News Image Captioning ⭐91 · Updated last year
- PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021) ⭐371 · Updated last year
- Supports extracting BUTD features for NLVR2 images. ⭐18 · Updated 4 years ago
- A collection of multimodal datasets and visual features for VQA and captioning in PyTorch. Just run "pip install multimodal" ⭐82 · Updated 3 years ago
- Research code for NeurIPS 2020 Spotlight paper "Large-Scale Adversarial Training for Vision-and-Language Representation Learning": UNITER… ⭐119 · Updated 4 years ago
- ⭐16 · Updated 3 years ago
- Multitask Multilingual Multimodal Pre-training ⭐72 · Updated 2 years ago
- Project page for VinVL ⭐356 · Updated last year
- Implementation of ConceptBert: Concept-Aware Representation for Visual Question Answering ⭐30 · Updated last year
- [CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias ⭐122 · Updated 3 years ago
- Pre-trained vision and language model summary ⭐13 · Updated 4 years ago
- PyTorch bottom-up attention with Detectron2 ⭐233 · Updated 3 years ago
- Supervised Multimodal Bitransformers for Classifying Images and Text ⭐256 · Updated 4 years ago
- ⭐26 · Updated 3 years ago
- Code and Resources for the Transformer Encoder Reasoning Network (TERN) - https://arxiv.org/abs/2004.09144 ⭐58 · Updated last year
- The source code of ACL 2020 paper: "Cross-Modality Relevance for Reasoning on Language and Vision" ⭐27 · Updated 4 years ago
- Code and data for "Broaden the Vision: Geo-Diverse Visual Commonsense Reasoning" (EMNLP 2021). ⭐28 · Updated 3 years ago
- Grid features pre-training code for visual question answering ⭐269 · Updated 3 years ago
- Code of Dense Relational Captioning ⭐69 · Updated 2 years ago
- A reading list of papers about Visual Question Answering. ⭐33 · Updated 2 years ago
- MERLOT: Multimodal Neural Script Knowledge Models ⭐224 · Updated 3 years ago