HimariO / HatefulMemesChallenge
★92 · Updated 2 years ago
Alternatives and similar repositories for HatefulMemesChallenge
Users interested in HatefulMemesChallenge are comparing it to the libraries listed below.
- 🥶 Vilio: State-of-the-art VL models in PyTorch & PaddlePaddle · ★88 · Updated last year
- ★62 · Updated last year
- Repository containing code from team Kingsterdam for the Hateful Memes Challenge · ★20 · Updated 2 years ago
- An implementation that downstreams pre-trained V+L models to VQA tasks. Now supports: VisualBERT, LXMERT, and UNITER · ★163 · Updated 2 years ago
- [TACL 2021] Code and data for the framework in "Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-La…" · ★114 · Updated 3 years ago
- [CVPR 2020] Transform and Tell: Entity-Aware News Image Captioning · ★90 · Updated last year
- Supports extracting BUTD features for NLVR2 images. · ★18 · Updated 4 years ago
- ★44 · Updated 2 years ago
- ★131 · Updated 2 years ago
- Research code for the NeurIPS 2020 Spotlight paper "Large-Scale Adversarial Training for Vision-and-Language Representation Learning": UNITER… · ★119 · Updated 4 years ago
- A collection of multimodal datasets and visual features for VQA and captioning in PyTorch. Just run "pip install multimodal" · ★82 · Updated 3 years ago
- PyTorch bottom-up attention with Detectron2 · ★233 · Updated 3 years ago
- PyTorch code for the EMNLP 2020 paper "Vokenization: Improving Language Understanding with Visual Supervision" · ★188 · Updated 4 years ago
- Detecting Hate Speech in Memes Using Multimodal Deep Learning Approaches: prize-winning solution to the Hateful Memes Challenge. https://arxi… · ★60 · Updated last year
- Grid features pre-training code for visual question answering · ★269 · Updated 3 years ago
- Multitask Multilingual Multimodal Pre-training · ★71 · Updated 2 years ago
- PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021) · ★372 · Updated last year
- ★102 · Updated 3 years ago
- Code and data for ImageCoDe, a contextual vision-and-language benchmark · ★39 · Updated last year
- The source code of the ACL 2020 paper "Cross-Modality Relevance for Reasoning on Language and Vision" · ★27 · Updated 4 years ago
- Repository for the Multilingual-VQA task created during the HuggingFace JAX/Flax community week · ★34 · Updated 3 years ago
- Code, models, and datasets for the OpenViDial dataset · ★131 · Updated 3 years ago
- Implementation of ConceptBert: Concept-Aware Representation for Visual Question Answering · ★29 · Updated last year
- Research code for the NeurIPS 2020 Spotlight paper "Large-Scale Adversarial Training for Vision-and-Language Representation Learning": LXMERT… · ★21 · Updated 4 years ago
- Code for Dense Relational Captioning · ★69 · Updated 2 years ago
- [CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias · ★121 · Updated 3 years ago
- Code for the paper "VisualBERT: A Simple and Performant Baseline for Vision and Language" · ★536 · Updated 2 years ago
- ★53 · Updated 3 years ago
- Visual Question Answering paper list · ★53 · Updated 2 years ago
- Code and resources for the Transformer Encoder Reasoning Network (TERN), https://arxiv.org/abs/2004.09144 · ★58 · Updated last year