HimariO / HatefulMemesChallengeLinks
☆93 · Updated 2 years ago
Alternatives and similar repositories for HatefulMemesChallenge
Users interested in HatefulMemesChallenge are comparing it to the libraries listed below.
- 🥶 Vilio: State-of-the-art VL models in PyTorch & PaddlePaddle ☆90 · Updated 2 years ago
- An implementation that downstreams pre-trained V+L models to VQA tasks. Now supports: VisualBERT, LXMERT, and UNITER ☆165 · Updated 2 years ago
- [TACL 2021] Code and data for the framework in "Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-La… ☆114 · Updated 3 years ago
- Supports extracting BUTD features for NLVR2 images. ☆18 · Updated 5 years ago
- ☆66 · Updated 2 years ago
- Research Code for NeurIPS 2020 Spotlight paper "Large-Scale Adversarial Training for Vision-and-Language Representation Learning": UNITER… ☆119 · Updated 4 years ago
- PyTorch code for "Unifying Vision-and-Language Tasks via Text Generation" (ICML 2021) ☆372 · Updated 2 years ago
- ☆44 · Updated 3 months ago
- PyTorch code for EMNLP 2020 Paper "Vokenization: Improving Language Understanding with Visual Supervision" ☆190 · Updated 4 years ago
- Code for the paper "VisualBERT: A Simple and Performant Baseline for Vision and Language" ☆536 · Updated 2 years ago
- ☆106 · Updated 3 years ago
- [CVPR 2020] Transform and Tell: Entity-Aware News Image Captioning ☆92 · Updated last year
- Supervised Multimodal Bitransformers for Classifying Images and Text ☆256 · Updated 4 years ago
- Research Code for NeurIPS 2020 Spotlight paper "Large-Scale Adversarial Training for Vision-and-Language Representation Learning": LXMERT… ☆21 · Updated 4 years ago
- Project page for VinVL ☆358 · Updated 2 years ago
- Repository containing code from team Kingsterdam for the Hateful Memes Challenge ☆22 · Updated 2 years ago
- A collection of multimodal datasets and visual features for VQA and captioning in PyTorch. Just run "pip install multimodal" ☆83 · Updated 3 years ago
- ☆23 · Updated last year
- Multitask Multilingual Multimodal Pre-training ☆71 · Updated 2 years ago
- ☆131 · Updated 2 years ago
- [CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias ☆125 · Updated 3 years ago
- ☆53 · Updated 3 years ago
- Grid features pre-training code for visual question answering ☆269 · Updated 4 years ago
- The source code of the ACL 2020 paper "Cross-Modality Relevance for Reasoning on Language and Vision" ☆27 · Updated 4 years ago
- [NeurIPS'20 Competition] Detecting Hate Speech in Memes Using Multimodal Deep Learning Approaches: Prize-winning solution to Hateful Meme… ☆62 · Updated last year
- METER: A Multimodal End-to-end TransformER Framework ☆373 · Updated 2 years ago
- Code and data for ImageCoDe, a contextual vision-and-language benchmark ☆41 · Updated last year
- Vision-Language Pre-training for Image Captioning and Question Answering ☆425 · Updated 3 years ago
- VisualCOMET: Reasoning about the Dynamic Context of a Still Image ☆88 · Updated 2 years ago
- PyTorch bottom-up attention with Detectron2 ☆235 · Updated 3 years ago