Muennighoff / vilio
🥶Vilio: State-of-the-art VL models in PyTorch & PaddlePaddle
⭐88 · Updated last year
Alternatives and similar repositories for vilio:
Users that are interested in vilio are comparing it to the libraries listed below
- ⭐91 · Updated 2 years ago
- Repository containing code from team Kingsterdam for the Hateful Memes Challenge ⭐20 · Updated 2 years ago
- An implementation that downstreams pre-trained V+L models to VQA tasks. Now supports: VisualBERT, LXMERT, and UNITER ⭐163 · Updated 2 years ago
- ⭐60 · Updated last year
- [CVPR 2020] Transform and Tell: Entity-Aware News Image Captioning ⭐90 · Updated 9 months ago
- A collection of multimodal datasets and visual features for VQA and captioning in PyTorch. Just run "pip install multimodal" ⭐81 · Updated 2 years ago
- [TACL 2021] Code and data for the framework in "Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-La…" ⭐114 · Updated 2 years ago
- Detecting Hate Speech in Memes Using Multimodal Deep Learning Approaches: prize-winning solution to the Hateful Memes Challenge. https://arxi… ⭐56 · Updated last year
- PyTorch code for the EMNLP 2020 paper "Vokenization: Improving Language Understanding with Visual Supervision" ⭐186 · Updated 3 years ago
- PyTorch bottom-up attention with Detectron2 ⭐231 · Updated 3 years ago
- BERT + Image Captioning ⭐132 · Updated 4 years ago
- Research code for the NeurIPS 2020 spotlight paper "Large-Scale Adversarial Training for Vision-and-Language Representation Learning": UNITER… ⭐119 · Updated 4 years ago
- Code for Dense Relational Captioning ⭐68 · Updated last year
- ⭐44 · Updated 2 years ago
- ⭐131 · Updated 2 years ago
- Code and resources for the Transformer Encoder Reasoning Network (TERN) - https://arxiv.org/abs/2004.09144 ⭐57 · Updated last year
- Support for extracting BUTD features for NLVR2 images ⭐18 · Updated 4 years ago
- Grid-features pre-training code for visual question answering ⭐268 · Updated 3 years ago
- ⭐40 · Updated 2 years ago
- 🥉 Codalab-Microsoft-COCO-Image-Captioning-Challenge 3rd place solution (06.30.21) ⭐23 · Updated 2 years ago
- Repository for the Multilingual-VQA task created during the HuggingFace JAX/Flax community week ⭐34 · Updated 3 years ago
- Dataset and starting code for the visual entailment dataset ⭐109 · Updated 2 years ago
- Implementation of the Object Relation Transformer for image captioning ⭐177 · Updated 4 months ago
- Show, Edit and Tell: A Framework for Editing Image Captions, CVPR 2020 ⭐80 · Updated 4 years ago
- Good News Everyone! - CVPR 2019 ⭐128 · Updated 2 years ago
- MERLOT: Multimodal Neural Script Knowledge Models ⭐223 · Updated 2 years ago
- A self-evident application of the VQA task is to design systems that aid blind people with sight-reliant queries. The VizWiz VQA dataset … ⭐15 · Updated last year
- ⭐53 · Updated 3 years ago
- Transformer-based image captioning extension for pytorch/fairseq ⭐314 · Updated 4 years ago
- ⭐101 · Updated 2 years ago