lupantech / IconQA
Data and code for NeurIPS 2021 Paper "IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning".
☆51 · Updated last year
Alternatives and similar repositories for IconQA:
Users interested in IconQA are comparing it to the libraries listed below.
- Source code for the paper "Prefix Language Models are Unified Modal Learners" ☆43 · Updated last year
- VaLM: Visually-augmented Language Modeling. ICLR 2023. ☆56 · Updated last year
- The SVO-Probes Dataset for Verb Understanding ☆31 · Updated 3 years ago
- Code for 'Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality', EMNLP 2022 ☆30 · Updated last year
- PyTorch code for Improving Commonsense in Vision-Language Models via Knowledge Graph Riddles (DANCE) ☆23 · Updated 2 years ago
- Research code for "KAT: A Knowledge Augmented Transformer for Vision-and-Language" ☆63 · Updated 2 years ago
- [TACL'23] VSR: A probing benchmark for spatial understanding of vision-language models. ☆112 · Updated last year
- Code for paper "Point and Ask: Incorporating Pointing into Visual Question Answering" ☆18 · Updated 2 years ago
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆43 · Updated 8 months ago
- ☆67 · Updated last year
- ☆32 · Updated 2 years ago
- Official Code for ACL 2023 Outstanding Paper: World-to-Words: Grounded Open Vocabulary Acquisition through Fast Mapping in Vision-Languag… ☆30 · Updated last year
- [TMLR 2024] Official implementation of "Sight Beyond Text: Multi-Modal Training Enhances LLMs in Truthfulness and Ethics" ☆19 · Updated last year
- CVPR 2022 (Oral) PyTorch code for Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment ☆22 · Updated 2 years ago
- Code for CVPR 2023 paper "Procedure-Aware Pretraining for Instructional Video Understanding" ☆48 · Updated 3 weeks ago
- EMNLP 2023 - InfoSeek: A New VQA Benchmark focused on Visual Info-Seeking Questions ☆18 · Updated 8 months ago
- The released data for paper "Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models". ☆32 · Updated last year
- Recursive Visual Programming (ECCV 2024) ☆17 · Updated 3 months ago
- Official repository for the General Robust Image Task (GRIT) Benchmark ☆51 · Updated last year
- PyTorch code for Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners ☆113 · Updated 2 years ago
- Multimodal-Procedural-Planning ☆92 · Updated last year
- ☆22 · Updated 2 years ago
- Official Code of IdealGPT ☆34 · Updated last year
- This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception o… ☆21 · Updated 2 months ago
- This repo contains code and instructions for baselines in the VLUE benchmark. ☆41 · Updated 2 years ago
- Code for LaMPP: Language Models as Probabilistic Priors for Perception and Action ☆35 · Updated last year
- [2024-ACL]: TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆47 · Updated last year
- Command-line tool for downloading and extending the RedCaps dataset. ☆46 · Updated last year
- Code, data, models for the Sherlock corpus ☆55 · Updated 2 years ago