allenai / aokvqa
Official repository for the A-OKVQA dataset
☆73 · Updated 9 months ago
Alternatives and similar repositories for aokvqa:
Users interested in aokvqa are comparing it to the repositories listed below.
- ☆64 · Updated 5 years ago
- Source code for the EMNLP 2022 paper "PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models" ☆48 · Updated 2 years ago
- ☆28 · Updated 3 months ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆134 · Updated last year
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆141 · Updated 9 months ago
- [ICCV 2023 (Oral)] Open-domain Visual Entity Recognition: Towards Recognizing Millions of Wikipedia Entities ☆37 · Updated 5 months ago
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆196 · Updated 10 months ago
- ☆39 · Updated last year
- The SVO-Probes Dataset for Verb Understanding ☆31 · Updated 3 years ago
- Repo for the paper "Paxion: Patching Action Knowledge in Video-Language Foundation Models" (NeurIPS 2023 Spotlight) ☆37 · Updated last year
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) ☆204 · Updated 2 years ago
- NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions (CVPR'21) ☆142 · Updated 6 months ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆62 · Updated 3 months ago
- M-HalDetect Dataset Release ☆21 · Updated last year
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (CVPR 2024) ☆43 · Updated 7 months ago
- [CVPR 2022] A large-scale public benchmark dataset for video question-answering, especially about evidence and commonsense reasoning. The… ☆52 · Updated 7 months ago
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU ☆45 · Updated last year
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆78 · Updated 10 months ago
- ☆67 · Updated last year
- ☆31 · Updated last year
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆37 · Updated 3 months ago
- NegCLIP ☆30 · Updated 2 years ago
- [TACL'23] VSR: A probing benchmark for spatial understanding of vision-language models ☆112 · Updated last year
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆67 · Updated 8 months ago
- Code and instructions for baselines in the VLUE benchmark ☆41 · Updated 2 years ago
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆81 · Updated last year
- PyTorch code for "Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners" ☆113 · Updated 2 years ago
- ☆58 · Updated last year
- Colorful Prompt Tuning for Pre-trained Vision-Language Models ☆48 · Updated 2 years ago
- [EMNLP 2023] InfoSeek: A New VQA Benchmark focused on Visual Info-Seeking Questions ☆18 · Updated 8 months ago