PaulLerner / ViQuAE
Source code and data used in the papers ViQuAE (Lerner et al., SIGIR'22), Multimodal ICT (Lerner et al., ECIR'23) and Cross-modal Retrieval (Lerner et al., ECIR'24)
☆38 · Updated last year
Alternatives and similar repositories for ViQuAE
Users interested in ViQuAE are comparing it to the libraries listed below.
- ☆42 · Updated 2 years ago
- [EMNLP 2023] InfoSeek: A New VQA Benchmark focused on Visual Info-Seeking Questions ☆25 · Updated last year
- Research code for "KAT: A Knowledge Augmented Transformer for Vision-and-Language" ☆69 · Updated 3 years ago
- ☆148 · Updated 3 years ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆134 · Updated 2 years ago
- ☆68 · Updated 2 years ago
- ☆61 · Updated last year
- [ICLR 2023] This is the code repo for our ICLR'23 paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Spa… ☆53 · Updated last year
- ☆108 · Updated 3 years ago
- This repo contains code and instructions for baselines in the VLUE benchmark. ☆41 · Updated 3 years ago
- a multimodal retrieval dataset ☆24 · Updated 2 years ago
- Official code for paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆176 · Updated last year
- Official Code of IdealGPT ☆35 · Updated 2 years ago
- VaLM: Visually-augmented Language Modeling. ICLR 2023. ☆56 · Updated 2 years ago
- [IEEE TMM 2025 & ACL 2024 Findings] LLMs as Bridges: Reformulating Grounded Multimodal Named Entity Recognition ☆36 · Updated 5 months ago
- [ICCV 2023 Oral] Open-domain Visual Entity Recognition: Towards Recognizing Millions of Wikipedia Entities ☆43 · Updated 7 months ago
- The released data for paper "Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models". ☆34 · Updated 2 years ago
- Code for EMNLP 2022 paper “Distilled Dual-Encoder Model for Vision-Language Understanding” ☆31 · Updated 2 years ago
- A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models (ACL 2022) ☆43 · Updated 3 years ago
- CVPR 2022 (Oral) PyTorch Code for Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment ☆22 · Updated 3 years ago
- ☆67 · Updated 2 years ago
- ☆25 · Updated 3 years ago
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆44 · Updated last year
- ☆19 · Updated 2 years ago
- MuKEA: Multimodal Knowledge Extraction and Accumulation for Knowledge-based Visual Question Answering ☆100 · Updated 2 years ago
- FaithScore: Fine-grained Evaluations of Hallucinations in Large Vision-Language Models ☆32 · Updated last month
- Official repository for the A-OKVQA dataset ☆108 · Updated last year
- CVPR 2021 Official PyTorch Code for UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training ☆34 · Updated 4 years ago
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆156 · Updated last year
- Data and code for ACL 2023 paper "RobuT: A Systematic Study of Table QA Robustness Against Human-Annotated Adversarial Perturbations" ☆15 · Updated last year