benywon / ChiQA
Implementations of the baselines from our CIKM 2022 paper, "ChiQA: A Large Scale Image-based Real-World Question Answering Dataset for Multi-Modal Understanding."
☆33 · Updated last year
Alternatives and similar repositories for ChiQA
Users interested in ChiQA are comparing it to the repositories listed below.
- Product1M ☆87 · Updated 2 years ago
- WuDaoMM: a large-scale multi-modal dataset for pre-training models ☆74 · Updated 3 years ago
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023) ☆90 · Updated 2 years ago
- ☆66 · Updated last year
- All-In-One VLM: Image + Video + Transfer to Other Languages / Domains (TPAMI 2023) ☆163 · Updated 10 months ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆135 · Updated 2 years ago
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigation ☆95 · Updated last year
- ☆39 · Updated last year
- ☆45 · Updated 2 years ago
- PyTorch implementation of MVP: a multi-stage vision-language pre-training framework ☆33 · Updated 2 years ago
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆64 · Updated last year
- Code repo for the ICLR 2023 paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Space for Multi-Modal Retrieval" ☆51 · Updated last year
- ☆188 · Updated last year
- Evaluation code and datasets for the ACL 2024 paper "VISTA: Visualized Text Embedding for Universal Multi-Modal Retrieval". The original c… ☆40 · Updated 8 months ago
- This repo contains code and instructions for baselines in the VLUE benchmark.