benywon / ChiQA
Implementations of various baselines from our CIKM 2022 paper "ChiQA: A Large Scale Image-based Real-World Question Answering Dataset for Multi-Modal Understanding".
☆33 · Updated last year
Alternatives and similar repositories for ChiQA
Users interested in ChiQA are comparing it to the repositories listed below.
- WuDaoMM: a large-scale multi-modal data project ☆74 · Updated 3 years ago
- Product1M ☆87 · Updated 2 years ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆135 · Updated 2 years ago
- All-In-One VLM: Image + Video + Transfer to Other Languages / Domains (TPAMI 2023) ☆165 · Updated last year
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating ☆98 · Updated last year
- Enriching MS-COCO with Chinese sentences and tags for cross-lingual multimedia tasks ☆206 · Updated 7 months ago
- ☆66 · Updated last year
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023) ☆91 · Updated 2 years ago
- This repo contains code and instructions for the baselines in the VLUE benchmark. ☆41 · Updated 3 years ago
- ☆188 · Updated last year
- ☆167 · Updated last year
- ☆46 · Updated 3 years ago
- ☆92 · Updated last week
- [ECCV 2022] Contrastive Vision-Language Pre-training with Limited Resources ☆45 · Updated 3 years ago
- [CVPR'21 Oral] Seeing Out of tHe bOx: End-to-End Pre-training for Vision-Language Representation Learning ☆208 · Updated 3 years ago
- ☆48 · Updated last year
- ☆100 · Updated last year
- TaiSu (太素): a large-scale Chinese multimodal dataset (a hundred-million-scale Chinese vision-language pre-training dataset) ☆191 · Updated last year
- ☆40 · Updated 2 years ago
- Code for our ACL 2021 paper "Check It Again: Progressive Visual Question Answering via Visual Entailment" ☆31 · Updated 3 years ago
- A collection of visual instruction tuning datasets. ☆76 · Updated last year
- ☆61 · Updated 2 years ago
- ☆56 · Updated 3 months ago
- Bridging Vision and Language Model ☆285 · Updated 2 years ago
- ☆106 · Updated 3 years ago
- [ICLR 2023] This is the code repo for our ICLR'23 paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Spa… ☆53 · Updated last year
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆64 · Updated last year
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning ☆287 · Updated last year
- Evaluation code and datasets for the ACL 2024 paper VISTA: Visualized Text Embedding for Universal Multi-Modal Retrieval. The original c… ☆42 · Updated 10 months ago
- Code for TCL: Vision-Language Pre-Training with Triple Contrastive Learning, CVPR 2022 ☆266 · Updated last year