benywon / ChiQA
Implementations of the baselines from our CIKM 2022 paper “ChiQA: A Large Scale Image-based Real-World Question Answering Dataset for Multi-Modal Understanding”.
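For context, ChiQA pairs a real user question with candidate images and asks whether each image answers the question. The snippet below is a minimal sketch of a zero-shot dual-encoder relevance baseline in that spirit, not code from this repo; the CLIP checkpoint name is an assumption chosen for illustration, and a Chinese-capable encoder would be needed for ChiQA's Chinese questions.

```python
# Minimal sketch (NOT from the ChiQA repo) of a zero-shot dual-encoder
# relevance baseline: score each candidate image against a question.
# The CLIP checkpoint is an assumption; ChiQA's questions are Chinese,
# so a Chinese-capable encoder would be needed in practice.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def score_images(question: str, images: list) -> list:
    """Return one CLIP image-text similarity score per candidate image."""
    inputs = processor(text=[question], images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # logits_per_image has shape (num_images, num_texts); one text here.
    return out.logits_per_image.squeeze(-1).tolist()

# Example: rank two candidate images for one question.
imgs = [Image.open("candidate_a.jpg"), Image.open("candidate_b.jpg")]
scores = score_images("How do I tie a bowline knot?", imgs)
print(sorted(zip(scores, ["a", "b"]), reverse=True))
```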
Alternatives and similar repositories for ChiQA:
Users interested in ChiQA are comparing it to the repositories listed below.
- WuDaoMM: a large-scale multimodal dataset project
- Product1M
- Code for EMNLP 2022 paper “Distilled Dual-Encoder Model for Vision-Language Understanding”
- This repo contains code and instructions for baselines in the VLUE benchmark.
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigation
- MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering. A comprehensive evaluation of multimodal large model multilingua…
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023)
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral)
- Multitask Multilingual Multimodal Pre-training
- UNIMO: Towards Unified-Modal Understanding and Generation via Cross-Modal Contrastive Learning
- Evaluation code and datasets for the ACL 2024 paper, VISTA: Visualized Text Embedding for Universal Multi-Modal Retrieval. The original c…
- [ICLR 2023] This is the code repo for our ICLR'23 paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Spa…
- A collection of visual instruction tuning datasets.
- All-In-One VLM: Image + Video + Transfer to Other Languages / Domains (TPAMI 2023)
- Code for our ACL 2021 paper: "Check It Again: Progressive Visual Question Answering via Visual Entailment"
- PyTorch implementation of MVP: a multi-stage vision-language pre-training framework
- This project aims to collect and collate various datasets for multimodal large model training, including but not limited to pre-training …
- CVPR 2021 Official PyTorch Code for UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training