benywon / ChiQA
Implementations of various baselines from our CIKM 2022 paper: ChiQA: A Large Scale Image-based Real-World Question Answering Dataset for Multi-Modal Understanding.
☆33 · Updated last year
Alternatives and similar repositories for ChiQA
Users interested in ChiQA are comparing it to the repositories listed below.
- Product1M ☆87 · Updated 2 years ago
- WuDaoMM: a large-scale multi-modal dataset for pre-training models ☆74 · Updated 3 years ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆135 · Updated 2 years ago
- ☆38 · Updated last year
- This repo contains code and instructions for baselines in the VLUE benchmark. ☆41 · Updated 2 years ago
- PyTorch implementation of MVP: a multi-stage vision-language pre-training framework ☆33 · Updated 2 years ago
- [ECCV2022] Contrastive Vision-Language Pre-training with Limited Resources ☆45 · Updated 2 years ago
- ☆66 · Updated last year
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆64 · Updated last year
- MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering. A comprehensive evaluation of multimodal large model multilingua… ☆59 · Updated last month
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating ☆95 · Updated last year
- ☆47 · Updated 9 months ago
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023) ☆90 · Updated 2 years ago
- [ICLR 2023] This is the code repo for our ICLR'23 paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Spa… ☆50 · Updated 11 months ago
- ☆64 · Updated last year
- ☆100 · Updated last year
- UNIMO: Towards Unified-Modal Understanding and Generation via Cross-Modal Contrastive Learning ☆70 · Updated 4 years ago
- FlatNCE: A Novel Contrastive Representation Learning Objective ☆90 · Updated 3 years ago
- ☆44 · Updated 2 years ago
- ☆23 · Updated 2 years ago
- Code for EMNLP 2022 paper "Distilled Dual-Encoder Model for Vision-Language Understanding" ☆30 · Updated 2 years ago
- ☆104 · Updated 3 years ago
- Evaluation code and datasets for the ACL 2024 paper, VISTA: Visualized Text Embedding for Universal Multi-Modal Retrieval. The original c… ☆38 · Updated 7 months ago
- ☆37 · Updated 11 months ago
- ☆49 · Updated last year
- SVIT: Scaling up Visual Instruction Tuning ☆163 · Updated last year
- TaiSu (太素): a large-scale Chinese multimodal dataset (a hundred-million-scale Chinese vision-language pre-training dataset) ☆188 · Updated last year
- ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration ☆56 · Updated 2 years ago
- ☆160 · Updated last year
- Multitask Multilingual Multimodal Pre-training ☆72 · Updated 2 years ago