BAAI-WuDao / WuDaoMM
WuDaoMM is a data project
☆66 · Updated 2 years ago
Related projects
Alternatives and complementary repositories for WuDaoMM
- ☆63 · Updated 10 months ago
- ☆157 · Updated last year
- ☆59 · Updated last year
- Product1M ☆86 · Updated 2 years ago
- The implementations of various baselines in our CIKM 2022 paper: ChiQA: A Large Scale Image-based Real-World Question Answering Dataset f… ☆30 · Updated 5 months ago
- ☆57 · Updated last year
- ☆32 · Updated 2 years ago
- ☆19 · Updated 2 years ago
- Bling's object detection tool ☆55 · Updated last year
- UNIMO: Towards Unified-Modal Understanding and Generation via Cross-Modal Contrastive Learning ☆69 · Updated 3 years ago
- TaiSu (太素): a large-scale Chinese multimodal dataset (a hundred-million-scale Chinese vision-language pre-training dataset) ☆175 · Updated 11 months ago
- The world's first large-scale multi-modal short-video encyclopedia, where the primitive units are items, aspects, and short videos. ☆60 · Updated 11 months ago
- Chinese version of CLIP, which achieves Chinese cross-modal retrieval and representation generation. ☆164 · Updated 2 years ago
- PyTorch implementation of MVP: a multi-stage vision-language pre-training framework ☆33 · Updated last year
- A simple experiment with Ladder Side-Tuning on CLUE ☆19 · Updated 2 years ago
- ☆14 · Updated 7 months ago
- OFA-Compress is a unified framework that provides OFA model fine-tuning, distillation, and inference capabilities in the Hugging Face version, … ☆27 · Updated 2 years ago
- Source code for our EMNLP'21 paper "Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning" ☆56 · Updated 3 years ago
- The documentation of the WenLan API, used to obtain image and text features. ☆37 · Updated last year
- Touchstone: Evaluating Vision-Language Models by Language Models ☆76 · Updated 9 months ago
- NTK-scaled version of the ALiBi position encoding in Transformers ☆66 · Updated last year
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023) ☆87 · Updated last year
- Ongoing research training transformer language models at scale, including BERT & GPT-2 ☆19 · Updated last year
- This repo contains code and instructions for baselines in the VLUE benchmark. ☆41 · Updated 2 years ago
- Multitask Multilingual Multimodal Pre-training ☆70 · Updated last year
- ☆66 · Updated last year
- Bridging Vision and Language Model ☆279 · Updated last year
- ☆21 · Updated 11 months ago
- ACL 2024 (Findings): TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆48 · Updated last year