li-xirong / coco-cn
Enriching MS-COCO with Chinese sentences and tags for cross-lingual multimedia tasks
☆196 · Updated 3 months ago
Alternatives and similar repositories for coco-cn
Users interested in coco-cn are comparing it to the libraries listed below.
- Cross-lingual image captioning ☆87 · Updated 3 years ago
- Bridging Vision and Language Model ☆283 · Updated 2 years ago
- ☆160 · Updated last year
- ☆66 · Updated last year
- A Chinese OFA model implemented with the transformers architecture ☆135 · Updated 2 years ago
- ☆59 · Updated 2 years ago
- Chinese version of CLIP that supports Chinese cross-modal retrieval and representation generation. ☆167 · Updated 2 years ago
- Bling's Object detection tool ☆56 · Updated 2 years ago
- TaiSu (太素): a large-scale Chinese multimodal dataset (a hundred-million-scale Chinese vision-language pre-training dataset) ☆183 · Updated last year
- WuDaoMM: a multimodal data project ☆73 · Updated 3 years ago
- ☆69 · Updated this week
- ☆244 · Updated 2 years ago
- ☆188 · Updated last year
- [AAAI 2021] Code for “Similarity Reasoning and Filtration for Image-Text Matching” ☆215 · Updated last year
- Research code for EMNLP 2020 paper "HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training" ☆232 · Updated 3 years ago
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023) ☆90 · Updated last year
- ☆32 · Updated 2 years ago
- Image Caption metrics: BLEU, CIDEr, METEOR, ROUGE, SPICE (see the metrics sketch after this list) ☆110 · Updated 6 years ago
- An image captioning model based on ClipCap ☆302 · Updated 3 years ago
- Chinese image captioning ☆99 · Updated 6 years ago
- Product1M ☆87 · Updated 2 years ago
- [CVPR'21 Oral] Seeing Out of tHe bOx: End-to-End Pre-training for Vision-Language Representation Learning ☆208 · Updated 2 years ago
- ☆19 · Updated 3 years ago
- Documentation of the WenLan API, used to obtain image and text features. ☆37 · Updated 2 years ago
- Project page for VinVL ☆355 · Updated last year
- Code accompanying the paper "Fine-grained Video-Text Retrieval with Hierarchical Graph Reasoning". ☆211 · Updated 4 years ago
- Code for fluency-guided cross-lingual image captioning ☆31 · Updated 7 years ago
- Search photos on Unsplash based on OpenAI's CLIP model, supporting search with joint image+text queries and attention visualization (see the CLIP retrieval sketch after this list) ☆222 · Updated 3 years ago
- The implementations of various baselines in our CIKM 2022 paper: ChiQA: A Large Scale Image-based Real-World Question Answering Dataset f… ☆33 · Updated last year
- Position Focused Attention Network for Image-Text Matching ☆69 · Updated 5 years ago
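
The caption-metrics entry above covers the standard COCO evaluation suite (BLEU, CIDEr, METEOR, ROUGE, SPICE). As a minimal sketch only, here is how BLEU and CIDEr are typically computed with the widely used pycocoevalcap package; this is an assumption for illustration and not necessarily the API of the listed repository.

```python
# Minimal sketch: caption metrics with pycocoevalcap (pip install pycocoevalcap).
# Illustrative only; the listed repo may expose a different interface.
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.cider.cider import Cider

# Both scorers expect dicts keyed by image id:
#   references: {image_id: [reference caption 1, reference caption 2, ...]}
#   hypotheses: {image_id: [generated caption]}
references = {
    "img1": ["a man rides a horse on the beach", "a person riding a horse"],
}
hypotheses = {
    "img1": ["a man is riding a horse"],
}

bleu_scores, _ = Bleu(4).compute_score(references, hypotheses)
cider_score, _ = Cider().compute_score(references, hypotheses)

print("BLEU-1..4:", bleu_scores)  # list of four corpus-level BLEU values
print("CIDEr:", cider_score)
```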
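
Several entries above (the Chinese CLIP port and the Unsplash CLIP search) are built around CLIP-style cross-modal retrieval: text and images are embedded into a shared space and ranked by similarity. The sketch below uses the Hugging Face transformers implementation of the original English CLIP to show the basic idea; the checkpoint name and image paths are placeholders, and the Chinese variants listed above ship their own pretrained weights.

```python
# Sketch of CLIP-style text-to-image retrieval with Hugging Face transformers.
# Checkpoint name and image paths are placeholders, not from the listed repos.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Candidate images to search over (placeholder file names).
images = [Image.open(p) for p in ["cat.jpg", "dog.jpg", "beach.jpg"]]
query = "a dog playing on the grass"

inputs = processor(text=[query], images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_text has shape (num_texts, num_images); higher means more similar.
scores = outputs.logits_per_text.softmax(dim=-1)[0]
best = scores.argmax().item()
print(f"Best match: image {best} with probability {scores[best]:.3f}")
```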