yangjianxin1 / ClipCap-Chinese
A Chinese image captioning model based on ClipCap
☆296 · Updated 2 years ago
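For context, ClipCap (the approach this repo builds on) maps a CLIP image embedding into a short sequence of "prefix" embeddings that are prepended to a GPT-2 decoder, which then generates the caption. Below is a minimal, hedged sketch of that idea in PyTorch with Hugging Face transformers; the class name, dimensions, and the `gpt2` checkpoint are illustrative placeholders rather than this repository's actual code (a Chinese GPT-2 checkpoint would normally be substituted here).

```python
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel


class ClipCaptionSketch(nn.Module):
    """Illustrative ClipCap-style prefix mapping (not the repo's code)."""

    def __init__(self, clip_dim: int = 512, prefix_len: int = 10,
                 gpt2_name: str = "gpt2"):  # placeholder checkpoint name
        super().__init__()
        self.prefix_len = prefix_len
        self.gpt2 = GPT2LMHeadModel.from_pretrained(gpt2_name)
        gpt_dim = self.gpt2.config.n_embd
        # MLP that maps one CLIP image vector to prefix_len GPT-2 embeddings
        self.mapper = nn.Sequential(
            nn.Linear(clip_dim, gpt_dim * prefix_len),
            nn.Tanh(),
            nn.Linear(gpt_dim * prefix_len, gpt_dim * prefix_len),
        )

    def forward(self, clip_features: torch.Tensor, caption_ids: torch.Tensor):
        # clip_features: (batch, clip_dim); caption_ids: (batch, seq_len)
        batch = clip_features.size(0)
        prefix = self.mapper(clip_features).view(batch, self.prefix_len, -1)
        token_embeds = self.gpt2.transformer.wte(caption_ids)
        # Prepend the image-derived prefix to the caption token embeddings
        inputs_embeds = torch.cat([prefix, token_embeds], dim=1)
        return self.gpt2(inputs_embeds=inputs_embeds)
```

At training time the tokens after the prefix are supervised with a standard language-modeling loss; at inference the caption is decoded autoregressively from the prefix alone.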
Alternatives and similar repositories for ClipCap-Chinese:
Users interested in ClipCap-Chinese are comparing it to the repositories listed below
- Chinese CLIP pre-trained model ☆402 · Updated 2 years ago
- Chinese OFA model built on the transformers architecture ☆126 · Updated 2 years ago
- Chinese version of CLIP which achieves Chinese cross-modal retrieval and representation generation. ☆167 · Updated 2 years ago
- Enriching MS-COCO with Chinese sentences and tags for cross-lingual multimedia tasks ☆189 · Updated last month
- VLE: Vision-Language Encoder (a vision-language multimodal pre-trained model) ☆188 · Updated 2 years ago
- Cross-lingual image captioning ☆85 · Updated 2 years ago
- Bridging Vision and Language Model ☆283 · Updated last year
- An open-source, commercially usable multimodal model supporting bilingual Chinese-English vision-text dialogue. ☆364 · Updated last year
- Chinese image captioning with visual attention ☆185 · Updated 5 years ago
- ☆159 · Updated last year
- Update 2020 ☆75 · Updated 2 years ago
- Computer vision course project: an image-text retrieval system based on Chinese-CLIP ☆58 · Updated last year
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆471 · Updated 2 years ago
- Search photos on Unsplash with OpenAI's CLIP model; supports joint image+text queries and attention visualization. ☆216 · Updated 3 years ago
- Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Pre-training Dataset and Benchmarks ☆292 · Updated last year
- [AAAI2021] The code of "Similarity Reasoning and Filtration for Image-Text Matching" ☆213 · Updated 11 months ago
- Implementation of CVPR 2023 paper "Prompting Large Language Models with Answer Heuristics for Knowledge-based Visual Question Answering". ☆271 · Updated last year
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆924 · Updated 11 months ago
- A project that generates classical Chinese poems from pictures, using CLIP, T5, and GPT2 models ☆21 · Updated last month
- TaiSu (太素): a large-scale Chinese multimodal dataset (a 100-million-scale Chinese vision-language pre-training dataset) ☆179 · Updated last year
- This project retrieves images that match an input text description. ☆36 · Updated last year
- A video captioning deep learning model implemented in PyTorch with a Transformer architecture. The video captioning task: given a video, output one sentence describing its overall content (assuming the video is short enough to be described in a single sentence). The main goal of this repo is to help visually impaired… ☆85 · Updated 3 years ago
- ☆58 · Updated 2 years ago
- Research Code for Multimodal-Cognition Team in Ant Group ☆138 · Updated 8 months ago
- Implementation of our CVPR2022 paper, Negative-Aware Attention Framework for Image-Text Matching. ☆111 · Updated last year
- ☆65 · Updated last year
- Bling's object detection tool ☆56 · Updated 2 years ago
- Train an image captioning model from pretrained ViT and GPT models ☆15 · Updated last year
- mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023) ☆223 · Updated last year