yangjianxin1 / ClipCap-Chinese
A Chinese image-captioning ("look and tell") model based on ClipCap
☆294 · Updated 2 years ago
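ClipCap's core idea is to map a frozen CLIP image embedding, through a small mapping network, into a sequence of prefix embeddings that a GPT-2-style decoder then conditions on to generate the caption. A minimal NumPy sketch of that prefix mapping follows; the dimensions, weight shapes, and function name are illustrative assumptions, not code from this repository:

```python
import numpy as np

def clipcap_prefix(clip_embed, W1, W2, prefix_len=10, gpt_dim=768):
    # A two-layer MLP maps one CLIP image embedding to `prefix_len`
    # pseudo-token embeddings in the language model's embedding space.
    h = np.tanh(clip_embed @ W1)
    return (h @ W2).reshape(prefix_len, gpt_dim)

rng = np.random.default_rng(0)
clip_dim, hidden, prefix_len, gpt_dim = 512, 1024, 10, 768
W1 = rng.standard_normal((clip_dim, hidden)) * 0.02
W2 = rng.standard_normal((hidden, prefix_len * gpt_dim)) * 0.02

img_embed = rng.standard_normal(clip_dim)   # stand-in for a CLIP image embedding
prefix = clipcap_prefix(img_embed, W1, W2)
print(prefix.shape)  # → (10, 768)
```

In the real model these prefix embeddings are prepended to the caption token embeddings, and only the mapping network (and optionally the decoder) is trained.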
Alternatives and similar repositories for ClipCap-Chinese:
Users interested in ClipCap-Chinese are comparing it to the repositories listed below:
- Chinese CLIP pre-trained model ☆399 · Updated 2 years ago
- Bridging Vision and Language Model ☆280 · Updated last year
- Chinese OFA model built on the transformers architecture ☆123 · Updated 2 years ago
- Chinese version of CLIP which achieves Chinese cross-modal retrieval and representation generation. ☆167 · Updated 2 years ago
- VLE: Vision-Language Encoder (a vision-language multimodal pre-trained model) ☆187 · Updated last year
- Update 2020 ☆75 · Updated 2 years ago
- Cross-lingual image captioning ☆84 · Updated 2 years ago
- Enriching MS-COCO with Chinese sentences and tags for cross-lingual multimedia tasks ☆187 · Updated this week
- ☆158 · Updated last year
- Chinese image captioning with visual attention ☆185 · Updated 5 years ago
- Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Pre-training Dataset and Benchmarks ☆292 · Updated last year
- ☆239 · Updated 2 years ago
- An open-source, commercially usable multimodal model supporting bilingual (Chinese/English) vision-text dialogue. ☆362 · Updated last year
- Implementation of CVPR 2023 paper "Prompting Large Language Models with Answer Heuristics for Knowledge-based Visual Question Answering". ☆271 · Updated last year
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆908 · Updated 10 months ago
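CLIP4Clip studies how to aggregate per-frame CLIP embeddings into one video representation for text-video retrieval; its simplest variant is parameter-free mean pooling. A hedged sketch of that aggregation (function names and shapes are mine, not the repo's API):

```python
import numpy as np

def video_embedding(frame_embeds):
    """Parameter-free aggregation: average the per-frame CLIP embeddings,
    then renormalize to unit length (the mean-pooling variant)."""
    v = frame_embeds.mean(axis=0)
    return v / np.linalg.norm(v)

def text_video_similarity(text_embed, frame_embeds):
    # Cosine similarity between a text query and the pooled video embedding.
    t = text_embed / np.linalg.norm(text_embed)
    return float(t @ video_embedding(frame_embeds))

# Toy example: two orthogonal unit "frames" and a text query aligned with frame 0.
frames = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0]])
sim = text_video_similarity(np.array([1.0, 0.0, 0.0, 0.0]), frames)
print(round(sim, 4))  # → 0.7071
```

Retrieval then ranks candidate videos by this similarity; the paper's heavier variants replace mean pooling with a learned sequence or tight cross-modal encoder.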
- [AAAI2021] The code of “Similarity Reasoning and Filtration for Image-Text Matching” ☆212 · Updated 10 months ago
- A video captioning deep-learning model implemented on PyTorch with a Transformer architecture. Video captioning takes a video as input and outputs one sentence describing its content (assuming the video is short enough to be described in a single sentence). The repo's main goal is to help the visually impaired… ☆83 · Updated 2 years ago
- ☆57 · Updated 2 years ago
- Search photos on Unsplash based on OpenAI's CLIP model, supporting search with joint image+text queries and attention visualization. ☆214 · Updated 3 years ago
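A common way to implement the joint image+text query mentioned above is to normalize each query's CLIP embedding, average them, and rank photos by cosine similarity to that combined vector. A small sketch under that assumption (this is not necessarily how that repo does it):

```python
import numpy as np

def joint_query_search(query_embeds, photo_embeds, top_k=3):
    # Normalize each query embedding (image and/or text), then average
    # them into a single joint query vector.
    q = np.stack([v / np.linalg.norm(v) for v in query_embeds]).mean(axis=0)
    q /= np.linalg.norm(q)
    # Rank photos by cosine similarity to the joint query.
    p = photo_embeds / np.linalg.norm(photo_embeds, axis=1, keepdims=True)
    return np.argsort(-(p @ q))[:top_k]

# Toy example: photo 2 points between the two query directions,
# so it should rank first for the combined query.
photos = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
queries = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(joint_query_search(queries, photos, top_k=1))  # → [2]
```

With real CLIP embeddings the two queries would come from the image encoder and the text encoder respectively; because CLIP embeds both into a shared space, the same averaging works across modalities.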
- TaiSu (太素): a large-scale Chinese multimodal dataset (a hundred-million-scale Chinese vision-language pre-training dataset) ☆177 · Updated last year
- Computer-vision course project: an image-text retrieval system based on Chinese-CLIP ☆54 · Updated last year
- This project retrieves images that match an input text description. ☆33 · Updated last year
- Research Code for Multimodal-Cognition Team in Ant Group ☆136 · Updated 7 months ago
- mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023) ☆222 · Updated last year
- Implementation of our CVPR2022 paper, Negative-Aware Attention Framework for Image-Text Matching. ☆111 · Updated last year
- ☆62 · Updated last year
- X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages ☆307 · Updated last year
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆465 · Updated 2 years ago
- Train a model for Image Caption from ViT and GPT pretrained model ☆16 · Updated last year
- Implementation of 'End-to-End Transformer Based Model for Image Captioning' [AAAI 2022] ☆67 · Updated 8 months ago