thu-ml / zh-clip
☆69 · Updated 2 years ago
Alternatives and similar repositories for zh-clip
Users interested in zh-clip are comparing it to the libraries listed below.
- Chinese CLIP models with SOTA performance. ☆55 · Updated last year
- Multimodal chatbot with computer vision capabilities integrated, our 1st-gen LMM. ☆101 · Updated last year
- TaiSu (太素): a large-scale Chinese multimodal dataset (a billion-scale Chinese vision-language pre-training dataset). ☆189 · Updated last year
- Research Code for Multimodal-Cognition Team in Ant Group. ☆154 · Updated this week
- Lion: Kindling Vision Intelligence within Large Language Models. ☆52 · Updated last year
- ☆87 · Updated last year
- ☆57 · Updated last year
- An open-source multimodal large language model based on baichuan-7b. ☆73 · Updated last year
- Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Pre-training Dataset and Benchmarks. ☆296 · Updated last year
- Official code for "Modality Curation: Building Universal Embeddings for Advanced Multimodal Information Retrieval". ☆24 · Updated last week
- ☆112 · Updated 2 years ago
- ☆163 · Updated last year
- Precision Search through Multi-Style Inputs. ☆70 · Updated 2 months ago
- ☆15 · Updated last month
- Our 2nd-gen LMM. ☆33 · Updated last year
- MuLan: Adapting Multilingual Diffusion Models for 110+ Languages (adds multilingual support to any diffusion model without additional training). ☆135 · Updated 5 months ago
- ☆41 · Updated last month
- ☆181 · Updated last year
- A simple MLLM that surpasses QwenVL-Max using only open-source data with a 14B LLM. ☆37 · Updated 10 months ago
- [ACL 2025 Oral] 🔥🔥 MegaPairs: Massive Data Synthesis for Universal Multimodal Retrieval. ☆203 · Updated last month
- Empirical Study Towards Building An Effective Multi-Modal Large Language Model. ☆22 · Updated last year
- ☆173 · Updated 5 months ago
- ☆80 · Updated last year
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models. ☆62 · Updated 8 months ago
- [ICCV2025] A Token-level Text Image Foundation Model for Document Understanding. ☆105 · Updated last week
- ☆28 · Updated last year
- Large Multimodal Model. ☆15 · Updated last year
- A Chinese OFA model built on the transformers architecture. ☆135 · Updated 2 years ago
- ☆39 · Updated 10 months ago
- The official implementation of our paper "Cockatiel: Ensembling Synthetic and Human Preferenced Training for Detailed Video Caption". ☆34 · Updated last month