thu-ml / zh-clip
☆70 · Updated 2 years ago
Alternatives and similar repositories for zh-clip
Users who are interested in zh-clip are comparing it to the repositories listed below.
- Chinese CLIP models with SOTA performance. ☆57 · Updated 2 years ago
- TaiSu (太素): a 100-million-scale Chinese vision-language pre-training dataset. ☆191 · Updated last year
- Multimodal chatbot with integrated computer vision capabilities; our 1st-gen LMM. ☆101 · Updated last year
- Research code for the Multimodal-Cognition Team at Ant Group. ☆165 · Updated 2 months ago
- ☆87 · Updated last year
- Lion: Kindling Vision Intelligence within Large Language Models. ☆51 · Updated last year
- Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Pre-training Dataset and Benchmarks. ☆302 · Updated last year
- ☆79 · Updated last year
- An open-source multimodal large language model based on baichuan-7b. ☆72 · Updated last year
- ☆57 · Updated last year
- ☆113 · Updated 2 years ago
- ☆167 · Updated last year
- Empirical Study Towards Building an Effective Multi-Modal Large Language Model. ☆22 · Updated last year
- [ACL 2025 Oral] 🔥🔥 MegaPairs: Massive Data Synthesis for Universal Multimodal Retrieval. ☆221 · Updated 3 months ago
- ☆177 · Updated 7 months ago
- Official code for "Modality Curation: Building Universal Embeddings for Advanced Multimodal Information Retrieval". ☆33 · Updated 2 months ago
- A Chinese OFA model built on the transformers architecture. ☆137 · Updated 2 years ago
- ☆15 · Updated 3 months ago
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models. ☆62 · Updated 10 months ago
- A simple MLLM that surpassed QwenVL-Max using only open-source data in a 14B LLM. ☆38 · Updated last year
- The official repository of the dots.vlm1 instruct models proposed by rednote-hilab. ☆244 · Updated 2 weeks ago
- Precision Search through Multi-Style Inputs. ☆72 · Updated last month
- Our 2nd-gen LMM. ☆34 · Updated last year
- Toward Universal Multimodal Embedding. ☆56 · Updated last month
- Large Multimodal Model. ☆15 · Updated last year
- WuDaoMM: a multimodal data project. ☆74 · Updated 3 years ago
- A Chinese version of CLIP that achieves Chinese cross-modal retrieval and representation generation. ☆169 · Updated 2 years ago
- A dead-simple, modularized multi-modal training and finetuning framework, compatible with any LLaVA/Flamingo/QwenVL/MiniGemini etc. series … ☆19 · Updated last year
- Evaluation code and datasets for the ACL 2024 paper, VISTA: Visualized Text Embedding for Universal Multi-Modal Retrieval. The original c… ☆41 · Updated 9 months ago
- ☆54 · Updated 3 months ago