WatchTower-Liu / VLM-learning
Building a VLM model from basic modules.
☆18 · Updated last year
Alternatives and similar repositories for VLM-learning
Users interested in VLM-learning are comparing it to the libraries listed below.
- ☆30 · Updated last year
- Build a simple basic multimodal large model from scratch. 🤖 ☆47 · Updated last year
- Research Code for the Multimodal-Cognition Team in Ant Group ☆169 · Updated last month
- Train a LLaVA model with better Chinese support, with the training code and data open-sourced. ☆76 · Updated last year
- Toward Universal Multimodal Embedding ☆66 · Updated 3 months ago
- [ArXiv] PDF-Wukong: A Large Multimodal Model for Efficient Long PDF Reading with End-to-End Sparse Sampling ☆127 · Updated 5 months ago
- An ecosystem of large language model and multimodal model projects, mainly covering cross-modal search, speculative decoding, QAT quantization, multimodal quantization, ChatBot, and OCR ☆193 · Updated 3 months ago
- A multimodal large model implemented from scratch, named Reyes (睿视): R for 睿 (insight), eyes for 眼 (eyes). Reyes has 8B parameters, uses InternViT-300M-448px-V2_5 as its vision encoder and Qwen2.5-7B-Instruct as its language model, and connects them through a two-layer MLP projection … ☆26 · Updated 9 months ago
- ☆74 · Updated 6 months ago
- [ACL 2025 Oral] 🔥🔥 MegaPairs: Massive Data Synthesis for Universal Multimodal Retrieval ☆234 · Updated 2 weeks ago
- A collection of multimodal (MM) + Chat resources ☆278 · Updated 3 months ago
- This project aims to collect and collate various datasets for multimodal large model training, including but not limited to pre-training … ☆59 · Updated 6 months ago
- [CVPR 2024] LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge ☆152 · Updated 2 months ago
- [ACM MM 2025] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆95 · Updated this week
- AAAI 2024: Visual Instruction Generation and Correction ☆93 · Updated last year
- WWW2025 Multimodal Intent Recognition for Dialogue Systems Challenge ☆127 · Updated last year
- The Hugging Face implementation of the Fine-grained Late-interaction Multi-modal Retriever. ☆101 · Updated 5 months ago
- Demo of Finetuning a Multimodal LLM with LLaMA-Factory ☆53 · Updated last year
- Workshop on Foundation Model: 1st foundation model challenge Track1 codebase (Open TransMind v1.0) ☆18 · Updated 2 years ago
- Fine-tuning Qwen2.5-VL for vision-language tasks | Optimized for vision understanding | LoRA & PEFT support. ☆141 · Updated 9 months ago
- The official code for the NeurIPS 2024 paper: Harmonizing Visual Text Comprehension and Generation ☆129 · Updated last year
- The llava-Qwen2-7B-Instruct-Chinese-CLIP model enhances Chinese text recognition and meme-connotation recognition, approaching the recognition level of gpt4o and claude-3.5-sonnet! ☆27 · Updated last year
- Notes on multimodal knowledge for large language model (LLM) algorithm/application engineers ☆250 · Updated last year
- ☆57 · Updated last year
- The official repo for “TextCoT: Zoom In for Enhanced Multimodal Text-Rich Image Understanding”. ☆43 · Updated last year
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE Knowledge Distillation ☆208 · Updated 7 months ago
- Evaluation code and datasets for the ACL 2024 paper VISTA: Visualized Text Embedding for Universal Multi-Modal Retrieval. The original c… ☆43 · Updated last year
- A new generation of CLIP with fine-grained discrimination capability, ICML 2025 ☆472 · Updated 3 weeks ago
- Train InternViT-6B in MMSegmentation and MMDetection with DeepSpeed ☆105 · Updated last year
- Margin-based Vision Transformer ☆55 · Updated last month