codefuse-ai / CodeFuse-MFT-VLM
☆39 · Updated 7 months ago
Alternatives and similar repositories for CodeFuse-MFT-VLM
Users interested in CodeFuse-MFT-VLM are comparing it to the repositories listed below
- ☆79 · Updated last year
- GLM Series Edge Models ☆141 · Updated 3 months ago
- ☆56 · Updated last year
- Our 2nd-gen LMM ☆33 · Updated last year
- Empirical Study Towards Building An Effective Multi-Modal Large Language Model ☆22 · Updated last year
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ☆60 · Updated 7 months ago
- Multimodal chatbot with integrated computer vision capabilities; our 1st-gen LMM ☆101 · Updated last year
- Search, organize, discover anything! ☆48 · Updated last year
- A simple MLLM that surpasses QwenVL-Max using open-source data only, built on a 14B LLM ☆37 · Updated 8 months ago
- Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, images, and video data. ☆236 · Updated 3 months ago
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc. ☆38 · Updated last year
- zero: training-free LLM parameter tuning ☆31 · Updated last year
- ☆29 · Updated 9 months ago
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆126 · Updated 7 months ago
- ☆68 · Updated last year
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆84 · Updated 7 months ago
- ☆173 · Updated 4 months ago
- Train a LLaVA model with better Chinese support, with open-source training code and data. ☆59 · Updated 9 months ago
- SUS-Chat: Instruction tuning done right ☆48 · Updated last year
- An open-source multimodal large language model based on baichuan-7b ☆73 · Updated last year
- [ACL2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models ☆62 · Updated 2 weeks ago
- ☆28 · Updated last year
- The simplest reproduction of R1-style results on small models, illustrating the core essence of O1-like models and DeepSeek R1: "Think is all you need." Experiments show that for strong reasoning ability, the think (reasoning-process) content is central to AGI/ASI. ☆45 · Updated 3 months ago
- ☆73 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆132 · Updated 11 months ago
- Research Code for Multimodal-Cognition Team in Ant Group ☆147 · Updated 2 weeks ago
- LongQLoRA: Extend Context Length of LLMs Efficiently ☆165 · Updated last year
- mllm-npu: training multimodal large language models on Ascend NPUs ☆90 · Updated 9 months ago
- This project uses the LLaVA 1.6 multimodal model to implement text-to-image and image-to-image search. ☆23 · Updated last year
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024 ☆57 · Updated 6 months ago