QwenLM / Qwen-VL
The official repo of Qwen-VL (通义千问-VL), the chat & pretrained large vision-language model proposed by Alibaba Cloud.
☆5,055 · Updated 3 months ago
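For context, inference with this repo follows its Hugging Face quickstart. The sketch below is a minimal version of that flow; it assumes the Qwen/Qwen-VL-Chat checkpoint, a CUDA device, and a placeholder image path (demo.jpeg).

```python
# Minimal Qwen-VL-Chat inference sketch, in the spirit of the repo's quickstart.
# Requires: pip install transformers accelerate (a CUDA GPU is assumed).
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code pulls in Qwen-VL's custom modeling/tokenization code.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-VL-Chat", device_map="cuda", trust_remote_code=True
).eval()

# Interleave image and text inputs; "demo.jpeg" is a placeholder path.
query = tokenizer.from_list_format([
    {"image": "demo.jpeg"},
    {"text": "What is shown in this picture?"},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
```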
Related projects
Alternatives and complementary repositories for Qwen-VL
- a state-of-the-art-level open visual language model | multimodal pre-trained model ☆6,096 · Updated 5 months ago
- Use PEFT or full-parameter training to finetune 400+ LLMs or 100+ MLLMs. (LLM: Qwen2.5, Llama3.2, GLM4, Internlm2.5, Yi1.5, Mistral, Baichuan2, D… ☆4,289 · Updated this week
- An efficient, flexible and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qwen, Mistral, ...) ☆3,977 · Updated last week
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o performance ☆6,055 · Updated this week
- InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output ☆2,525 · Updated last month
- GPT4V-level open-source multi-modal model based on Llama3-8B ☆2,122 · Updated 2 months ago
- Qwen2-VL is the multimodal large language model series developed by the Qwen team, Alibaba Cloud. ☆3,153 · Updated last month
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs (a minimal serving sketch follows this list). ☆4,669 · Updated this week
- OpenCompass is an LLM evaluation platform supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, … ☆4,141 · Updated this week
- Official release of InternLM2.5 base and chat models. 1M context support ☆6,482 · Updated this week
- Retrieval and Retrieval-augmented LLMs ☆7,613 · Updated this week
- Mixture-of-Experts for Large Vision-Language Models ☆1,989 · Updated 6 months ago
- mPLUG-Owl: The Powerful Multi-modal Large Language Model Family ☆2,324 · Updated last month
- An Open-source Toolkit for LLM Development ☆2,721 · Updated 5 months ago
- Agent framework and applications built upon Qwen>=2.0, featuring Function Calling, Code Interpreter, RAG, and Chrome extension. ☆3,505 · Updated last month
- [EMNLP 2024] Video-LLaVA: Learning United Visual Representation by Alignment Before Projection ☆3,003 · Updated last month
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆20,286 · Updated 3 months ago
- Code and models for NExT-GPT: Any-to-Any Multimodal Large Language Model ☆3,307 · Updated 2 weeks ago
- Emu Series: Generative Multimodal Models from BAAI ☆1,662 · Updated last month
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆9,943 · Updated this week
- [EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding ☆2,806 · Updated 5 months ago
- DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model ☆3,597 · Updated last month
- A series of large language models developed by Baichuan Intelligent Technology ☆4,092 · Updated last week
- mPLUG-DocOwl: Modularized Multimodal Large Language Model for Document Understanding ☆1,593 · Updated last month
- An open-source framework for training large multimodal models. ☆3,750 · Updated 2 months ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,760 · Updated 8 months ago
- ModelScope-Agent: An agent framework connecting models in ModelScope with the world ☆2,722 · Updated last week
- DeepSeek-VL: Towards Real-World Vision-Language Understanding ☆2,077 · Updated 6 months ago
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,638 · Updated 3 months ago
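As referenced in the LMDeploy entry above, serving one of these models with LMDeploy can be quite short. The sketch below uses its offline-inference `pipeline` entry point; the internlm/internlm2_5-7b-chat model ID is only an example, and any supported checkpoint could stand in.

```python
# Minimal offline-inference sketch with LMDeploy's pipeline API.
# Requires: pip install lmdeploy. The model ID below is an example.
from lmdeploy import pipeline

pipe = pipeline("internlm/internlm2_5-7b-chat")
responses = pipe(["Introduce the InternLM2.5 model in one sentence."])
print(responses[0].text)
```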