[CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o performance.
☆9,879 · Updated Sep 22, 2025
Alternatives and similar repositories for InternVL
Users interested in InternVL are comparing it to the repositories listed below.
- Qwen3-VL is the multimodal large language model series developed by the Qwen team, Alibaba Cloud. ☆18,671 · Updated Jan 30, 2026
- ☆4,591 · Updated Sep 14, 2025
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆24,578 · Updated Aug 12, 2024
- The official repo of Qwen-VL (通义千问-VL), the chat & pretrained large vision language model proposed by Alibaba Cloud. ☆6,572 · Updated Aug 7, 2024
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions. ☆2,922 · Updated May 26, 2025
- A Gemini 2.5 Flash Level MLLM for Vision, Speech, and Full-Duplex Multimodal Live Streaming on Your Phone. ☆24,094 · Updated Mar 7, 2026
- Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks. ☆3,894 · Updated Mar 12, 2026
- Latest Advances on Multimodal Large Language Models. ☆17,466 · Updated Mar 12, 2026
- GPT4V-level open-source multi-modal model based on Llama3-8B. ☆2,434 · Updated Mar 3, 2025
- Use PEFT or full-parameter training to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3.5, DeepSeek-R1, GLM-5, InternLM3, Llama4, ...) and 300+ MLLMs (Qwen3-VL, …). ☆13,120 · Updated this week
- Solve Visual Understanding with Reinforced VLMs. ☆5,865 · Updated Mar 12, 2026
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and cloud. ☆3,778 · Updated Mar 12, 2026
- LAVIS - A One-stop Library for Language-Vision Intelligence. ☆11,189 · Updated Nov 18, 2024
- A state-of-the-art-level open visual language model | multimodal pretrained model. ☆6,728 · Updated May 29, 2024
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs (a usage sketch follows this list). ☆7,694 · Updated this week
- Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024). ☆68,351 · Updated Mar 10, 2026
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,990 · Updated Nov 7, 2025
- [ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection". ☆9,823 · Updated Aug 12, 2024
- An open source implementation of CLIP. ☆13,496 · Updated Mar 12, 2026
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding. ☆2,219 · Updated Dec 15, 2025
- A Next-Generation Training Engine Built for Ultra-Large MoE Models. ☆5,104 · Updated this week
- Witness the aha moment of VLM with less than $3. ☆4,035 · Updated May 19, 2025
- Qwen3 is the large language model series developed by the Qwen team, Alibaba Cloud. ☆26,899 · Updated Jan 9, 2026
- A high-throughput and memory-efficient inference and serving engine for LLMs (a usage sketch follows this list). ☆73,479 · Updated this week
- Fast and memory-efficient exact attention. ☆22,832 · Updated this week
- Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and … ☆17,464 · Updated Sep 5, 2024
- EVA Series: Visual Representation Fantasies from BAAI. ☆2,652 · Updated Aug 1, 2024
- Official repo for "Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models". ☆3,334 · Updated May 4, 2024
- CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image (a usage sketch follows this list). ☆32,861 · Updated Feb 18, 2026
- Open-Sora: Democratizing Efficient Video Production for All. ☆28,687 · Updated Apr 30, 2025
- [EMNLP 2024] Video-LLaVA: Learning United Visual Representation by Alignment Before Projection. ☆3,463 · Updated Dec 3, 2024
- Official release of the InternLM series (InternLM, InternLM2, InternLM2.5, InternLM3). ☆7,165 · Updated Oct 30, 2025
- Next-Token Prediction is All You Need. ☆2,370 · Updated Jan 12, 2026
- EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL. ☆4,721 · Updated Mar 10, 2026
- The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained mode… ☆18,681 · Updated Mar 5, 2026
- PyTorch code and models for the DINOv2 self-supervised learning method. ☆12,505 · Updated this week
- DeepSeek-VL: Towards Real-World Vision-Language Understanding. ☆4,080 · Updated Apr 24, 2024
- Official repository of 'Visual-RFT: Visual Reinforcement Fine-Tuning' & 'Visual-ARFT: Visual Agentic Reinforcement Fine-Tuning'. ☆2,312 · Updated Oct 29, 2025
- A fork to add multimodal model training to open-r1. ☆1,503 · Updated Feb 8, 2025
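
For the LMDeploy entry above, a minimal sketch of running a vision-language model such as InternVL through LMDeploy's `pipeline` API, following the pattern in the LMDeploy documentation; the model ID and image URL are illustrative placeholders, not a definitive deployment recipe:

```python
# Minimal LMDeploy vision-language pipeline sketch (assumes `pip install lmdeploy`).
# The model ID and image URL are placeholders chosen for illustration.
from lmdeploy import pipeline
from lmdeploy.vl import load_image

pipe = pipeline('OpenGVLab/InternVL2-8B')          # downloads weights on first use
image = load_image('https://example.com/cat.jpg')  # placeholder URL; any RGB image works
response = pipe(('Describe this image.', image))   # VLM input is a (prompt, image) tuple
print(response.text)
```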
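For the vLLM entry, a minimal offline-generation sketch using its documented `LLM`/`SamplingParams` API; the model name is a placeholder and assumes a CUDA GPU with enough memory for the chosen weights:

```python
# Minimal vLLM offline inference sketch (assumes `pip install vllm`).
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")  # placeholder model; any HF causal LM works
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Explain what a vision-language model is."], params)
print(outputs[0].outputs[0].text)  # first completion of the first prompt
```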
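And for the CLIP entry, a zero-shot classification sketch following the API shown in the openai/CLIP README; the image path and label set are placeholders:

```python
# Zero-shot image classification with OpenAI CLIP
# (assumes `pip install git+https://github.com/openai/CLIP.git`).
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)  # placeholder image
text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)       # placeholder labels

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)  # similarity logits -> label probabilities
print(probs)
```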