[CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal chat model approaching GPT-4o performance.
☆9,879 · Updated Sep 22, 2025
Alternatives and similar repositories for InternVL
Users that are interested in InternVL are comparing it to the libraries listed below
- Qwen3-VL is the multimodal large language model series developed by Qwen team, Alibaba Cloud. ☆18,585 · Updated Jan 30, 2026
- ☆4,582 · Updated Sep 14, 2025
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆24,543 · Updated Aug 12, 2024
- The official repo of Qwen-VL (通义千问-VL) chat & pretrained large vision language model proposed by Alibaba Cloud. ☆6,559 · Updated Aug 7, 2024
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions ☆2,922 · Updated May 26, 2025
- A Gemini 2.5 Flash Level MLLM for Vision, Speech, and Full-Duplex Multimodal Live Streaming on Your Phone ☆24,094 · Updated this week
- GPT4V-level open-source multi-modal model based on Llama3-8B ☆2,431 · Updated Mar 3, 2025
- Latest Advances on Multimodal Large Language Models ☆17,416 · Updated Mar 6, 2026
- Open-source evaluation toolkit of large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks ☆3,894 · Updated this week
- Use PEFT or full-parameter training to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3.5, DeepSeek-R1, GLM-5, InternLM3, Llama4, ...) and 300+ MLLMs (Qwen3-VL, …) ☆12,956 · Updated this week
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and cloud. ☆3,778 · Updated this week
- A state-of-the-art-level open visual language model | multimodal pretrained model ☆6,724 · Updated May 29, 2024
- Solve Visual Understanding with Reinforced VLMs ☆5,865 · Updated this week
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆11,183 · Updated Nov 18, 2024
- Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024) ☆68,351 · Updated this week
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,670 · Updated this week
- [ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection" ☆9,823 · Updated Aug 12, 2024
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,988 · Updated Nov 7, 2025
- A Next-Generation Training Engine Built for Ultra-Large MoE Models ☆5,095 · Updated this week
- An open source implementation of CLIP. ☆13,496 · Updated this week
- [ECCV2024] Video Foundation Models & Data for Multimodal Understanding ☆2,208 · Updated Dec 15, 2025
- Qwen3 is the large language model series developed by Qwen team, Alibaba Cloud. ☆26,852 · Updated Jan 9, 2026
- Witness the aha moment of VLM with less than $3. ☆4,036 · Updated May 19, 2025
- Official repo for "Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models" ☆3,335 · Updated May 4, 2024
- Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and … ☆17,450 · Updated Sep 5, 2024
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆72,827 · Updated this week
- Open-Sora: Democratizing Efficient Video Production for All ☆28,658 · Updated Apr 30, 2025
- EVA Series: Visual Representation Fantasies from BAAI ☆2,647 · Updated Aug 1, 2024
- Fast and memory-efficient exact attention ☆22,719 · Updated this week
- DeepSeek-VL: Towards Real-World Vision-Language Understanding ☆4,077 · Updated Apr 24, 2024
- Official release of InternLM series (InternLM, InternLM2, InternLM2.5, InternLM3). ☆7,165 · Updated Oct 30, 2025
- PyTorch code and models for the DINOv2 self-supervised learning method. ☆12,505 · Updated this week
- CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image ☆32,781 · Updated Feb 18, 2026
- The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained mode… ☆18,610 · Updated Mar 5, 2026
- 【TMM 2025🔥】 Mixture-of-Experts for Large Vision-Language Models ☆2,303 · Updated Jul 15, 2025
- 【EMNLP 2024🔥】Video-LLaVA: Learning United Visual Representation by Alignment Before Projection ☆3,458 · Updated Dec 3, 2024
- This project aims to reproduce Sora (OpenAI's T2V model); we hope the open-source community will contribute to it. ☆12,135 · Updated Oct 29, 2025
- Next-Token Prediction is All You Need ☆2,367 · Updated Jan 12, 2026
- [CVPR2024 Highlight][VideoChatGPT] ChatGPT with video understanding! And many more supported LMs such as miniGPT4, StableLM, and MOSS. ☆3,333 · Updated Jan 18, 2025