InternGPT (iGPT) is an open-source demo platform for easily showcasing your AI models. It currently supports DragGAN, ChatGPT, ImageBind, multimodal chat in the style of GPT-4, SAM, interactive image editing, and more. Try it at igpt.opengvlab.com (an online demo system supporting DragGAN, ChatGPT, ImageBind, and SAM).
☆3,217 · Aug 20, 2024 · Updated last year
Alternatives and similar repositories for InternGPT
Users interested in InternGPT are comparing it to the libraries listed below.
- Unofficial Implementation of DragGAN - "Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold" (DragGAN 全功…) ☆4,970 · Jul 17, 2023 · Updated 2 years ago
- [CVPR2024 Highlight] [VideoChatGPT] ChatGPT with video understanding! And many more supported LMs such as miniGPT4, StableLM, and MOSS. ☆3,334 · Jan 18, 2025 · Updated last year
- VisionLLM Series ☆1,138 · Feb 27, 2025 · Updated last year
- ImageBind: One Embedding Space to Bind Them All ☆8,980 · Nov 21, 2025 · Updated 3 months ago
- Implementation of DragGAN: Interactive Point-based Manipulation on the Generative Image Manifold ☆2,145 · Jul 11, 2023 · Updated 2 years ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆11,167 · Nov 18, 2024 · Updated last year
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆24,478 · Aug 12, 2024 · Updated last year
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,936 · Mar 14, 2024 · Updated last year
- Open-sourced code for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/) ☆25,760 · Sep 2, 2024 · Updated last year
- Multimodal-GPT ☆1,518 · Jun 4, 2023 · Updated 2 years ago
- Official Code for DragGAN (SIGGRAPH 2023) ☆35,972 · May 18, 2024 · Updated last year
- Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and … ☆17,409 · Sep 5, 2024 · Updated last year
- [NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once" ☆4,772 · Aug 19, 2024 · Updated last year
- VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models ☆5,032 · Jan 9, 2026 · Updated last month
- mPLUG-Owl: The Powerful Multi-modal Large Language Model Family ☆2,539 · Apr 2, 2025 · Updated 10 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,838 · Jun 10, 2024 · Updated last year
- 🦦 Otter, a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing imp… ☆3,332 · Mar 5, 2024 · Updated last year
- Painter & SegGPT Series: Vision Foundation Models from BAAI ☆2,592 · Dec 6, 2024 · Updated last year
- Emu Series: Generative Multimodal Models from BAAI ☆1,765 · Jan 12, 2026 · Updated last month
- Open-source and strong foundation image recognition models. ☆3,591 · Feb 18, 2025 · Updated last year
- FlagAI (Fast LArge-scale General AI models) is a fast, easy-to-use and extensible toolkit for large-scale models. ☆3,881 · Nov 11, 2025 · Updated 3 months ago
- An open-source framework for training large multimodal models. ☆4,068 · Aug 31, 2024 · Updated last year
- AudioGPT: Understanding and Generating Speech, Music, Sound, and Talking Head ☆10,207 · Jul 6, 2024 · Updated last year
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,414 · Jun 2, 2025 · Updated 8 months ago
- Fast Segment Anything ☆8,268 · Jul 30, 2024 · Updated last year
- Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI…