VisionLLM Series
☆1,139 · Updated Feb 27, 2025
Alternatives and similar repositories for VisionLLM
Users interested in VisionLLM are comparing it to the libraries listed below.
- InternGPT (iGPT) is an open source demo platform where you can easily showcase your AI models. Now it supports DragGAN, ChatGPT, ImageBin… ☆3,215 · Updated Aug 20, 2024
- Project Page for "LISA: Reasoning Segmentation via Large Language Model" ☆2,604 · Updated Feb 16, 2025
- (ECCVW 2025) GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest ☆551 · Updated Jun 3, 2025
- Recent LLM-based CV and related works. Welcome to comment/contribute! ☆873 · Updated Mar 8, 2025
- [CVPR 2023] Official Implementation of X-Decoder for generalized decoding for pixel, image and language ☆1,341 · Updated Oct 5, 2023
- EVA Series: Visual Representation Fantasies from BAAI ☆2,652 · Updated Aug 1, 2024
- Grounded Language-Image Pre-training ☆2,580 · Updated Jan 24, 2024
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,990 · Updated Nov 7, 2025
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆506 · Updated Aug 9, 2024
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o (an open-source multimodal dialogue model approaching GPT-4o performance) ☆9,904 · Updated Sep 22, 2025
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆11,189 · Updated Nov 18, 2024
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆24,578 · Updated Aug 12, 2024
- Painter & SegGPT Series: Vision Foundation Models from BAAI ☆2,592 · Updated Dec 6, 2024
- SVIT: Scaling up Visual Instruction Tuning ☆166 · Updated Jun 20, 2024
- [NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once" ☆4,773 · Updated Aug 19, 2024
- Official implementation of SEED-LLaMA (ICLR 2024). ☆642 · Updated Sep 21, 2024
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆949 · Updated Aug 5, 2025
- Emu Series: Generative Multimodal Models from BAAI ☆1,772 · Updated Jan 12, 2026
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆605 · Updated Oct 6, 2024
- Latest Advances on Multimodal Large Language Models ☆17,466 · Updated Mar 12, 2026
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,941 · Updated Aug 15, 2024
- Official implementation for the paper "Prompt Pre-Training with Over Twenty-Thousand Classes for Open-Vocabulary Visual Recognition" ☆259 · Updated May 3, 2024
- [CVPR 2024 Highlight] [VideoChatGPT] ChatGPT with video understanding! And many more supported LMs such as miniGPT4, StableLM, and MOSS. ☆3,335 · Updated Jan 18, 2025
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆368 · Updated Jul 24, 2025
- NeurIPS 2024 Paper: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing ☆579 · Updated Oct 20, 2024
- [CVPR 2023 Highlight] InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions ☆2,802 · Updated Mar 25, 2025
- Multimodal-GPT ☆1,517 · Updated Jun 4, 2023
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,931 · Updated Mar 14, 2024
- PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models" ☆209 · Updated Jan 8, 2025
- An open-source framework for training large multimodal models. ☆4,076 · Updated Aug 31, 2024
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding ☆2,219 · Updated Dec 15, 2025
- PyTorch implementation of InstructDiffusion, a unifying and generic framework for aligning computer vision tasks with human instructions. ☆443 · Updated May 14, 2024
- Official Repo For OMG-LLaVA and OMG-Seg codebase [CVPR-24 and NeurIPS-24] ☆1,344 · Updated Oct 15, 2025
- An open source implementation of CLIP. ☆13,528 · Updated Mar 12, 2026
- Pix2Seq codebase: multi-tasks with generative modeling (autoregressive and diffusion) ☆939 · Updated Nov 7, 2023
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 ☆1,824 · Updated Nov 27, 2025