[arXiv 2023] Set-of-Mark Prompting for GPT-4V and LMMs
☆1,517 · Updated Aug 19, 2024
Alternatives and similar repositories for SoM
Users interested in SoM are comparing it to the repositories listed below.
- AI agent using GPT-4V(ision) capable of using a mouse/keyboard to interact with web UI ☆1,066 · Updated Dec 9, 2024
- [COLM 2024] List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs ☆145 · Updated Aug 23, 2024
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆945 · Updated Aug 5, 2025
- [NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once" ☆4,772 · Updated Aug 19, 2024
- Unofficial implementation and experiments related to Set-of-Mark (SoM) 👁️ ☆88 · Updated Oct 20, 2023
- [ECCV 2024] Official implementation of the paper "Semantic-SAM: Segment and Recognize Anything at Any Granularity" ☆2,808 · Updated Jul 10, 2025
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆24,478 · Updated Aug 12, 2024
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions ☆2,921 · Updated May 26, 2025
- A state-of-the-art open visual language model (multimodal pre-trained model) ☆6,723 · Updated May 29, 2024
- [CVPR 2023] Official implementation of X-Decoder for generalized decoding for pixel, image and language ☆1,343 · Updated Oct 5, 2023
- [ECCVW 2025] GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest ☆551 · Updated Jun 3, 2025
- Project page for "LISA: Reasoning Segmentation via Large Language Model" ☆2,589 · Updated Feb 16, 2025
- [ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection" ☆9,760 · Updated Aug 12, 2024
- [ICCV 2023] VLPart: Going Denser with Open-Vocabulary Part Segmentation ☆393 · Updated Sep 19, 2023
- [ICCV 2023] Official implementation of the paper "A Simple Framework for Open-Vocabulary Segmentation and Detection" ☆748 · Updated Jan 22, 2024
- ☆4,577 · Updated Sep 14, 2025
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,986 · Updated Nov 7, 2025
- Emu Series: Generative Multimodal Models from BAAI ☆1,765 · Updated Jan 12, 2026
- An open-source framework for training large multimodal models. ☆4,068 · Updated Aug 31, 2024
- Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and … ☆17,409 · Updated Sep 5, 2024
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆11,167 · Updated Nov 18, 2024
- Grounded Language-Image Pre-training ☆2,572 · Updated Jan 24, 2024
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆505 · Updated Aug 9, 2024
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆763 · Updated Feb 1, 2024
- [CVPR 2024] Official implementation of the paper "Visual In-context Learning" ☆529 · Updated Apr 8, 2024
- GPT-4V in Wonderland: LMMs as Smartphone Agents ☆135 · Updated Jul 17, 2024
- The official repo of Qwen-VL (通义千问-VL), the chat & pretrained large vision language model proposed by Alibaba Cloud ☆6,538 · Updated Aug 7, 2024
- An Open-source Toolkit for LLM Development ☆2,805 · Updated Jan 13, 2025
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 ☆1,811 · Updated Nov 27, 2025
- Painter & SegGPT Series: Vision Foundation Models from BAAI ☆2,592 · Updated Dec 6, 2024
- [EMNLP 2024 🔥] Video-LLaVA: Learning United Visual Representation by Alignment Before Projection ☆3,450 · Updated Dec 3, 2024
- ☆643 · Updated Feb 15, 2024
- PyTorch implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆690 · Updated Jan 7, 2024
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆336 · Updated Jul 17, 2024
- LLaVA-Interactive-Demo ☆380 · Updated Jul 25, 2024
- ☆8,685 · Updated Oct 9, 2024
- PyTorch code and models for the DINOv2 self-supervised learning method ☆12,427 · Updated this week
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,936 · Updated Mar 14, 2024
- Latest Advances on Multimodal Large Language Models ☆17,355 · Updated Feb 23, 2026