microsoft / SoM
[arXiv 2023] Set-of-Mark Prompting for GPT-4V and LMMs
☆1,494 · Updated last year
Alternatives and similar repositories for SoM
Users interested in SoM are comparing it to the repositories listed below.
- AI agent using GPT-4V(ision) capable of using a mouse/keyboard to interact with web UI ☆1,060 · Updated last year
- Official repo for MM-REACT ☆962 · Updated last year
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆762 · Updated last year
- [NeurIPS'23 Spotlight] "Mind2Web: Towards a Generalist Agent for the Web" -- the first LLM-based web agent and benchmark for generalist w… ☆920 · Updated last month
- [ICML'24] SeeAct is a system for generalist web agents that autonomously carry out tasks on any given website, with a focus on large mult… ☆806 · Updated 10 months ago
- ☆789 · Updated last year
- VisionLLM Series ☆1,130 · Updated 9 months ago
- LLaVA-Interactive-Demo ☆379 · Updated last year
- PyTorch Implementation of "V* : Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆682 · Updated last year
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ☆851 · Updated last year
- Multimodal-GPT ☆1,517 · Updated 2 years ago
- [CVPR 2025] Open-source, End-to-end, Vision-Language-Action model for GUI Agent & Computer Use. ☆1,582 · Updated 6 months ago
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,312 · Updated last year
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆929 · Updated 4 months ago
- ☆799 · Updated last year
- 【EMNLP 2024🔥】Video-LLaVA: Learning United Visual Representation by Alignment Before Projection ☆3,413 · Updated last year
- Implementation of the ScreenAI model from the paper: "A Vision-Language Model for UI and Infographics Understanding" ☆369 · Updated last month
- GPT4Tools is an intelligent system that can automatically decide, control, and utilize different visual foundation models, allowing the u… ☆773 · Updated last year
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ☆1,068 · Updated 10 months ago
- Emu Series: Generative Multimodal Models from BAAI ☆1,760 · Updated last year
- [ACL 2024 🔥] Video-ChatGPT is a video conversation model capable of generating meaningful conversation about videos. It combines the cap… ☆1,476 · Updated 4 months ago
- Code repo for "WebArena: A Realistic Web Environment for Building Autonomous Agents" ☆1,243 · Updated 2 weeks ago
- The model, data and code for the visual GUI Agent SeeClick ☆445 · Updated 5 months ago
- BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs ☆510 · Updated 2 years ago
- Caption-Anything is a versatile tool combining image segmentation, visual captioning, and ChatGPT, generating tailored captions with dive… ☆1,770 · Updated 2 years ago
- Project Page for "LISA: Reasoning Segmentation via Large Language Model" ☆2,510 · Updated 9 months ago
- [TLLM'23] PandaGPT: One Model To Instruction-Follow Them All ☆835 · Updated 2 years ago
- 【TMM 2025🔥】Mixture-of-Experts for Large Vision-Language Models ☆2,283 · Updated 4 months ago
- NeurIPS 2025 Spotlight; ICLR2024 Spotlight; CVPR 2024; EMNLP 2024 ☆1,770 · Updated 2 weeks ago
- A family of lightweight multimodal models. ☆1,048 · Updated last year