microsoft / SoM
[arXiv 2023] Set-of-Mark Prompting for GPT-4V and LMMs
☆1,436 · Updated 11 months ago
Alternatives and similar repositories for SoM
Users interested in SoM are comparing it to the repositories listed below.
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills · ☆755 · Updated last year
- Official repo for MM-REACT · ☆954 · Updated last year
- AI agent using GPT-4V(ision) capable of using a mouse/keyboard to interact with web UI · ☆1,048 · Updated 8 months ago
- [ICML'24] SeeAct is a system for generalist web agents that autonomously carry out tasks on any given website, with a focus on large mult… · ☆768 · Updated 6 months ago
- PyTorch Implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" · ☆656 · Updated last year
- [NeurIPS'23 Spotlight] "Mind2Web: Towards a Generalist Agent for the Web" -- the first LLM-based web agent and benchmark for generalist w… · ☆853 · Updated 4 months ago
- VisionLLM Series · ☆1,094 · Updated 5 months ago
- ☆780 · Updated last year
- LLaVA-Interactive-Demo · ☆376 · Updated last year
- Implementation of the ScreenAI model from the paper: "A Vision-Language Model for UI and Infographics Understanding" · ☆355 · Updated this week
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) · ☆829 · Updated last year
- Emu Series: Generative Multimodal Models from BAAI · ☆1,741 · Updated 10 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… · ☆903 · Updated 2 months ago
- A family of lightweight multimodal models. · ☆1,024 · Updated 8 months ago
- [EMNLP 2024 🔥] Video-LLaVA: Learning United Visual Representation by Alignment Before Projection · ☆3,330 · Updated 8 months ago
- [ACL 2024 🔥] Video-ChatGPT is a video conversation model capable of generating meaningful conversation about videos. It combines the cap… · ☆1,412 · Updated 4 months ago
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… · ☆1,624 · Updated this week
- [TMM 2025 🔥] Mixture-of-Experts for Large Vision-Language Models · ☆2,205 · Updated 3 weeks ago
- 🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3) · ☆839 · Updated this week
- ☆788 · Updated last year
- Strong and Open Vision Language Assistant for Mobile Devices · ☆1,250 · Updated last year
- [CVPR 2025] Open-source, End-to-end, Vision-Language-Action model for GUI Agent & Computer Use. · ☆1,409 · Updated 2 months ago
- Project Page for "LISA: Reasoning Segmentation via Large Language Model" · ☆2,335 · Updated 5 months ago
- Code repo for "WebArena: A Realistic Web Environment for Building Autonomous Agents" · ☆1,084 · Updated 6 months ago
- BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs · ☆511 · Updated 2 years ago
- GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest · ☆539 · Updated 2 months ago
- Code for the Molmo Vision-Language Model · ☆610 · Updated 7 months ago
- Recent LLM-based CV and related works. Welcome to comment/contribute! · ☆870 · Updated 5 months ago
- GPT4Tools is an intelligent system that can automatically decide, control, and utilize different visual foundation models, allowing the u… · ☆774 · Updated last year
- The model, data and code for the visual GUI Agent SeeClick · ☆411 · Updated 3 weeks ago