microsoft / SoM
Set-of-Mark Prompting for GPT-4V and LMMs
☆1,324 · Updated 7 months ago
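For context, Set-of-Mark (SoM) prompting overlays visually distinct numeric marks on image regions (e.g., masks from a segmentation model) so that an LMM such as GPT-4V can ground its answers by referring to mark numbers. The snippet below is a minimal sketch of the overlay step only, not the repository's actual code; the `draw_marks` helper and the `masks` input are hypothetical and assume binary masks produced by any segmentation model.

```python
# Minimal sketch of a Set-of-Mark style overlay (hypothetical helper, not SoM's own code).
# Assumes `masks` is a list of boolean numpy arrays of shape (H, W), e.g. from SAM.
import numpy as np
from PIL import Image, ImageDraw, ImageFont

def draw_marks(image: Image.Image, masks: list[np.ndarray]) -> Image.Image:
    """Overlay a numeric mark at the centroid of each mask region."""
    marked = image.copy()
    draw = ImageDraw.Draw(marked)
    font = ImageFont.load_default()
    for idx, mask in enumerate(masks, start=1):
        ys, xs = np.nonzero(mask)
        if len(xs) == 0:
            continue  # skip empty masks
        cx, cy = int(xs.mean()), int(ys.mean())
        label = str(idx)
        # White digit on a small black box so the mark stays legible on any background.
        bbox = draw.textbbox((cx, cy), label, font=font)
        draw.rectangle(bbox, fill="black")
        draw.text((cx, cy), label, fill="white", font=font)
    return marked

# The marked image is then sent to GPT-4V with a prompt such as
# "Which marked region contains the red mug?", and the model answers by mark number.
```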
Alternatives and similar repositories for SoM:
Users interested in SoM are comparing it to the libraries listed below.
- [ICML'24] SeeAct is a system for generalist web agents that autonomously carry out tasks on any given website, with a focus on large mult… ☆731 · Updated last month
- AI agent using GPT-4V(ision) capable of using a mouse/keyboard to interact with web UI ☆1,032 · Updated 3 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆730 · Updated last year
- [NeurIPS'23 Spotlight] "Mind2Web: Towards a Generalist Agent for the Web" ☆802 · Updated 7 months ago
- Code repo for "WebArena: A Realistic Web Environment for Building Autonomous Agents" ☆921 · Updated last month
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆852 · Updated 3 months ago
- PyTorch Implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆574 · Updated last year
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ☆780 · Updated 7 months ago
- VisionLLM Series ☆1,028 · Updated 3 weeks ago
- Mixture-of-Experts for Large Vision-Language Models ☆2,125 · Updated 3 months ago
- ☆770 · Updated 7 months ago
- LLaVA-Interactive-Demo ☆366 · Updated 7 months ago
- Project Page for "LISA: Reasoning Segmentation via Large Language Model" ☆2,085 · Updated last month
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,172 · Updated 11 months ago
- Official repo for MM-REACT ☆944 · Updated last year
- 【EMNLP 2024🔥】Video-LLaVA: Learning United Visual Representation by Alignment Before Projection ☆3,196 · Updated 3 months ago
- BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs ☆508 · Updated last year
- 🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3) ☆835 · Updated 8 months ago
- ☆772 · Updated 8 months ago
- GPT4Tools is an intelligent system that can automatically decide, control, and utilize different visual foundation models, allowing the u… ☆768 · Updated last year
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆477 · Updated 7 months ago
- GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest ☆524 · Updated 9 months ago
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ☆913 · Updated 2 months ago
- Official implementation of paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ☆864 · Updated 3 months ago
- [ACL 2024 🔥] Video-ChatGPT is a video conversation model capable of generating meaningful conversation about videos. It combines the cap… ☆1,318 · Updated 6 months ago
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,371 · Updated last week
- [ECCV2024] 🐙Octopus, an embodied vision-language model trained with RLEF, emerging superior in embodied visual planning and programming. ☆284 · Updated 10 months ago
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆600 · Updated last month
- Emu Series: Generative Multimodal Models from BAAI ☆1,695 · Updated 5 months ago
- Code and models for NExT-GPT: Any-to-Any Multimodal Large Language Model ☆3,466 · Updated 4 months ago