mbzuai-oryx / groundingLMM
[CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses that are seamlessly integrated with object segmentation masks.
☆852 · Updated 4 months ago
Alternatives and similar repositories for groundingLMM:
Users interested in groundingLMM are comparing it to the libraries listed below
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆477 · Updated 7 months ago
- GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest ☆524 · Updated 9 months ago
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆792 · Updated 7 months ago
- VisionLLM Series ☆1,028 · Updated 3 weeks ago
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆947 · Updated last week
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ☆771 · Updated 7 months ago
- Recent LLM-based CV and related works. Welcome to comment/contribute! ☆859 · Updated 2 weeks ago
- When do we not need larger vision models? ☆380 · Updated last month
- ☆502 · Updated 4 months ago
- [ECCV 2024] Tokenize Anything via Prompting ☆571 · Updated 3 months ago
- CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks ☆398 · Updated 3 weeks ago
- [ICCV 2023] Official implementation of the paper "A Simple Framework for Open-Vocabulary Segmentation and Detection" ☆697 · Updated last year
- ☆772 · Updated 8 months ago
- [NeurIPS 2023] Code release for "Hierarchical Open-vocabulary Universal Image Segmentation" ☆283 · Updated last year
- LLaVA-Interactive-Demo ☆366 · Updated 7 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆315 · Updated 8 months ago
- Official PyTorch implementation of ODISE: Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models [CVPR 2023 Highlight] ☆888 · Updated 8 months ago
- ☆602 · Updated last year
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆600 · Updated last month
- PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models" ☆193 · Updated 2 months ago
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆256 · Updated last year
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆369 · Updated this week
- PyTorch Implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆574 · Updated last year
- [CVPR 2024] Official implementation of the paper "Visual In-context Learning" ☆445 · Updated 11 months ago
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ☆506 · Updated 11 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆733 · Updated last year
- ☆319 · Updated last year
- 【ICLR 2024 🔥】 Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆795 · Updated 11 months ago
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆332 · Updated 2 months ago
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ☆914 · Updated 2 months ago