mbzuai-oryx / groundingLMM
[CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses that are seamlessly integrated with object segmentation masks.
☆867 · Updated 5 months ago
Alternatives and similar repositories for groundingLMM:
Users interested in groundingLMM are comparing it to the repositories listed below:
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆483 · Updated 8 months ago
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆812 · Updated 9 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆319 · Updated 9 months ago
- VisionLLM Series ☆1,054 · Updated 2 months ago
- ☆778 · Updated 9 months ago
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ☆800 · Updated 8 months ago
- Recent LLM-based CV and related works. Welcome to comment/contribute! ☆861 · Updated last month
- GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest ☆527 · Updated 10 months ago
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆376 · Updated 2 weeks ago
- When do we not need larger vision models? ☆391 · Updated 2 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆739 · Updated last year
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆256 · Updated last year
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ☆519 · Updated last year
- ☆611 · Updated last year
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆363 · Updated 5 months ago
- LLaVA-Interactive-Demo ☆369 · Updated 9 months ago
- ☆515 · Updated 5 months ago
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆614 · Updated 3 months ago
- [CVPR 2024] OneLLM: One Framework to Align All Modalities with Language ☆639 · Updated 6 months ago
- [CVPR 2024] Official implementation of the paper "Visual In-context Learning" ☆466 · Updated last year
- [ECCV 2024] Tokenize Anything via Prompting ☆583 · Updated 4 months ago
- [ICCV 2023] Official implementation of the paper "A Simple Framework for Open-Vocabulary Segmentation and Detection" ☆708 · Updated last year
- LLM2CLIP makes a SOTA pretrained CLIP model even more capable. ☆508 · Updated last month
- PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models" ☆198 · Updated 3 months ago
- ☆328 · Updated last year
- (TPAMI 2024) A Survey on Open Vocabulary Learning ☆925 · Updated last month
- A Framework of Small-scale Large Multimodal Models ☆808 · Updated last week
- 【ICLR 2024 🔥】 Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆804 · Updated last year
- Project Page for "LISA: Reasoning Segmentation via Large Language Model" ☆2,185 · Updated 2 months ago
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ☆941 · Updated 3 months ago