mbzuai-oryx / groundingLMM
[CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses that are seamlessly integrated with object segmentation masks.
☆863 · Updated 4 months ago
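As a rough illustration of the output pattern the description refers to: the generated text carries special segmentation tokens, and each token is paired positionally with a pixel-level mask. The sketch below is hypothetical throughout (`GroundedResponse`, `render_grounded`, and the `<SEG>` token convention are placeholder names, not GLaMM's actual API).

```python
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class GroundedResponse:
    """Hypothetical container: generated text plus one mask per <SEG> token."""
    text: str                 # e.g. "A <SEG> dog chases a <SEG> ball."
    masks: List[np.ndarray]   # binary HxW masks, in token order


def render_grounded(resp: GroundedResponse) -> None:
    # Each <SEG> token in the text corresponds positionally to one mask.
    assert resp.text.count("<SEG>") == len(resp.masks)
    for i, mask in enumerate(resp.masks):
        print(f"<SEG> #{i}: {int(mask.sum())} foreground pixels")


# Toy usage with dummy masks (a real model would predict these).
resp = GroundedResponse(
    text="A <SEG> dog chases a <SEG> ball.",
    masks=[np.zeros((4, 4), dtype=bool), np.ones((4, 4), dtype=bool)],
)
render_grounded(resp)
```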
Alternatives and similar repositories for groundingLMM:
Users interested in groundingLMM are comparing it to the libraries listed below.
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆479 · Updated 8 months ago
- GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest ☆525 · Updated 10 months ago
- VisionLLM Series ☆1,041 · Updated last month
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆802 · Updated 8 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆737 · Updated last year
- When do we not need larger vision models? ☆386 · Updated 2 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆318 · Updated 9 months ago
- ☆775 · Updated 9 months ago
- LLaVA-Interactive-Demo ☆368 · Updated 8 months ago
- Recent LLM-based CV and related works. Welcome to comment/contribute! ☆863 · Updated last month
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ☆514 · Updated 11 months ago
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆257 · Updated last year
- Official implementation of SEED-LLaMA (ICLR 2024). ☆609 · Updated 6 months ago
- [ECCV 2024] official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ☆783 · Updated 8 months ago
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆373 · Updated 2 weeks ago
- 【ICLR 2024 🔥】Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆803 · Updated last year
- [CVPR 2024] Official implementation of the paper "Visual In-context Learning" ☆460 · Updated last year
- ☆607 · Updated last year
- [CVPR 2024] A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆337 · Updated 3 months ago
- LLM2CLIP makes a SOTA pretrained CLIP model even more SOTA. ☆505 · Updated 3 weeks ago
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆607 · Updated 2 months ago
- ☆507 · Updated 5 months ago
- A Framework of Small-scale Large Multimodal Models ☆796 · Updated 3 weeks ago
- Grounded Segment Anything: From Objects to Parts ☆407 · Updated last year
- ☆324 · Updated last year
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆358 · Updated 4 months ago
- Official PyTorch implementation of ODISE: Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models [CVPR 2023 Highlight] ☆892 · Updated 9 months ago
- [NeurIPS 2023] Official implementations of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" ☆520 · Updated last year
- PyTorch Implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆585 · Updated last year
- Experiment on combining CLIP with SAM to do open-vocabulary image segmentation (a minimal sketch of the recipe follows below). ☆366 · Updated 2 years ago
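The last entry names a recipe simple enough to sketch: SAM generates class-agnostic mask proposals, and CLIP scores each proposal's crop against a set of text labels. The code below is a minimal sketch of that general recipe, not the linked repo's actual pipeline; the checkpoint path, image file, and label list are assumptions.

```python
import numpy as np
import torch
from PIL import Image

import open_clip
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Assumed inputs: an image on disk, a SAM checkpoint, and a label set.
labels = ["a dog", "a cat", "a tree"]
image = Image.open("img.jpg").convert("RGB")

# 1. SAM proposes class-agnostic masks for the whole image.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
masks = SamAutomaticMaskGenerator(sam).generate(np.array(image))

# 2. Embed the label prompts with CLIP's text encoder.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")
with torch.no_grad():
    text_feat = model.encode_text(tokenizer(labels))
    text_feat /= text_feat.norm(dim=-1, keepdim=True)

# 3. Classify each proposed region by CLIP similarity of its crop.
for m in masks:
    x, y, w, h = map(int, m["bbox"])          # SAM bboxes are XYWH
    crop = preprocess(image.crop((x, y, x + w, y + h))).unsqueeze(0)
    with torch.no_grad():
        img_feat = model.encode_image(crop)
        img_feat /= img_feat.norm(dim=-1, keepdim=True)
    best = (img_feat @ text_feat.T).argmax().item()
    print(f"region {(x, y, w, h)} -> {labels[best]}")
```

Cropping to the bounding box is the crudest variant; zeroing out background pixels with the mask before encoding usually gives cleaner CLIP scores.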