Shengcao-Cao / groundLMM
Emergent Visual Grounding in Large Multimodal Models Without Grounding Supervision
☆40 · Updated this week
Alternatives and similar repositories for groundLMM
Users interested in groundLMM are comparing it to the libraries listed below.
- [CVPR 2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆103 · Updated 4 months ago
- Repository for the paper: Teaching VLMs to Localize Specific Objects from In-context Examples ☆31 · Updated 10 months ago
- [ICML 2024] Repository for the paper "Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models" ☆21 · Updated 9 months ago
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆79 · Updated last year
- [CVPR 2024] Improving language-visual pretraining efficiency by performing cluster-based masking on images ☆29 · Updated last year
- 🔥 [CVPR 2024] Official implementation of "See, Say, and Segment: Teaching LMMs to Overcome False Premises (SESAME)" ☆44 · Updated last year
- Visual self-questioning for large vision-language assistants ☆45 · Updated 3 months ago
- [NeurIPS 2024] Official PyTorch implementation of LoTLIP: Improving Language-Image Pre-training for Long Text Understanding ☆45 · Updated 9 months ago
- [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models" ☆20 · Updated last year
- Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning" ☆84 · Updated last year
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding