FoundationVision / Groma
[ECCV2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization
☆577 · Updated last year
Alternatives and similar repositories for Groma
Users interested in Groma are comparing it to the repositories listed below.
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM", IJCV 2025 ☆263 · Updated 2 months ago
- [CVPR 2024] Aligning and Prompting Everything All at Once for Universal Visual Perception ☆578 · Updated last year
- An open-source implementation for training LLaVA-NeXT ☆413 · Updated 9 months ago
- [ICLR 2025] MLLM for On-Demand Spatial-Temporal Understanding at Arbitrary Resolution ☆318 · Updated last month
- [CVPR 2025] The code for "VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM" ☆250 · Updated last month
- (AAAI 2024) BLIVA: A Simple Multimodal LLM for Better Handling of Text-rich Visual Questions ☆261 · Updated last year
- [ICCV 2025] SAM2Long: Enhancing SAM 2 for Long Video Segmentation with a Training-Free Memory Tree ☆500 · Updated last week
- [ECCV 2024] The official code of the paper "Open-Vocabulary SAM"