FoundationVision / Groma
[ECCV2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization
☆582 · Updated last year
Alternatives and similar repositories for Groma
Users interested in Groma are comparing it to the repositories listed below.
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM", IJCV 2025 ☆271 · Updated 6 months ago
- [CVPR 2024] Aligning and Prompting Everything All at Once for Universal Visual Perception ☆601 · Updated last year
- An open-source implementation for training LLaVA-NeXT. ☆428 · Updated last year
- The code for PixelRefer & VideoRefer ☆330 · Updated 3 weeks ago
- [ICLR 2025] MLLM for On-Demand Spatial-Temporal Understanding at Arbitrary Resolution ☆329 · Updated 5 months ago
- ☆401 · Updated 11 months ago
- [ECCV 2024] The official code of paper "Open-Vocabulary SAM". ☆1,021 · Updated 4 months ago
- [ICCV 2025] SAM2Long: Enhancing SAM 2 for Long Video Segmentation with a Training-Free Memory Tree ☆538 · Updated 4 months ago
- (AAAI 2024) BLIVA: A Simple Multimodal LLM for Better Handling of Text-rich Visual Questions ☆259 · Updated last year
- Eagle: Frontier Vision-Language Models with Data-Centric Strategies ☆905 · Updated last month
- Code release for "UniVS: Unified and Universal Video Segmentation with Prompts as Queries" (CVPR 2024) ☆198 · Updated last year
- Code for AAAI 2024 paper: Relax Image-Specific Prompt Requirement in SAM: A Single Generic Prompt for Segmenting Camouflaged Objects ☆159 · Updated 9 months ago
- Official Repo For "Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos" ☆1,450 · Updated this week
- [AAAI 2026] ✨ TSPO: Temporal Sampling Policy Optimization for Long-form Video Language Understanding ☆106 · Updated 3 weeks ago
- [ICML 2025] Official repository for paper "Scaling Video-Language Models to 10K Frames via Hierarchical Differential Distillation" ☆186 · Updated 2 months ago
- [NeurIPS 2025] Efficient Reasoning Vision Language Models ☆425 · Updated 2 months ago
- Official Repository of OmniCaptioner ☆167 · Updated 7 months ago
- ✨✨Long-VITA: Scaling Large Multi-modal Models to 1 Million Tokens with Leading Short-Context Accuracy ☆307 · Updated 6 months ago
- [NeurIPS 2024] An official implementation of "ShareGPT4Video: Improving Video Understanding and Generation with Better Captions" ☆1,079 · Updated last year
- This project is the official implementation of "LLMGA: Multimodal Large Language Model based Generation Assistant", ECCV 2024 Oral ☆396 · Updated 6 months ago
- Large-Scale Visual Representation Model ☆699 · Updated 2 months ago
- A collection of multimodal reasoning papers, codes, datasets, benchmarks and resources. ☆342 · Updated 2 weeks ago
- [CVPR 2023 Highlight & TPAMI] Cap4Video: What Can Auxiliary Captions Do for Text-Video Retrieval? ☆249 · Updated last year
- Official Repo For OMG-LLaVA and OMG-Seg codebase [CVPR 2024 and NeurIPS 2024] ☆1,334 · Updated last month
- GPT4Vis: What Can GPT-4 Do for Zero-shot Visual Recognition? ☆187 · Updated last year
- [ICLR 2025] Mathematical Visual Instruction Tuning for Multi-modal Large Language Models ☆152 · Updated last year
- Ola: Pushing the Frontiers of Omni-Modal Language Model ☆380 · Updated 5 months ago
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-language Models ☆99 · Updated last year
- [AAAI 2023 & IJCV] Transferring Vision-Language Models for Visual Recognition: A Classifier Perspective ☆197 · Updated last year
- [ICCV 2023] Spectrum-guided Multi-granularity Referring Video Object Segmentation. ☆110 · Updated 8 months ago