FoundationVision / Groma
[ECCV 2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization
☆564 · Updated 5 months ago
Related projects
Alternatives and complementary repositories for Groma
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM". ☆214 · Updated 3 weeks ago
- An open-source implementation for training LLaVA-NeXT. ☆395 · Updated 3 weeks ago
- [AAAI 2024] BLIVA: A Simple Multimodal LLM for Better Handling of Text-rich Visual Questions. ☆269 · Updated 7 months ago
- MLLM for On-Demand Spatial-Temporal Understanding at Arbitrary Resolution. ☆289 · Updated this week
- SAM2Long: Enhancing SAM 2 for Long Video Segmentation with a Training-Free Memory Tree. ☆283 · Updated 2 weeks ago
- Mathematical Visual Instruction Tuning for Multi-modal Large Language Models. ☆109 · Updated 3 months ago
- [NeurIPS 2024] OmniTokenizer: one model and one weight for image-video joint tokenization. ☆261 · Updated 4 months ago
- [ECCV 2024] Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? ☆149 · Updated last month
- [NeurIPS 2024] Official implementation of ShareGPT4Video: Improving Video Understanding and Generation with Better Captions. ☆1,265 · Updated last month
- [ECCV 2024 Oral] Official implementation of "LLMGA: Multimodal Large Language Model based Generation Assistant". ☆462 · Updated 3 months ago
- [ECCV 2024] Official code of the paper "Open-Vocabulary SAM". ☆945 · Updated 3 months ago
- [CVPR 2023 Highlight & TPAMI] Cap4Video: What Can Auxiliary Captions Do for Text-Video Retrieval? ☆240 · Updated 2 months ago
- u-LLaVA: Unifying Multi-Modal Tasks via Large Language Model. ☆138 · Updated 4 months ago
- EAGLE: Exploring The Design Space for Multimodal LLMs with Mixture of Encoders. ☆539 · Updated 2 months ago
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-Language Models. ☆86 · Updated 7 months ago
- Diffusion Feedback Helps CLIP See Better. ☆215 · Updated 2 months ago
- MLCD & UNICOM: Large-Scale Visual Representation Model. ☆378 · Updated this week
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-language Era". ☆175 · Updated 5 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts. ☆297 · Updated 4 months ago
- [NeurIPS 2024] Dense Connector for MLLMs. ☆140 · Updated last month
- [AAAI 2023 & IJCV] Transferring Vision-Language Models for Visual Recognition: A Classifier Perspective. ☆205 · Updated 5 months ago
- LLaVA-UHD: an LMM Perceiving Any Aspect Ratio and High-Resolution Images. ☆318 · Updated last month
- [AAAI 2024] Code for "Relax Image-Specific Prompt Requirement in SAM: A Single Generic Prompt for Segmenting Camouflaged Objects". ☆139 · Updated last month
- [CVPR 2024] PixelLM: an effective and efficient LMM for pixel-level reasoning and understanding. ☆181 · Updated 5 months ago
- GPT4Vis: What Can GPT-4 Do for Zero-shot Visual Recognition? ☆207 · Updated 5 months ago
- LLaVA-HR: High-Resolution Large Language-Vision Assistant. ☆212 · Updated 3 months ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model. ☆246 · Updated 4 months ago
- [ECCV 2024] VISA: Reasoning Video Object Segmentation via Large Language Model. ☆132 · Updated 3 months ago
- [ICCV 2023] Spectrum-guided Multi-granularity Referring Video Object Segmentation. ☆81 · Updated last month