jefferyZhan / Griffon
Official repo of the Griffon series, including v1 (ECCV 2024), v2, and G
☆149 · Updated this week
Alternatives and similar repositories for Griffon:
Users interested in Griffon are comparing it to the repositories listed below.
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆90 · Updated 2 months ago
- ☆113 · Updated 8 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆149 · Updated 6 months ago
- [CVPR 2024] Generative Region-Language Pretraining for Open-Ended Object Detection ☆166 · Updated this week
- [NeurIPS 2024] Dense Connector for MLLMs ☆157 · Updated 5 months ago
- Code for ChatRex: Taming Multimodal LLM for Joint Perception and Understanding ☆173 · Updated 2 months ago
- [NeurIPS 2023] CoDet: Co-Occurrence Guided Region-Word Alignment for Open-Vocabulary Object Detection ☆116 · Updated 11 months ago
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-language Era" ☆202 · Updated 9 months ago
- ☆105 · Updated 9 months ago
- SVIT: Scaling up Visual Instruction Tuning ☆163 · Updated 9 months ago
- [CVPR 2025] DynRefer: Delving into Region-level Multimodal Tasks via Dynamic Resolution ☆45 · Updated 3 weeks ago
- ☆83 · Updated last year
- A detection/segmentation dataset with labels characterized by intricate and flexible expressions. "Described Object Detection: Liberating…" ☆115 · Updated last year
- ☆133 · Updated last year
- Official repository for the paper MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning (https://arxiv.org/abs/2406.17770) ☆154 · Updated 6 months ago
- Official repo for our ICML 23 paper: "Multi-Modal Classifiers for Open-Vocabulary Object Detection" ☆89 · Updated last year
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆73 · Updated 5 months ago
- The official implementation of RAR ☆84 · Updated last year
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆137 · Updated 3 months ago
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆68 · Updated 2 months ago
- [ICLR 2024 Spotlight] Code Release of CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction ☆183 · Updated last year
- [CVPR 2024] PixelLM is an effective and efficient LMM for pixel-level reasoning and understanding. ☆216 · Updated last month
- GroundVLP: Harnessing Zero-shot Visual Grounding from Vision-Language Pre-training and Open-Vocabulary Object Detection (AAAI 2024) ☆64 · Updated last year
- [NeurIPS 2024] Classification Done Right for Vision-Language Pre-Training ☆204 · Updated last week
- Official implementation of 🛸 "UFO: A Unified Approach to Fine-grained Visual Perception via Open-ended Language Interface" ☆115 · Updated last week
- Evaluation code for Ref-L4, a new REC benchmark in the LMM era ☆29 · Updated 3 months ago
- [ICLR 2025] Diffusion Feedback Helps CLIP See Better ☆270 · Updated 2 months ago
- [AAAI 2025] ChatterBox: Multi-round Multimodal Referring and Grounding ☆53 · Updated 3 months ago
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation ☆120 · Updated 2 months ago
- Official implementation of the Law of Vision Representation in MLLMs ☆151 · Updated 4 months ago