IDEA-Research / ChatRex
Code for ChatRex: Taming Multimodal LLM for Joint Perception and Understanding
☆199 Updated 7 months ago
Alternatives and similar repositories for ChatRex
Users interested in ChatRex are comparing it to the libraries listed below.
- Official repo of the Griffon series, including v1 (ECCV 2024), v2 (ICCV 2025), G, and R, as well as the RL tool Vision-R1. ☆231 Updated 2 weeks ago
- [CVPR 2024] PixelLM is an effective and efficient LMM for pixel-level reasoning and understanding. ☆234 Updated 6 months ago
- Vision Manus: Your versatile Visual AI assistant ☆253 Updated 3 weeks ago
- [CVPR 2024] Generative Region-Language Pretraining for Open-Ended Object Detection ☆177 Updated 5 months ago
- Rex-Thinker: Grounded Object Referring via Chain-of-Thought Reasoning ☆112 Updated last month
- [ECCV 2024] This is an official implementation for "PSALM: Pixelwise SegmentAtion with Large Multi-Modal Model" ☆249 Updated 7 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆165 Updated 11 months ago
- Official repository for the paper MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning (https://arxiv.org/abs/2406.17770) ☆156 Updated 11 months ago
- [CVPR 2025] Project for "HyperSeg: Towards Universal Visual Segmentation with Large Language Model" ☆164 Updated 8 months ago
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-language Era" ☆208 Updated last year
- Official implementation of 🛸 "UFO: A Unified Approach to Fine-grained Visual Perception via Open-ended Language Interface" ☆215 Updated 2 months ago
- [CVPR 2025] DynRefer: Delving into Region-level Multimodal Tasks via Dynamic Resolution ☆51 Updated 5 months ago
- [NeurIPS 2023] CoDet: Co-Occurrence Guided Region-Word Alignment for Open-Vocabulary Object Detection ☆117 Updated last year
- [NeurIPS 2024] Classification Done Right for Vision-Language Pre-Training ☆213 Updated 5 months ago
- [ECCV 2024] Elysium: Exploring Object-level Perception in Videos via MLLM ☆81 Updated 10 months ago
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆85 Updated 2 months ago
- ✨✨ Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆162 Updated 8 months ago
- Official code implementation of Perception R1: Pioneering Perception Policy with Reinforcement Learning ☆248 Updated last month
- ☆86 Updated last year
- New generation of CLIP with fine-grained discrimination capability, ICML 2025 ☆277 Updated last month
- PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models" ☆203 Updated 7 months ago
- [ICLR 2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆239 Updated last year
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆151 Updated 8 months ago
- LinVT: Empower Your Image-level Large Language Model to Understand Videos ☆82 Updated 7 months ago
- Official code for the paper "GRIT: Teaching MLLMs to Think with Images" ☆121 Updated 3 weeks ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆330 Updated last year
- Recognize Any Regions ☆122 Updated 8 months ago
- ☆189 Updated 3 months ago
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆91 Updated 7 months ago
- The code of the paper "NExT-Chat: An LMM for Chat, Detection and Segmentation". ☆249 Updated last year