lxtGH / DenseWorld-1M
Code and dataset link for "DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World"
☆106 · Updated 2 months ago
Alternatives and similar repositories for DenseWorld-1M
Users interested in DenseWorld-1M are comparing it to the libraries listed below.
- [ICLR 2025] Reconstructive Visual Instruction Tuning ☆114 · Updated 5 months ago
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆79 · Updated 10 months ago
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆153 · Updated 9 months ago
- ☆38 · Updated 2 months ago
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆81 · Updated last month
- [ECCV 2024] PartGLEE: A Foundation Model for Recognizing and Parsing Any Objects ☆51 · Updated 11 months ago
- [NeurIPS 2024] One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos ☆134 · Updated 8 months ago
- Official code for the paper "GRIT: Teaching MLLMs to Think with Images" ☆126 · Updated last month
- [ICCV 2025] GroundingSuite: Measuring Complex Multi-Granular Pixel Grounding ☆66 · Updated 2 months ago
- [CVPR 2025] B^2-DiffuRL: a framework for RL-based diffusion model fine-tuning ☆37 · Updated 5 months ago
- Code for the paper "Towards Open-Ended Visual Recognition with Large Language Model" ☆97 · Updated last year
- ☆33 · Updated 11 months ago
- [ICLR 2025] IDA-VLM: Towards Movie Understanding via ID-Aware Large Vision-Language Model ☆36 · Updated 9 months ago
- ☆58 · Updated 2 years ago
- ☆114 · Updated 5 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆111 · Updated last month
- Official repository of the paper "Envisioning Beyond the Pixels: Benchmarking Reasoning-Informed Visual Editing" ☆89 · Updated this week
- [arXiv:2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆86 · Updated 6 months ago
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆87 · Updated 3 months ago
- ReNeg: Learning Negative Embedding with Reward Guidance ☆34 · Updated 8 months ago
- [ICML 2025] ☆57 · Updated 2 weeks ago
- Official repository for ReasonGen-R1 ☆68 · Updated 2 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆58 · Updated last month
- 🔥 [CVPR 2024] Official implementation of "See, Say, and Segment: Teaching LMMs to Overcome False Premises (SESAME)" ☆43 · Updated last year
- ☆88 · Updated 2 months ago
- ☆118 · Updated last year
- ☆21 · Updated 7 months ago
- Uni-CoT: Towards Unified Chain-of-Thought Reasoning Across Text and Vision ☆104 · Updated last month
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning ☆79 · Updated 2 months ago
- Empowering Unified MLLM with Multi-granular Visual Generation ☆130 · Updated 7 months ago