lxtGH / DenseWorld-1M
Code and dataset link for "DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World"
☆58 · Updated last week
Alternatives and similar repositories for DenseWorld-1M
Users interested in DenseWorld-1M are comparing it to the libraries listed below.
- ☆58 · Updated last year
- [ICCV 2025] GroundingSuite: Measuring Complex Multi-Granular Pixel Grounding ☆65 · Updated 2 weeks ago
- [TCSVT] State-of-the-art open-vocabulary detector on COCO/LVIS/V3Det ☆30 · Updated last month
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆29 · Updated 3 months ago
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆77 · Updated 8 months ago
- ☆31 · Updated 9 months ago
- [ECCV 2024] ClearCLIP: Decomposing CLIP Representations for Dense Vision-Language Inference ☆84 · Updated 3 months ago
- ☆33 · Updated this week
- [ICLR 2025] Reconstructive Visual Instruction Tuning ☆97 · Updated 3 months ago
- 🔥 [CVPR 2024] Official implementation of "See, Say, and Segment: Teaching LMMs to Overcome False Premises (SESAME)" ☆41 · Updated last year
- [ICCV 2023] Betrayed by Captions: Joint Caption Grounding and Generation for Open Vocabulary Instance Segmentation ☆47 · Updated 11 months ago
- Code for the paper "Towards Open-Ended Visual Recognition with Large Language Model" ☆98 · Updated 11 months ago
- [ICLR 2025] IDA-VLM: Towards Movie Understanding via ID-Aware Large Vision-Language Model ☆31 · Updated 7 months ago
- WeThink: Toward General-purpose Vision-Language Reasoning via Reinforcement Learning ☆28 · Updated last month
- Learning 1D Causal Visual Representation with De-focus Attention Networks ☆35 · Updated last year
- ☆111 · Updated last year
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆48 · Updated 3 months ago
- ☆12 · Updated 7 months ago
- [CVPR 2025] DynRefer: Delving into Region-level Multimodal Tasks via Dynamic Resolution ☆51 · Updated 4 months ago
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆82 · Updated last month
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆93 · Updated last month
- 👾 [NeurIPS 2024] E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding ☆59 · Updated 5 months ago
- [CVPR 2025] Code release of F-LMM: Grounding Frozen Large Multimodal Models ☆96 · Updated last month
- ☆32 · Updated last year
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆43 · Updated last month
- ☆85 · Updated last year
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆147 · Updated 7 months ago
- ☆21 · Updated 2 years ago
- [AAAI 2024] Referred by Multi-Modality: A Unified Temporal Transformer for Video Object Segmentation ☆81 · Updated last month
- Code for the ICLR 2025 paper "Towards Semantic Equivalence of Tokenization in Multimodal LLM" ☆67 · Updated 2 months ago