DAMO-NLP-SG / PixelRefer
[CVPR 2025] The code for "VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM"
☆270 · Updated last month
Alternatives and similar repositories for PixelRefer
Users interested in PixelRefer are comparing it to the repositories listed below.
- [ICLR 2025] MLLM for On-Demand Spatial-Temporal Understanding at Arbitrary Resolution ☆327 · Updated 3 months ago
- [ICCV 2025] SAM2Long: Enhancing SAM 2 for Long Video Segmentation with a Training-Free Memory Tree ☆522 · Updated 2 months ago
- [ECCV 2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization ☆577 · Updated last year
- [NeurIPS 2025] Efficient Reasoning Vision Language Models ☆405 · Updated last month
- [NeurIPS 2025] T2I-R1: Reinforcing Image Generation with Collaborative Semantic-level and Token-level CoT ☆404 · Updated last month
- [ICML 2025 Oral] An official implementation of VideoRoPE & VideoRoPE++ ☆199 · Updated 2 months ago
- GPT-ImgEval: Evaluating GPT-4o’s state-of-the-art image generation capabilities ☆301 · Updated 5 months ago
- ☆241 · Updated 10 months ago
- Code release for "UniVS: Unified and Universal Video Segmentation with Prompts as Queries" (CVPR 2024) ☆192 · Updated 10 months ago
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM", IJCV 2025 ☆269 · Updated 4 months ago
- [ICML 2025] Official repository for the paper "Scaling Video-Language Models to 10K Frames via Hierarchical Differential Distillation" ☆177 · Updated 3 weeks ago
- Official implementation of X-Prompt: Towards Universal In-Context Image Generation in Auto-Regressive Vision Language Foundation Models ☆158 · Updated 10 months ago
- ☆266 · Updated 2 months ago
- UniVG-R1: Reasoning Guided Universal Visual Grounding with Reinforcement Learning ☆149 · Updated 4 months ago
- Liquid: Language Models are Scalable and Unified Multi-modal Generators ☆620 · Updated 6 months ago
- [NeurIPS 2025 D&B🔥] OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation ☆163 · Updated 2 weeks ago
- ✨✨Long-VITA: Scaling Large Multi-modal Models to 1 Million Tokens with Leading Short-Context Accuracy ☆300 · Updated 5 months ago
- 🔥 Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos ☆1,292 · Updated last month
- ☆131 · Updated 9 months ago
- A family of versatile and state-of-the-art video tokenizers. ☆414 · Updated last month
- (ICCV 2025) Enhance CLIP and MLLM’s fine-grained visual representations with generative models. ☆73 · Updated 3 months ago
- Official Repository of OmniCaptioner ☆162 · Updated 5 months ago
- An open-source implementation for training LLaVA-NeXT. ☆422 · Updated 11 months ago
- SaRA: High-Efficient Diffusion Model Fine-tuning with Progressive Sparse Low-Rank Adaptation ☆116 · Updated last year
- [NeurIPS 2024 D&B Spotlight🔥] ChronoMagic-Bench: A Benchmark for Metamorphic Evaluation of Text-to-Time-lapse Video Generation ☆208 · Updated 4 months ago
- Ola: Pushing the Frontiers of Omni-Modal Language Model ☆370 · Updated 4 months ago
- Less is Enough: Training-Free Video Diffusion Acceleration via Runtime-Adaptive Caching ☆254 · Updated last month
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-language Models ☆96 · Updated last year
- LDGen: Enhancing Text-to-Image Synthesis via Large Language Model-Driven Language Representation ☆37 · Updated 7 months ago
- Code for the AAAI 2024 paper: Relax Image-Specific Prompt Requirement in SAM: A Single Generic Prompt for Segmenting Camouflaged Objects ☆155 · Updated 7 months ago