alibaba-damo-academy / PixelRefer
The code for PixelRefer & VideoRefer
☆337 · Updated last month
Alternatives and similar repositories for PixelRefer
Users interested in PixelRefer are comparing it to the repositories listed below.
- [ICLR 2025] MLLM for On-Demand Spatial-Temporal Understanding at Arbitrary Resolution · ☆331 · Updated 6 months ago
- [ECCV 2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization · ☆582 · Updated last year
- [ICCV 2025] SAM2Long: Enhancing SAM 2 for Long Video Segmentation with a Training-Free Memory Tree · ☆543 · Updated 5 months ago
- [ICML 2025 Oral] An official implementation of VideoRoPE & VideoRoPE++ · ☆212 · Updated 5 months ago
- [NeurIPS 2025] Efficient Reasoning Vision Language Models · ☆444 · Updated 3 months ago
- [IJCV 2025] The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM" · ☆274 · Updated 7 months ago
- Official repository of OmniCaptioner · ☆168 · Updated 8 months ago
- [ICML 2025] Official repository for the paper "Scaling Video-Language Models to 10K Frames via Hierarchical Differential Distillation" · ☆187 · Updated 3 months ago
- GPT-ImgEval: Evaluating GPT-4o's state-of-the-art image generation capabilities · ☆305 · Updated 8 months ago
- [CVPR 2024] Code release for "UniVS: Unified and Universal Video Segmentation with Prompts as Queries" · ☆199 · Updated last year
- [NeurIPS 2025] T2I-R1: Reinforcing Image Generation with Collaborative Semantic-level and Token-level CoT · ☆424 · Updated 3 months ago
- [Accepted by IJCV] Liquid: Language Models are Scalable and Unified Multi-modal Generators · ☆636 · Updated last month
- [AAAI 2026] ✨ TSPO: Temporal Sampling Policy Optimization for Long-form Video Language Understanding · ☆109 · Updated last month
- (no description) · ☆245 · Updated last year
- [ICCV 2025] Enhance CLIP and MLLM's fine-grained visual representations with generative models · ☆76 · Updated 6 months ago
- Are Video Models Ready as Zero-shot Reasoners? · ☆84 · Updated last month
- An open-source implementation for training LLaVA-NeXT · ☆430 · Updated last year
- Official repo for "Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos" · ☆1,483 · Updated 3 weeks ago
- A family of versatile and state-of-the-art video tokenizers · ☆429 · Updated 4 months ago
- Official implementation of X-Prompt: Towards Universal In-Context Image Generation in Auto-Regressive Vision Language Foundation Models · ☆158 · Updated last year
- ✨✨ Long-VITA: Scaling Large Multi-modal Models to 1 Million Tokens with Leading Short-Context Accuracy · ☆307 · Updated 7 months ago
- [NeurIPS 2024 D&B Spotlight 🔥] ChronoMagic-Bench: A Benchmark for Metamorphic Evaluation of Text-to-Time-lapse Video Generation · ☆211 · Updated 7 months ago
- ✨✨ R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning · ☆273 · Updated 7 months ago
- Ola: Pushing the Frontiers of Omni-Modal Language Model · ☆382 · Updated 6 months ago
- [ECCV 2024 Oral] Official implementation of "LLMGA: Multimodal Large Language Model based Generation Assistant" · ☆396 · Updated 7 months ago
- NEO Series: Native Vision-Language Models from First Principles · ☆600 · Updated 3 weeks ago
- [AAAI 2026] Next Patch Prediction · ☆132 · Updated last year
- (no description) · ☆279 · Updated 5 months ago
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-Language Models · ☆100 · Updated last year
- [CVPR 2025] The First Investigation of CoT Reasoning (RL, TTS, Reflection) in Image Generation · ☆845 · Updated 7 months ago