rui-qian / READ
Rui Qian, Xin Yin, Dejing Dou: Reasoning to Attend: Try to Understand How `<SEG>` Token Works
☆17 · Updated last month
Alternatives and similar repositories for READ:
Users interested in READ are comparing it to the repositories listed below:
- 🔥 [CVPR 2024] Official implementation of "See, Say, and Segment: Teaching LMMs to Overcome False Premises (SESAME)" ☆32 · Updated 8 months ago
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆34 · Updated 3 months ago
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences ☆35 · Updated last month
- Official repository of Personalized Visual Instruct Tuning ☆26 · Updated 3 months ago
- Repository for the paper "Teaching VLMs to Localize Specific Objects from In-context Examples" ☆22 · Updated 3 months ago
- ☆16 · Updated last year
- ☆27 · Updated 7 months ago
- [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models" ☆15 · Updated 4 months ago
- ☆14 · Updated 4 months ago
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆30 · Updated this week
- [CVPR 2025] Code release of F-LMM: Grounding Frozen Large Multimodal Models ☆65 · Updated 6 months ago
- [CVPR 2024 Highlight] Official implementation of Transferable Visual Prompting, from the paper "Exploring the Transferability of Visual Prompt…" ☆38 · Updated 2 months ago
- ☆28 · Updated 5 months ago
- [ICLR 2025] SAFREE: Training-Free and Adaptive Guard for Safe Text-to-Image and Video Generation ☆25 · Updated last month
- Code for the paper "Towards Semantic Equivalence of Tokenization in Multimodal LLM" ☆49 · Updated 4 months ago
- [NeurIPS 2024] What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights ☆24 · Updated 4 months ago
- Official code for the paper "TaCA: Upgrading Your Visual Foundation Model with Task-agnostic Compatible Adapter" ☆16 · Updated last year
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision ☆31 · Updated 4 months ago
- ☆22 · Updated last year
- [CVPR 2023] Accelerating Vision-Language Pretraining with Free Language Modeling ☆31 · Updated last year
- [CVPR 2024] OVMR: Open-Vocabulary Recognition with Multi-Modal References ☆25 · Updated 3 months ago
- ☆11 · Updated 7 months ago
- [CVPR 2025] RAP: Retrieval-Augmented Personalization ☆14 · Updated this week
- Official repo for ByteVideoLLM/Dynamic-VLM ☆19 · Updated 2 months ago
- ☆38 · Updated 3 months ago
- VisualGPTScore for visio-linguistic reasoning ☆27 · Updated last year
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆56 · Updated 8 months ago
- [NeurIPS 2024] I2EBench: A Comprehensive Benchmark for Instruction-based Image Editing ☆16 · Updated 2 months ago
- [ICCV 2023] Distribution-Aware Prompt Tuning for Vision-Language Models ☆38 · Updated last year