MacavityT / REF-VLMLinks
☆30 · Updated 7 months ago
Alternatives and similar repositories for REF-VLM
Users interested in REF-VLM are comparing it to the repositories listed below.
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆60 · Updated 2 months ago
- ☆90 · Updated 3 months ago
- Official Implementation of "Pix2Cap-COCO: Advancing Visual Comprehension via Pixel-Level Captioning" ☆19 · Updated 8 months ago
- [AAAI 2025] ChatterBox: Multi-round Multimodal Referring and Grounding ☆57 · Updated 5 months ago
- [ICML 2025] VistaDPO: Video Hierarchical Spatial-Temporal Direct Preference Optimization for Large Video Models ☆35 · Updated 3 months ago
- PyTorch code for "ADEM-VL: Adaptive and Embedded Fusion for Efficient Vision-Language Tuning" ☆20 · Updated 11 months ago
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning ☆48 · Updated 2 months ago
- [EMNLP 2025 Oral] ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆57 · Updated last month
- Official repo for CAT-V - Caption Anything in Video: Object-centric Dense Video Captioning with Spatiotemporal Multimodal Prompting ☆54 · Updated 3 months ago
- Official implementation of "TextRegion: Text-Aligned Region Tokens from Frozen Image-Text Models" ☆46 · Updated 2 months ago
- [NeurIPS 2025] The official repository of "Inst-IT: Boosting Multimodal Instance Understanding via Explicit Visual Prompt Instruction Tun… ☆37 · Updated 7 months ago
- [NeurIPS 2024] TransAgent: Transfer Vision-Language Foundation Models with Heterogeneous Agent Collaboration ☆24 · Updated 11 months ago
- ☆57 · Updated 3 months ago
- [EMNLP 2023] TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding ☆50 · Updated last year
- Project for "LaSagnA: Language-based Segmentation Assistant for Complex Queries" ☆60 · Updated last year
- [NeurIPS 2025] Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024 ☆62 · Updated 2 weeks ago
- [ECCV 2024] Elysium: Exploring Object-level Perception in Videos via MLLM ☆82 · Updated 11 months ago
- Official code of the paper "VideoMolmo: Spatio-Temporal Grounding meets Pointing" ☆50 · Updated 3 months ago
- Uni-CoT: Towards Unified Chain-of-Thought Reasoning Across Text and Vision ☆150 · Updated 2 weeks ago
- Benchmarking Video-LLMs on Video Spatio-Temporal Reasoning ☆27 · Updated 2 months ago
- ☆43 · Updated last year
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences ☆40 · Updated 6 months ago
- Repo for paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs" ☆49 · Updated last month
- ☆53 · Updated 8 months ago
- ☆39 · Updated 4 months ago
- Code for the paper "Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers" [ICCV 2025] ☆89 · Updated 2 months ago
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆87 · Updated 2 months ago
- Quick Long Video Understanding ☆64 · Updated 3 months ago
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆32 · Updated 6 months ago
- https://huggingface.co/datasets/multimodal-reasoning-lab/Zebra-CoT ☆79 · Updated 2 months ago