828Tina / textvqa_grounding_task_qwen2.5-vl-ft
☆83, updated 8 months ago
Alternatives and similar repositories for textvqa_grounding_task_qwen2.5-vl-ft
Users who are interested in textvqa_grounding_task_qwen2.5-vl-ft are comparing it to the libraries listed below.
- New generation of CLIP with fine-grained discrimination capability, ICML 2025 (☆541, updated 3 months ago)
- ☆83, updated 9 months ago
- ☆106, updated 5 months ago
- [NeurIPS 2025 Spotlight 🔥] Official implementation of 🛸 "UFO: A Unified Approach to Fine-grained Visual Perception via Open-ended Langu…" (☆264, updated 2 months ago)
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation (☆221, updated 10 months ago)
- [ICCV 2025] Official implementation of LLaVA-KD: A Framework of Distilling Multimodal Large Language Models (☆124, updated 3 months ago)
- Code and data for "A Comprehensive Benchmark for Multimodal Large Language Models in Industrial Anomaly Detection" [ICLR'25] (☆229, updated 2 weeks ago)
- Vision Manus: Your versatile visual AI assistant (☆315, updated 2 weeks ago)
- 🔮 UniPixel: Unified Object Referring and Segmentation for Pixel-Level Visual Reasoning (NeurIPS 2025) (☆224, updated 3 weeks ago)
- [TPAMI 2025] Towards Visual Grounding: A Survey (☆284, updated 2 months ago)
- ☆52, updated 6 months ago
- The official implementation of [CVPR 2025] "5%>100%: Breaking Performance Shackles of Full Fine-Tuning on Visual Recognition Tasks" (☆391, updated 7 months ago)
- Open-sourced code, IAD vision-language datasets, and pre-trained checkpoints for Myriad (☆95, updated 6 months ago)
- Project page for "Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement" (☆592, updated 2 weeks ago)
- [NeurIPS-W 2025] Official implementation of "Seg-R1: Segmentation Can Be Surprisingly Simple with Reinforcement Learning" (☆58, updated 7 months ago)
- [CVPR 2025] Project for "HyperSeg: Towards Universal Visual Segmentation with Large Language Model" (☆178, updated last year)
- [ICLR'25] Official code for the paper "MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs" (☆331, updated 9 months ago)
- Multimodal (MM) + Chat collection (☆281, updated 5 months ago)
- YOLO-UniOW: Efficient Universal Open-World Object Detection (☆175, updated last year)
- [AAAI 2024] TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP Without Training (☆106, updated 2 years ago)
- ☆114, updated last month
- Building a VLM model starting from the basic modules (☆18, updated last year)
- [NeurIPS 2025] Official code implementation of Perception R1: Pioneering Perception Policy with Reinforcement Learning (☆285, updated 6 months ago)
- A collection of awesome works around reasoning models like O1/R1 in the visual domain (☆52, updated 6 months ago)
- Fine-tuning Qwen2.5-VL for vision-language tasks | Optimized for vision understanding | LoRA & PEFT support (see the LoRA sketch after this list) (☆149, updated 11 months ago)
- HiMTok: Learning Hierarchical Mask Tokens for Image Segmentation with Large Multimodal Model (☆85, updated 6 months ago)
- The official implementation of AA-CLIP: Enhancing Zero-shot Anomaly Detection via Anomaly-Aware CLIP (☆223, updated 8 months ago)
- The code of the paper "NExT-Chat: An LMM for Chat, Detection and Segmentation" (☆253, updated last year)
- Mamba-YOLO-World: Marrying YOLO-World with Mamba for Open-Vocabulary Detection (☆95, updated 10 months ago)
- [ECCV 2024] SegVG: Transferring Object Bounding Box to Segmentation for Visual Grounding (☆63, updated last year)
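Several entries above, like the main repository itself, revolve around LoRA/PEFT fine-tuning of Qwen2.5-VL. The snippet below is a minimal sketch of attaching a LoRA adapter with Hugging Face Transformers and PEFT; it is not taken from any listed repository, and the checkpoint ID and target module names are illustrative assumptions.

```python
# Minimal sketch: wrap Qwen2.5-VL with a LoRA adapter via PEFT.
# Requires a recent transformers release; the checkpoint ID and
# target_modules below are assumptions for illustration only.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from peft import LoraConfig, get_peft_model

model_id = "Qwen/Qwen2.5-VL-7B-Instruct"  # assumed checkpoint name
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

lora_config = LoraConfig(
    r=16,                                 # low-rank dimension
    lora_alpha=32,                        # LoRA scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```

The wrapped model can then be trained with a standard supervised fine-tuning loop: the base weights stay frozen and only the low-rank adapter matrices are updated, which is what keeps this kind of fine-tuning feasible on a single GPU.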