828Tina / textvqa_grounding_task_qwen2.5-vl-ft
☆65 · Updated 4 months ago
Alternatives and similar repositories for textvqa_grounding_task_qwen2.5-vl-ft
Users interested in textvqa_grounding_task_qwen2.5-vl-ft are comparing it to the repositories listed below.
- New generation of CLIP with fine-grained discrimination capability, ICML 2025 ☆308 · Updated last week
- ☆85 · Updated last month
- ☆72 · Updated 5 months ago
- The Codes and Data of A Comprehensive Benchmark for Multimodal Large Language Models in Industrial Anomaly Detection [ICLR'25] ☆173 · Updated 2 months ago
- Official Implementation of "Seg-R1: Segmentation Can Be Surprisingly Simple with Reinforcement Learning" ☆47 · Updated 3 months ago
- [ICCV 2025] Official implementation of LLaVA-KD: A Framework of Distilling Multimodal Large Language Models ☆99 · Updated 3 months ago
- Vision Manus: Your versatile Visual AI assistant ☆277 · Updated last week
- A collection of multimodal (MM) + Chat resources ☆276 · Updated last month
- [NeurIPS 2025 Spotlight 🔥] Official implementation of 🛸 "UFO: A Unified Approach to Fine-grained Visual Perception via Open-ended Langu… ☆220 · Updated last week
- Official code implementation of Perception R1: Pioneering Perception Policy with Reinforcement Learning ☆261 · Updated 2 months ago
- YOLO-UniOW: Efficient Universal Open-World Object Detection ☆155 · Updated 8 months ago
- [TPAMI 2025] Towards Visual Grounding: A Survey ☆237 · Updated last month
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation ☆200 · Updated 6 months ago
- Fine-tuning Qwen2.5-VL for vision-language tasks | Optimized for vision understanding | LoRA & PEFT support. ☆129 · Updated 8 months ago
- [ICLR'25] Official code for the paper 'MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs' ☆274 · Updated 5 months ago
- ☆70 · Updated last month
- [CVPR 2025] Project for "HyperSeg: Towards Universal Visual Segmentation with Large Language Model". ☆169 · Updated 10 months ago
- ☆47 · Updated 3 months ago
- Project Page for "Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement" ☆524 · Updated 2 months ago
- The official implementation of [CVPR 2025] "5%>100%: Breaking Performance Shackles of Full Fine-Tuning on Visual Recognition Tasks". ☆366 · Updated 3 months ago
- Building a VLM model starting from basic modules. ☆18 · Updated last year
- Open-sourced codes, IAD vision-language datasets and pre-trained checkpoints for Myriad. ☆91 · Updated 3 months ago
- Stitches the SmolVLM2 vision head onto the Qwen3-0.6B model and fine-tunes the combined model ☆380 · Updated last month
- [AAAI 2024] TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP Without Training ☆102 · Updated last year
- [ECCV 2024] SegVG: Transferring Object Bounding Box to Segmentation for Visual Grounding ☆62 · Updated 11 months ago
- GroundVLP: Harnessing Zero-shot Visual Grounding from Vision-Language Pre-training and Open-Vocabulary Object Detection (AAAI 2024) ☆72 · Updated last year
- A collection of awesome works centered on reasoning models like O1/R1 in the visual domain ☆41 · Updated 2 months ago
- ☆30 · Updated last year
- 🔮 UniPixel: Unified Object Referring and Segmentation for Pixel-Level Visual Reasoning (NeurIPS 2025) ☆74 · Updated this week
- Code for paper: Visual Signal Enhancement for Object Hallucination Mitigation in Multimodal Large Language Models ☆33 · Updated 9 months ago