828Tina / textvqa_grounding_task_qwen2.5-vl-ft
☆78 · Updated 6 months ago
Alternatives and similar repositories for textvqa_grounding_task_qwen2.5-vl-ft
Users interested in textvqa_grounding_task_qwen2.5-vl-ft are comparing it to the repositories listed below.
- New generation of CLIP with fine-grained discrimination capability, ICML 2025 ☆497 · Updated last month
- ☆78 · Updated 7 months ago
- [ICCV 2025] Official implementation of LLaVA-KD: A Framework of Distilling Multimodal Large Language Models ☆113 · Updated 2 months ago
- ☆100 · Updated 4 months ago
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation ☆213 · Updated 8 months ago
- The Codes and Data of A Comprehensive Benchmark for Multimodal Large Language Models in Industrial Anomaly Detection [ICLR'25] ☆205 · Updated 4 months ago
- [NeurIPS-W 2025] Official Implementation of "Seg-R1: Segmentation Can Be Surprisingly Simple with Reinforcement Learning" ☆54 · Updated 5 months ago
- [NeurIPS 2025 Spotlight 🔥] Official implementation of 🛸 "UFO: A Unified Approach to Fine-grained Visual Perception via Open-ended Langu…" ☆261 · Updated last month
- Vision Manus: Your versatile Visual AI assistant ☆302 · Updated 2 months ago
- Open-sourced codes, IAD vision-language datasets and pre-trained checkpoints for Myriad. ☆94 · Updated 5 months ago
- 🔮 UniPixel: Unified Object Referring and Segmentation for Pixel-Level Visual Reasoning (NeurIPS 2025) ☆201 · Updated last month
- Building a VLM model from the basic modules up. ☆18 · Updated last year
- Project Page For "Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement" ☆579 · Updated 4 months ago
- [ICLR'25] Official code for the paper "MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs" ☆308 · Updated 7 months ago
- ☆82 · Updated 4 months ago
- A collection of awesome works around reasoning models like O1/R1 in the visual domain ☆49 · Updated 4 months ago
- [CVPR 2025] Project for "HyperSeg: Towards Universal Visual Segmentation with Large Language Model". ☆177 · Updated last year
- A from-scratch (0-to-1) VLM finetune implemented without any framework, covering both pre-training and SFT ☆35 · Updated 3 months ago
- ☆73 · Updated 6 months ago
- ☆52 · Updated 5 months ago
- Fine-tuning Qwen2.5-VL for vision-language tasks | Optimized for vision understanding | LoRA & PEFT support. ☆145 · Updated 10 months ago
- The official implementation of AA-CLIP: Enhancing Zero-shot Anomaly Detection via Anomaly-Aware CLIP ☆204 · Updated 6 months ago
- [CVPR 2025] PACT: Pruning and Clustering-Based Token Reduction for Faster Visual Language Models ☆52 · Updated 2 months ago
- [NeurIPS 2025] Official code implementation of Perception R1: Pioneering Perception Policy with Reinforcement Learning ☆277 · Updated 5 months ago
- [AAAI 2024] TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP Without Training ☆105 · Updated last year
- ☆20 · Updated 6 months ago
- YOLO-UniOW: Efficient Universal Open-World Object Detection ☆170 · Updated 10 months ago
- [TPAMI 2025] Towards Visual Grounding: A Survey ☆267 · Updated 3 weeks ago
- A collection of multimodal (MM) + Chat resources ☆279 · Updated 3 months ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ☆109 · Updated 6 months ago