New generation of CLIP with strong fine-grained discrimination capability (ICML 2025)
☆559 · Updated Oct 27, 2025
Alternatives and similar repositories for FG-CLIP
Users interested in FG-CLIP are comparing it to the libraries listed below.
- This repo contains the code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR 2025] ☆597 · Updated this week
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ☆892 · Updated Aug 13, 2024
- FineCLIP: Self-distilled Region-based CLIP for Better Fine-grained Understanding (NeurIPS 2024) ☆37 · Updated Nov 12, 2025
- ☆34 · Updated Oct 9, 2025
- An LMM addressing catastrophic forgetting (AAAI 2025) ☆46 · Updated Apr 15, 2025
- (CVPR 2025 Highlight ✨) Official repository of the paper "LLMDet: Learning Strong Open-Vocabulary Object Detectors under the Supervision of La…" ☆566 · Updated Feb 4, 2026
- AlignCLIP: Improving Cross-Modal Alignment in CLIP (ICLR 2025) ☆60 · Updated Mar 1, 2025
- Solve Visual Understanding with Reinforced VLMs ☆5,872 · Updated last week
- [ACL 2025 Oral] 🔥🔥 MegaPairs: Massive Data Synthesis for Universal Multimodal Retrieval ☆245 · Updated Nov 6, 2025
- [ICCV 2025] Object-centric Video Question Answering with Visual Grounding and Referring ☆25 · Updated Aug 8, 2025
- LLM2CLIP significantly improves already state-of-the-art CLIP models. ☆643 · Updated Feb 1, 2026
- Code repository for "Post-pre-training for Modality Alignment in Vision-Language Foundation Models" (CVPR 2025) ☆38 · Updated Jul 25, 2025
- Official code repository for the paper "Towards General Continuous Memory for Vision-Language Models" ☆23 · Updated Jul 3, 2025
- [AAAI 2026 Oral] Official code of "UniME-V2: MLLM-as-a-Judge for Universal Multimodal Embedding Learning" ☆68 · Updated Dec 8, 2025
- Vision-Language Dataset for Remote Sensing ☆41 · Updated May 27, 2025
- Code and updates for the ScoreRS project ☆42 · Updated Sep 19, 2025
- Collection of Composed Image Retrieval (CIR) papers ☆322 · Updated Dec 22, 2025
- Official repository of "Visual-RFT: Visual Reinforcement Fine-Tuning" & "Visual-ARFT: Visual Agentic Reinforcement Fine-Tuning" ☆2,317 · Updated Oct 29, 2025
- ☆18 · Updated Jun 10, 2025
- [CVPR 2025] DeCLIP: Decoupled Learning for Open-Vocabulary Dense Perception ☆153 · Updated Jan 10, 2026
- [ICLR'25] Official code for the paper "MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs" ☆353 · Updated Apr 20, 2025
- Reference PyTorch implementation and models for DINOv3 ☆9,878 · Updated last week
- [NeurIPS 2024] A new training and evaluation framework for learning interpretable deep vision models and benchmarking different interpretab… ☆30 · Updated Jun 5, 2025
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆869 · Updated Jul 20, 2025
- [CVPR 2026] SpatialScore: Towards Comprehensive Evaluation for Spatial Intelligence ☆66 · Updated Jul 9, 2025
- Official implementation of "VIRAL: Visual Representation Alignment for MLLMs" ☆152 · Updated Sep 21, 2025
- Enhancing Ultrahigh Resolution Remote Sensing Imagery Analysis with ImageRAG [GRSM] ☆29 · Updated Feb 4, 2026
- GeoGround: A Unified Large Vision-Language Model for Remote Sensing Visual Grounding ☆80 · Updated May 10, 2025
- Our second-generation LMM ☆34 · Updated May 22, 2024
- ☆24 · Updated Jul 8, 2023
- ☆126 · Updated Dec 26, 2025
- Official code of "Towards Unified Text-based Person Retrieval: A Large-scale Multi-Attribute and Language Search Benchmark" ☆170 · Updated Jul 23, 2025
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆1,706 · Updated Feb 11, 2026
- [NeurIPS 2024] Official PyTorch implementation of LoTLIP: Improving Language-Image Pre-training for Long Text Understanding ☆50 · Updated Jan 14, 2025
- YOLO-UniOW: Efficient Universal Open-World Object Detection ☆177 · Updated Jan 17, 2025
- Official implementation of the BLIP3o series ☆1,653 · Updated Nov 29, 2025
- [CBMI 2024 Best Paper] Official repository of the paper "Is CLIP the main roadblock for fine-grained open-world perception?" ☆32 · Updated May 12, 2025
- ☆39 · Updated Jan 12, 2026
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! ☆2,204 · Updated last week