☆24 · Jul 8, 2023 · Updated 2 years ago
Alternatives and similar repositories for Patch-Aligned-Contrastive-Learning
Users interested in Patch-Aligned-Contrastive-Learning are comparing it to the libraries listed below.
- [CVPR'24] Code for Emergent Open-Vocabulary Semantic Segmentation from Off-the-shelf Vision-Language Models (☆18 · Jul 22, 2024 · Updated last year)
- Reviews of papers on ML, DL, Statistics, Optimization, etc. (☆12 · Aug 2, 2021 · Updated 4 years ago)
- Progressive Language-guided Visual Learning for Multi-Task Visual Grounding (☆13 · May 9, 2025 · Updated 10 months ago)
- [ECCV 2024] ControlCap: Controllable Region-level Captioning (☆80 · Oct 25, 2024 · Updated last year)
- MLLMSeg: Unlocking the Potential of MLLMs in Referring Expression Segmentation via a Light-weight Mask Decoder (☆51 · Aug 16, 2025 · Updated 7 months ago)
- This repo is the official PyTorch implementation of the paper: CLIPer: Hierarchically Improving Spatial Representation of CLIP for Open-V… (☆40 · Sep 10, 2025 · Updated 6 months ago)
- CVPR25 (☆27 · Jul 2, 2025 · Updated 8 months ago)
- [EMNLP 2024] Preserving Multi-Modal Capabilities of Pre-trained VLMs for Improving Vision-Linguistic Compositionality (☆21 · Oct 8, 2024 · Updated last year)
- ☆17 · Oct 11, 2022 · Updated 3 years ago
- The official source code of our AAAI25 paper "D&M: Enriching E-commerce Videos with Sound Effects by Key Moment Detection and SFX Matchin… (☆10 · Feb 9, 2025 · Updated last year)
- [ICCV 2023] ViLLA: Fine-grained vision-language representation learning from real-world data (☆45 · Oct 15, 2023 · Updated 2 years ago)
- PyTorch reproduction of paper "Hierarchical Object Detection with Deep Reinforcement Learning" (☆10 · Oct 3, 2023 · Updated 2 years ago)
- cliptrase (☆47 · Sep 1, 2024 · Updated last year)
- [ICCV 2023] Going Beyond Nouns With Vision & Language Models Using Synthetic Data (☆13 · Sep 30, 2023 · Updated 2 years ago)
- [CVPR2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models (☆109 · May 29, 2025 · Updated 9 months ago)
- [ECCV 2024] Prompting Language-Informed Distribution for Compositional Zero-Shot Learning (☆15 · Jan 4, 2025 · Updated last year)
- Official PyTorch implementation of paper "A Semantic Space is Worth 256 Language Descriptions: Make Stronger Segmentation Models with Des… (☆55 · Aug 27, 2025 · Updated 6 months ago)
- Official implementation of the WACV 2024 paper CLIP-DIY (☆35 · Dec 20, 2023 · Updated 2 years ago)
- ☆11 · Oct 2, 2024 · Updated last year
- FreeDA: Training-Free Open-Vocabulary Segmentation with Offline Diffusion-Augmented Prototype Generation (CVPR 2024) (☆49 · Aug 28, 2024 · Updated last year)
- RefVOS (☆28 · Feb 3, 2021 · Updated 5 years ago)
- Code for "CARIS: Context-Aware Referring Image Segmentation" [ACM MM 2023] (☆28 · Nov 28, 2024 · Updated last year)
- [ECCV 2024] ProxyCLIP: Proxy Attention Improves CLIP for Open-Vocabulary Segmentation (☆112 · Mar 26, 2025 · Updated 11 months ago)
- ☆161 · Jul 19, 2023 · Updated 2 years ago
- [NeurIPS 2024] Official PyTorch implementation of LoTLIP: Improving Language-Image Pre-training for Long Text Understanding (☆50 · Jan 14, 2025 · Updated last year)
- [ACM MM 2024] Hierarchical Multimodal Fine-grained Modulation for Visual Grounding (☆60 · Nov 10, 2025 · Updated 4 months ago)
- ☆10 · Jan 9, 2025 · Updated last year
- [ECCV 2024] FlexAttention for Efficient High-Resolution Vision-Language Models (☆46 · Jan 8, 2025 · Updated last year)
- [ECCV 2024] ClearCLIP: Decomposing CLIP Representations for Dense Vision-Language Inference (☆98 · Mar 26, 2025 · Updated 11 months ago)
- Official codebase for the paper "Reasoning Within the Mind: Dynamic Multimodal Interleaving in Latent Space" (☆69 · Dec 17, 2025 · Updated 3 months ago)
- [CVPR 2025] PyTorch implementation of paper "FLAME: Frozen Large Language Models Enable Data-Efficient Language-Image Pre-training" (☆33 · Jul 8, 2025 · Updated 8 months ago)
- ☆20 · Mar 6, 2023 · Updated 3 years ago
- Find groundbreaking 3D point cloud analysis papers (☆13 · Jul 28, 2020 · Updated 5 years ago)
- ☆10 · Nov 29, 2022 · Updated 3 years ago
- [NeurIPS 2024] OneRef: Unified One-tower Expression Grounding and Segmentation with Mask Referring Modeling (☆31 · Nov 13, 2025 · Updated 4 months ago)
- [ICCV 2025] Dynamic-VLM (☆28 · Dec 16, 2024 · Updated last year)
- Code for the paper "No Zero-Shot Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance" [NeurI… (☆94 · Apr 29, 2024 · Updated last year)
- [CVPR23 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? (☆35 · Apr 27, 2023 · Updated 2 years ago)
- Official repository of the paper "MIRAGE: A multimodal foundation model and benchmark for comprehensive retinal OCT image analysis", publ… (☆37 · Oct 17, 2025 · Updated 5 months ago)