pals-ttic / adapting-CLIP
★ 64 · Updated last year
Alternatives and similar repositories for adapting-CLIP:
Users interested in adapting-CLIP are comparing it to the libraries listed below.
- ★ 59 · Updated 3 years ago
- ★ 56 · Updated this week
- Official PyTorch implementation of the paper "ProtoCLIP: Prototypical Contrastive Language Image Pretraining" (IEEE TNNLS) · ★ 52 · Updated last year
- This repository provides data for the VAW dataset as described in the CVPR 2021 paper titled "Learning to Predict Visual Attributes in th… · ★ 65 · Updated 2 years ago
- Official PyTorch codebase for Open-Vocabulary Instance Segmentation without Manual Mask Annotations [CVPR 2023] · ★ 50 · Updated 4 months ago
- [NeurIPS 2023] Official implementation and model release of the paper "What Makes Good Examples for Visual In-Context Learning?" · ★ 174 · Updated last year
- Compress conventional Vision-Language Pre-training data · ★ 49 · Updated last year
- Open-Vocabulary Instance Segmentation via Robust Cross-Modal Pseudo-Labeling [CVPR 2022] · ★ 41 · Updated 2 years ago
- [CVPR 2022] Visual Abductive Reasoning · ★ 122 · Updated 6 months ago
- PyTorch implementation of the paper "MILAN: Masked Image Pretraining on Language Assisted Representation" https://arxiv.org/pdf/2208.0604… · ★ 82 · Updated 2 years ago
- This repo is the official implementation of UPL (Unsupervised Prompt Learning for Vision-Language Models). · ★ 116 · Updated 3 years ago
- Official implementation for the paper "Referring Transformer: A One-step Approach to Multi-task Visual Grounding" (NeurIPS 2021) · ★ 66 · Updated 2 years ago
- [ICLR 2024] Exploring Target Representations for Masked Autoencoders · ★ 55 · Updated last year
- ★ 27 · Updated last year
- A Python toolkit for the OmniLabel benchmark providing code for evaluation and visualization · ★ 21 · Updated 3 months ago
- Obj2Seq: Formatting Objects as Sequences with Class Prompt for Visual Tasks (NeurIPS 2022) · ★ 84 · Updated 2 years ago
- [CVPR 2023] The code for "Position-guided Text Prompt for Vision-Language Pre-training" · ★ 152 · Updated last year
- ★ 83 · Updated 3 years ago
- ★ 62 · Updated 3 years ago
- ★ 30 · Updated last year
- Official code of the ECCV 2022 paper MS-CLIP · ★ 89 · Updated 2 years ago
- Toolkit for the Elevater Benchmark · ★ 70 · Updated last year
- Code for the paper "SuS-X: Training-Free Name-Only Transfer of Vision-Language Models" [ICCV'23] · ★ 99 · Updated last year
- Code for the paper titled "CiT Curation in Training for Effective Vision-Language Data" · ★ 78 · Updated 2 years ago
- ★ 50 · Updated 2 years ago
- ★ 59 · Updated last year
- [NeurIPS 2021] ORL: Unsupervised Object-Level Representation Learning from Scene Images · ★ 58 · Updated 3 years ago
- [CVPR 2023] The official dataset of Advancing Visual Grounding with Scene Knowledge: Benchmark and Method · ★ 30 · Updated last year
- UniTAB: Unifying Text and Box Outputs for Grounded VL Modeling, ECCV 2022 (Oral Presentation) · ★ 86 · Updated last year
- Official code for ConMIM (ICLR 2023) · ★ 58 · Updated 2 years ago