jonkahana / CLIPPR
An official PyTorch implementation for CLIPPR
☆29 · Updated 2 years ago
Alternatives and similar repositories for CLIPPR
Users interested in CLIPPR are comparing it to the repositories listed below.
- ☆35 · Updated last year
- Official PyTorch Implementation for the "Distilling Datasets Into Less Than One Image" paper. ☆38 · Updated last year
- Original code base for On Pretraining Data Diversity for Self-Supervised Learning ☆14 · Updated 9 months ago
- Test-Time Distribution Normalization For Contrastively Learned Vision-language Models ☆27 · Updated last year
- Official implementation and dataset for the NAACL 2024 paper "ComCLIP: Training-Free Compositional Image and Text Matching" ☆36 · Updated last year
- [NeurIPS'24] Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization ☆35 · Updated last year
- This is a PyTorch implementation of the paper "ViP: A Differentially Private Foundation Model for Computer Vision" ☆36 · Updated 2 years ago
- ☆53 · Updated 3 years ago
- ☆30 · Updated 2 years ago
- Code and benchmark for the paper: "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆58 · Updated 9 months ago
- Code for T-MARS data filtering ☆35 · Updated 2 years ago
- This is an implementation of the paper "Are We Done with Object-Centric Learning?" ☆11 · Updated 3 weeks ago
- ☆34 · Updated 2 years ago
- Official code for the paper "Does CLIP's Generalization Performance Mainly Stem from High Train-Test Similarity?" (ICLR 2024) ☆10 · Updated last year
- [WACV2023] This is the official PyTorch implementation of our paper "Rethinking Rotation in Self-Supervised Contrastive Learning: Adapt…" ☆12 · Updated 2 years ago
- [TIP] Exploring Effective Factors for Improving Visual In-Context Learning ☆19 · Updated 3 months ago
- [ICLR 23] Contrastive Alignment of Vision to Language Through Parameter-Efficient Transfer Learning ☆40 · Updated 2 years ago
- [ICCV23] Official implementation of eP-ALM: Efficient Perceptual Augmentation of Language Models. ☆27 · Updated last year
- Official repo for the TMLR paper "Discffusion: Discriminative Diffusion Models as Few-shot Vision and Language Learners" ☆30 · Updated last year
- Code base of SynthCLIP: CLIP training with purely synthetic text-image pairs from LLMs and TTIs. ☆100 · Updated 6 months ago
- ☆53 · Updated last month
- Code for "Are "Hierarchical" Visual Representations Hierarchical?" in NeurIPS Workshop for Symmetry and Geometry in Neural Representation… ☆21 · Updated last year
- This is an official PyTorch/GPU implementation of SupMAE. ☆78 · Updated 3 years ago
- ☆29 · Updated 2 years ago
- ☆24 · Updated 2 years ago
- [ECCV'24] Official repository for "BEAF: Observing Before-AFter Changes to Evaluate Hallucination in Vision-language Models" ☆20 · Updated 6 months ago
- ☆25 · Updated 2 years ago
- Code for the paper Self-Supervised Learning of Split Invariant Equivariant Representations ☆29 · Updated 2 years ago
- Code and datasets for "Text encoders are performance bottlenecks in contrastive vision-language models". Coming soon! ☆11 · Updated 2 years ago
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions ☆17 · Updated last year