apple / ml-tic-clip
Repository for the paper "TiC-CLIP: Continual Training of CLIP Models" (ICLR 2024)
☆109 · Updated last year
Alternatives and similar repositories for ml-tic-clip
Users interested in ml-tic-clip are comparing it to the libraries listed below.
- Code for experiments in "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated last year
- PyTorch implementation of "Object Recognition as Next Token Prediction" [CVPR'24 Highlight] ☆181 · Updated 7 months ago
- Code for the ICLR 2024 paper "PerceptionCLIP: Visual Classification by Inferring and Conditioning on Contexts" ☆79 · Updated last year
- Official implementation of "Describing Differences in Image Sets with Natural Language" (CVPR 2024 Oral) ☆129 · Updated last month
- Public repository for "Image Clustering Conditioned on Text Criteria" (IC|TC) ☆92 · Updated last year
- Code base of SynthCLIP: CLIP training with purely synthetic text-image pairs from LLMs and TTIs ☆101 · Updated 9 months ago
- [ICML 2024] Official implementation of "Rejuvenating image-GPT as Strong Visual Representation Lea…" ☆98 · Updated last year
- ☆53 · Updated 11 months ago
- Official code for "TOAST: Transfer Learning via Attention Steering" ☆188 · Updated 2 years ago
- Official repo for "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ☆248 · Updated 11 months ago
- Official PyTorch implementation of CLIPPR ☆29 · Updated 2 years ago
- PyTorch implementation of "ViP: A Differentially Private Foundation Model for Computer Vision" ☆36 · Updated 2 years ago
- ☆35 · Updated last year
- Matryoshka Multimodal Models ☆121 · Updated 11 months ago
- RO-ViT (CVPR 2023): "Region-Aware Pretraining for Open-Vocabulary Object Detection with Vision Transformers" ☆18 · Updated 2 years ago
- Code for "NOLA: Compressing LoRA using Linear Combination of Random Basis" ☆57 · Updated last year
- Official code repo of "PIN: Positional Insert Unlocks Object Localisation Abilities in VLMs" ☆26 · Updated 11 months ago
- ☆56 · Updated 2 years ago
- "How Good is Google Bard's Visual Understanding? An Empirical Study on Open Challenges" ☆30 · Updated 2 years ago
- ☆40 · Updated last year
- Code release for "Improved Baselines for Vision-Language Pre-training" ☆61 · Updated last year
- Original code base for "On Pretraining Data Diversity for Self-Supervised Learning" ☆14 · Updated last year
- [ECCV 2024] Official release of "SILC: Improving Vision Language Pretraining with Self-Distillation" ☆47 · Updated last year
- [NeurIPS 2024] Official implementation of "Interfacing Foundation Models' Embeddings" ☆129 · Updated last year
- Code and benchmark for "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆61 · Updated last year
- Patching open-vocabulary models by interpolating weights ☆91 · Updated 2 years ago
- ☆65 · Updated 2 years ago
- ☆23 · Updated 11 months ago
- [ACL 2025] "Unsolvable Problem Detection: Robust Understanding Evaluation for Large Multimodal Models" ☆79 · Updated 7 months ago
- ☆72 · Updated 5 months ago