apple / ml-tic-clip
Repository for the paper "TiC-CLIP: Continual Training of CLIP Models" (ICLR 2024)
☆110 · Updated last year
Alternatives and similar repositories for ml-tic-clip
Users interested in ml-tic-clip are comparing it to the repositories listed below.
- Code for experiments for "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆102 · Updated last year
- Official code for "TOAST: Transfer Learning via Attention Steering" ☆188 · Updated 2 years ago
- PyTorch implementation of Object Recognition as Next Token Prediction [CVPR'24 Highlight] ☆182 · Updated 8 months ago
- A PyTorch implementation of the paper "ViP: A Differentially Private Foundation Model for Computer Vision" ☆36 · Updated 2 years ago
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ☆248 · Updated 11 months ago
- A public repository for Image Clustering Conditioned on Text Criteria (IC|TC) ☆92 · Updated last year
- Official implementation of "Describing Differences in Image Sets with Natural Language" (CVPR 2024 Oral) ☆130 · Updated 2 months ago
- Code for our ICLR 2024 paper "PerceptionCLIP: Visual Classification by Inferring and Conditioning on Contexts" ☆79 · Updated last year
- Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis" ☆57 · Updated last year
- Matryoshka Multimodal Models ☆121 · Updated 11 months ago
- ☆40 · Updated last year
- [ICML 2024] Official implementation of our paper "Rejuvenating image-GPT as Strong Visual Representation Lea…" ☆98 · Updated last year
- Code release for "Improved baselines for vision-language pre-training" ☆62 · Updated last year
- [NeurIPS 2024] Official implementation of the paper "Interfacing Foundation Models' Embeddings" ☆129 · Updated last year
- [ECCV 2024] Official release of "SILC: Improving Vision Language Pretraining with Self-Distillation" ☆47 · Updated last year
- Official code repo of "PIN: Positional Insert Unlocks Object Localisation Abilities in VLMs" ☆26 · Updated last year
- ☆56 · Updated 2 years ago
- ☆35 · Updated last year
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in PyTorch ☆103 · Updated 2 years ago
- ☆59 · Updated last year
- ☆65 · Updated 2 years ago
- Code base of SynthCLIP: CLIP training with purely synthetic text-image pairs from LLMs and TTIs ☆102 · Updated 9 months ago
- ☆54 · Updated last year
- RO-ViT: "Region-Aware Pretraining for Open-Vocabulary Object Detection with Vision Transformers" (CVPR 2023) ☆18 · Updated 2 years ago
- Video descriptions of research papers relating to foundation models and scaling ☆30 · Updated 2 years ago
- ☆191 · Updated last year
- Evaluation and dataset construction code for the CVPR 2025 paper "Vision-Language Models Do Not Understand Negation" ☆44 · Updated 8 months ago
- [CVPR 2023] HierVL: Learning Hierarchical Video-Language Embeddings ☆46 · Updated 2 years ago
- Code for the paper "Hyperbolic Image-Text Representations", Desai et al., ICML 2023 ☆194 · Updated 2 years ago
- Official repository for the General Robust Image Task (GRIT) Benchmark ☆54 · Updated 2 years ago