hsouri / Battle-of-the-Backbones
☆198 · Updated last year
Alternatives and similar repositories for Battle-of-the-Backbones:
Users interested in Battle-of-the-Backbones are comparing it to the libraries listed below.
- Code for experiments for "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆100 · Updated 5 months ago
- ☆182 · Updated last year
- [NeurIPS 2023] Official implementation of the paper "An Inverse Scaling Law for CLIP Training" ☆309 · Updated 8 months ago
- Learning from synthetic data - code and models ☆310 · Updated last year
- Code for the CVPR'23 tutorial "All Things ViTs: Understanding and Interpreting Attention in Vision" ☆181 · Updated last year
- Object Recognition as Next Token Prediction (CVPR 2024 Highlight) ☆172 · Updated last month
- A framework for merging models solving different tasks with different initializations into one multi-task model without any additional tr… ☆293 · Updated last year
- PyTorch code for hierarchical k-means, a data curation method for self-supervised learning ☆144 · Updated 8 months ago
- ☆497 · Updated 3 months ago
- When do we not need larger vision models? ☆368 · Updated last week
- Official implementation of the paper "CLIP-DINOiser: Teaching CLIP a few DINO tricks" ☆233 · Updated 3 months ago
- 1.5−3.0× lossless training or pre-training speedup. An off-the-shelf, easy-to-implement algorithm for the efficient training of foundatio… ☆217 · Updated 5 months ago
- Code release for "Dropout Reduces Underfitting" ☆312 · Updated last year
- ☆62 · Updated 4 months ago
- Simple implementation of the Pix2Seq model for object detection in PyTorch ☆122 · Updated last year
- [CVPR 2024] Official implementation of GEM (Grounding Everything Module) ☆109 · Updated 3 months ago
- Code release for "Improved baselines for vision-language pre-training" ☆60 · Updated 9 months ago
- [NeurIPS 2023] Code release for "Hierarchical Open-vocabulary Universal Image Segmentation" ☆281 · Updated 10 months ago
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆159 · Updated last year
- PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models" ☆192 · Updated last month
- [NeurIPS 2024] Official implementation of the paper "Interfacing Foundation Models' Embeddings" ☆120 · Updated 6 months ago
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ☆239 · Updated 3 weeks ago
- [ECCV 2024 Oral🔥] Official implementation of "GiT: Towards Generalist Vision Transformer through Universal Language Interface" ☆329 · Updated last month
- [CVPR 2022] Deep Spectral Methods: A Surprisingly Strong Baseline for Unsupervised Semantic Segmentation and Localization ☆233 · Updated 2 years ago
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time ☆446 · Updated 7 months ago
- PyTorch implementation of R-MAE (https://arxiv.org/abs/2306.05411) ☆110 · Updated last year
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆211 · Updated 2 years ago
- (ICLR 2023) Official PyTorch implementation of "What Do Self-Supervised Vision Transformers Learn?" ☆105 · Updated 11 months ago
- Open source implementation of "Vision Transformers Need Registers" ☆163 · Updated 3 weeks ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer ☆950 · Updated 11 months ago