all-things-vits / code-samples
Holds code for our CVPR'23 tutorial: All Things ViTs: Understanding and Interpreting Attention in Vision.
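The tutorial's theme (reading attention out of a trained ViT) is easy to try outside the notebooks. Below is a minimal sketch, not taken from this repo, that pulls the last layer's CLS-to-patch attention from a pretrained Hugging Face ViT and reshapes it into a patch-grid map; the checkpoint name and test-image URL are illustrative assumptions:

```python
# Minimal sketch: extract a CLS-to-patch attention map from a pretrained ViT.
# The checkpoint and image URL below are illustrative, not from the tutorial.
import torch
import requests
from PIL import Image
from transformers import ViTImageProcessor, ViTModel

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = ViTModel.from_pretrained("google/vit-base-patch16-224")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # any test image
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# outputs.attentions: one (batch, heads, tokens, tokens) tensor per layer.
attn = outputs.attentions[-1].mean(dim=1)      # average heads -> (1, 197, 197)
cls_to_patches = attn[0, 0, 1:]                # CLS row, minus the CLS column
side = int(cls_to_patches.numel() ** 0.5)      # 14 patches per side at 224/16
attn_map = cls_to_patches.reshape(side, side)  # coarse saliency over patches
```

The tutorial materials go further than this single-layer view, but these per-layer attention tensors are the usual starting point.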
☆196 · Updated 2 years ago
Alternatives and similar repositories for code-samples
Users interested in code-samples are comparing it to the repositories listed below.
- Effective Data Augmentation With Diffusion Models · ☆268 · Updated last year
- Official PyTorch implementation of DiffuseMix: Label-Preserving Data Augmentation with Diffusion Models (CVPR 2024) · ☆129 · Updated 9 months ago
- Open-source implementation of "Vision Transformers Need Registers" (see the register-token sketch after this list) · ☆202 · Updated 2 months ago
- Official implementation of the "CLIP-DINOiser: Teaching CLIP a few DINO tricks" paper · ☆268 · Updated last year
- [CVPR 2024] Official implementation of GEM (Grounding Everything Module) · ☆134 · Updated 8 months ago
- Official implementation of the CrossMAE paper: Rethinking Patch Dependence for Masked Autoencoders · ☆128 · Updated 8 months ago
- Code for the paper "DiffusionInst: Diffusion Model for Instance Segmentation" (ICASSP 2024) · ☆243 · Updated 11 months ago
- [CVPR 2023 & TPAMI 2025] Hard Patches Mining for Masked Image Modeling & Bootstrap Masked Visual Modeling via Hard Patch Mining · ☆106 · Updated 8 months ago
- Official implementation of SCLIP: Rethinking Self-Attention for Dense Vision-Language Inference · ☆180 · Updated last year
- [CVPR 2023] Official implementation of the paper "Masked Autoencoders Enable Efficient Knowledge Distillers" · ☆108 · Updated 2 years ago
- 1.5−3.0× lossless training or pre-training speedup; an off-the-shelf, easy-to-implement algorithm for the efficient training of foundation models · ☆226 · Updated last year
- PyTorch implementation of "Context AutoEncoder for Self-Supervised Representation Learning" · ☆120 · Updated 2 years ago
- [ICLR 2023] Official PyTorch implementation of "What Do Self-Supervised Vision Transformers Learn?" · ☆114 · Updated last year
- PyTorch implementation adding new features to Segment-Anything; the features support batch input on the fu… · ☆166 · Updated 2 years ago
- PyTorch reimplementation of FlexiViT: One Model for All Patch Sizes · ☆65 · Updated last year
- Augmenting with Language-guided Image Augmentation (ALIA) · ☆80 · Updated 2 years ago
- [Pattern Recognition 2025] CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks · ☆456 · Updated 9 months ago
- [TPAMI 2023] Official repo for "Vision Transformer with Quadrangle Attention" · ☆227 · Updated 3 months ago
- [ICCV 2023] CLIPN for Zero-Shot OOD Detection: Teaching CLIP to Say No · ☆142 · Updated 2 years ago
- Dataset Diffusion: Diffusion-based Synthetic Data Generation for Pixel-Level Semantic Segmentation (NeurIPS 2023) · ☆127 · Updated last year
- Learning from synthetic data: code and models · ☆326 · Updated last year
- [ECCV 2022] What to Hide from Your Students: Attention-Guided Masked Image Modeling · ☆74 · Updated last year
- Official implementation of "Interpreting CLIP's Image Representation via Text-Based Decomposition" · ☆234 · Updated 6 months ago
- [NeurIPS 2023] DropPos: Pre-Training Vision Transformers by Reconstructing Dropped Positions · ☆62 · Updated last year
- ImageNet-1K data download and processing for use as a dataset · ☆126 · Updated 2 years ago
- Code for "Training on Thin Air: Improve Image Classification with Generated Data" · ☆48 · Updated 2 years ago
- Connecting Segment-Anything's output masks with the CLIP model; Awesome-Segment-Anything-Works · ☆202 · Updated last year
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet · ☆223 · Updated 3 years ago
- Code for the paper "Hyperbolic Image-Text Representations" (Desai et al., ICML 2023) · ☆191 · Updated 2 years ago
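For the "Vision Transformers Need Registers" entry above, the mechanism itself is small: a few extra learnable tokens are appended to the patch sequence before the encoder and simply discarded at the output, giving attention somewhere to park global information instead of hijacking patch tokens. A minimal, hypothetical PyTorch sketch (the class, dimensions, and layer choices are illustrative, not the linked implementation):

```python
# Hypothetical sketch of register tokens: learnable tokens appended to the
# patch sequence and dropped at the output. Dimensions are illustrative.
import torch
import torch.nn as nn

class ViTBlocksWithRegisters(nn.Module):
    def __init__(self, dim: int = 768, num_registers: int = 4, depth: int = 2):
        super().__init__()
        self.registers = nn.Parameter(torch.zeros(1, num_registers, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=12, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.num_registers = num_registers

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        b = patch_tokens.shape[0]
        regs = self.registers.expand(b, -1, -1)      # one copy per batch item
        x = torch.cat([patch_tokens, regs], dim=1)   # append register tokens
        x = self.encoder(x)
        return x[:, : -self.num_registers]           # drop registers at output

tokens = torch.randn(2, 196, 768)                    # 14x14 patches, ViT-B width
out = ViTBlocksWithRegisters()(tokens)               # -> (2, 196, 768)
```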