all-things-vits / code-samples
Holds code for our CVPR'23 tutorial: All Things ViTs: Understanding and Interpreting Attention in Vision.
☆190 · Updated last year
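The tutorial centers on inspecting and interpreting attention in vision transformers. As a rough illustration of the kind of probing it covers (a sketch, not code from the repository), the snippet below pulls per-layer attention maps from a pretrained ViT via Hugging Face `transformers`; the checkpoint name, the input image path, and the average-over-heads reduction are illustrative assumptions.

```python
# Minimal sketch: extract attention maps from a pretrained ViT
# using Hugging Face transformers (illustrative, not the tutorial's code).
import torch
from PIL import Image
from transformers import ViTImageProcessor, ViTModel

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = ViTModel.from_pretrained("google/vit-base-patch16-224")
model.eval()

image = Image.open("example.jpg").convert("RGB")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# outputs.attentions: one tensor per layer, shape (batch, heads, tokens, tokens).
last_attn = outputs.attentions[-1].mean(dim=1)  # average over heads
cls_to_patches = last_attn[0, 0, 1:]            # CLS-token attention to the 196 patch tokens
attn_map = cls_to_patches.reshape(14, 14)       # 224 / 16 = 14 patches per side
print(attn_map.shape)                           # torch.Size([14, 14])
```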
Alternatives and similar repositories for code-samples
Users interested in code-samples are comparing it to the repositories listed below:
- Open source implementation of "Vision Transformers Need Registers" ☆176 · Updated last month
- Official PyTorch implementation of DiffuseMix: Label-Preserving Data Augmentation with Diffusion Models (CVPR 2024) ☆113 · Updated 2 months ago
- Official implementation of the 'CLIP-DINOiser: Teaching CLIP a few DINO tricks' paper. ☆246 · Updated 6 months ago
- [CVPR24] Official Implementation of GEM (Grounding Everything Module) ☆121 · Updated last month
- ☆201 · Updated last year
- The official repo for [TPAMI'23] "Vision Transformer with Quadrangle Attention" ☆211 · Updated last year
- Official Implementation of the CrossMAE paper: Rethinking Patch Dependence for Masked Autoencoders ☆109 · Updated last month
- 1.5−3.0× lossless training or pre-training speedup. An off-the-shelf, easy-to-implement algorithm for the efficient training of foundatio… ☆221 · Updated 8 months ago
- Official implementation of SCLIP: Rethinking Self-Attention for Dense Vision-Language Inference ☆156 · Updated 7 months ago
- A curated list of awesome self-supervised learning methods in videos ☆138 · Updated last week
- (ICLR 2023) Official PyTorch implementation of "What Do Self-Supervised Vision Transformers Learn?" ☆109 · Updated last year
- Learning from synthetic data - code and models ☆315 · Updated last year
- Official implementation and data release of the paper "Visual Prompting via Image Inpainting". ☆310 · Updated last year
- Effective Data Augmentation With Diffusion Models ☆244 · Updated 10 months ago
- [Survey] Masked Modeling for Self-supervised Representation Learning on Vision and Beyond (https://arxiv.org/abs/2401.00897) ☆328 · Updated 3 weeks ago
- This is the official code release for our work, Denoising Vision Transformers. ☆362 · Updated 6 months ago
- Connecting segment-anything's output masks with the CLIP model; Awesome-Segment-Anything-Works ☆193 · Updated 7 months ago
- Object Recognition as Next Token Prediction (CVPR 2024 Highlight) ☆177 · Updated 2 weeks ago
- [NeurIPS'23] DropPos: Pre-Training Vision Transformers by Reconstructing Dropped Positions ☆60 · Updated last year
- PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models" ☆198 · Updated 4 months ago
- CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks ☆420 · Updated 2 months ago
- Official PyTorch implementation of "Extract Free Dense Labels from CLIP" (ECCV 22 Oral) ☆441 · Updated 2 years ago
- Code for the paper "Hyperbolic Image-Text Representations", Desai et al., ICML 2023 ☆164 · Updated last year
- Exploring Visual Prompts for Adapting Large-Scale Models ☆279 · Updated 2 years ago
- [CVPR 2023] Official repository of Generative Semantic Segmentation ☆213 · Updated last year
- 1-shot image segmentation using Stable Diffusion ☆138 · Updated last year
- [CVPR 2023] CLIP is Also an Efficient Segmenter: A Text-Driven Approach for Weakly Supervised Semantic Segmentation ☆193 · Updated 8 months ago
- Official implementation of "Interpreting CLIP's Image Representation via Text-Based Decomposition" ☆211 · Updated 5 months ago
- ☆65 · Updated 7 months ago
- Code for Scaling Language-Free Visual Representation Learning (WebSSL) ☆244 · Updated 3 weeks ago