samiraabnar / attention_flow
☆237 · Updated 3 years ago
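attention_flow accompanies Abnar & Zuidema's "Quantifying Attention Flow in Transformers" (ACL 2020), which aggregates raw attention weights across layers via attention rollout and attention flow. Below is a minimal NumPy sketch of attention rollout for orientation; the function name and expected input layout are assumptions for illustration, not the repository's actual API.

```python
import numpy as np

def attention_rollout(attentions):
    """Aggregate per-layer attention maps with attention rollout
    (Abnar & Zuidema, 2020).

    attentions: list of per-layer arrays of shape (num_heads, seq_len, seq_len),
    already softmax-normalised. Returns a (seq_len, seq_len) matrix of
    cross-layer token-to-token attribution.
    """
    rollout = np.eye(attentions[0].shape[-1])
    for layer_attention in attentions:
        # Average over heads, mix in the identity to account for the
        # residual connection, and re-normalise each row to a distribution.
        a = layer_attention.mean(axis=0)
        a = 0.5 * a + 0.5 * np.eye(a.shape[-1])
        a = a / a.sum(axis=-1, keepdims=True)
        rollout = a @ rollout  # compose with the layers below
    return rollout
```

Attention flow, the paper's second method, instead treats the same layered graph as a flow network and solves a max-flow problem; it is not sketched here.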
Alternatives and similar repositories for attention_flow:
Users interested in attention_flow are comparing it to the libraries listed below.
- ☆154 · Updated 2 years ago
- Explainability for Vision Transformers ☆926 · Updated 3 years ago
- [ICCV 2021 Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decode… ☆839 · Updated last year
- EsViT: Efficient self-supervised Vision Transformers ☆410 · Updated last year
- Concept Bottleneck Models, ICML 2020 ☆195 · Updated 2 years ago
- Benchmark your model on out-of-distribution datasets with carefully collected human comparison data (NeurIPS 2021 Oral) ☆343 · Updated 7 months ago
- Self-supervised vIsion Transformer (SiT) ☆327 · Updated 2 years ago
- Multimodal Masked Autoencoders (M3AE): A JAX/Flax Implementation ☆102 · Updated last month
- Full-gradient saliency maps ☆208 · Updated 2 years ago
- ☆117 · Updated 2 years ago
- Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning ☆150 · Updated 2 years ago
- Visual Language Transformer Interpreter - An interactive visualization tool for interpreting vision-language transformers ☆90 · Updated last year
- [NeurIPS 2021] Official codes for "Efficient Training of Visual Transformers with Small Datasets". ☆141 · Updated 2 months ago
- Official code for ICML 2022: Mitigating Neural Network Overconfidence with Logit Normalization ☆147 · Updated 2 years ago
- [CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize … ☆1,858 · Updated last year
- A Domain-Agnostic Benchmark for Self-Supervised Learning ☆107 · Updated last year
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time (see the weight-averaging sketch after this list) ☆452 · Updated 8 months ago
- Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141) ☆457 · Updated 2 years ago
- (ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?" ☆812 · Updated 2 years ago
- [T-PAMI] A curated list of self-supervised multimodal learning resources. ☆248 · Updated 7 months ago
- Experiments with supervised contrastive learning methods with different loss functions ☆219 · Updated 2 years ago
- Probing the representations of Vision Transformers. ☆321 · Updated 2 years ago
- Compare neural networks by their feature similarity (see the feature-similarity sketch after this list) ☆354 · Updated last year
- Implementation of Visual Transformer for Small-size Datasets ☆120 · Updated 3 years ago
- PyTorch implementation of SimCLR: supports multi-GPU training and closely reproduces results ☆202 · Updated 10 months ago
- NeurIPS 2020, Debiased Contrastive Learning ☆283 · Updated 2 years ago
- MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts (ICLR 2022) ☆109 · Updated 2 years ago
- B-cos Networks: Alignment is All we Need for Interpretability ☆108 · Updated last year
- Implementation of popular SOTA self-supervised learning algorithms as Fastai Callbacks. ☆321 · Updated last year
- A simple-to-use PyTorch wrapper for contrastive self-supervised learning on any neural network ☆133 · Updated 4 years ago
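For the "Model soups" entry above, the core idea fits in a few lines: fine-tune several copies of the same architecture, average their weights, and run a single model at inference time. The sketch below shows a uniform soup; `uniform_soup` is a hypothetical helper, not the official repository's API, and the paper's greedy variant (only adding a model if held-out accuracy improves) is omitted.

```python
import copy
import torch

def uniform_soup(models):
    """Average the weights of fine-tuned models that share an architecture.

    models: list of torch.nn.Module instances with identical state_dict keys.
    Returns a single model whose parameters are the element-wise mean.
    """
    state_dicts = [m.state_dict() for m in models]
    averaged = {}
    for key in state_dicts[0]:
        stacked = torch.stack([sd[key].float() for sd in state_dicts])
        # Cast back so integer buffers (e.g. BatchNorm counters) keep their dtype.
        averaged[key] = stacked.mean(dim=0).to(state_dicts[0][key].dtype)
    soup = copy.deepcopy(models[0])
    soup.load_state_dict(averaged)
    return soup
```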
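Feature-similarity comparisons of the kind referenced in the list are often computed with linear centered kernel alignment (CKA); whether that particular repository uses CKA or a different measure is an assumption here. A minimal NumPy sketch of linear CKA:

```python
import numpy as np

def linear_cka(x, y):
    """Linear CKA between two activation matrices recorded on the same inputs.

    x: (n_examples, d1), y: (n_examples, d2). Returns a scalar in [0, 1];
    values near 1 mean the representations agree up to rotation and scaling.
    """
    x = x - x.mean(axis=0, keepdims=True)  # centre each feature dimension
    y = y - y.mean(axis=0, keepdims=True)
    cross = np.linalg.norm(y.T @ x, ord="fro") ** 2
    return cross / (
        np.linalg.norm(x.T @ x, ord="fro") * np.linalg.norm(y.T @ y, ord="fro")
    )
```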