HubHop / vit-attention-benchmark
Benchmarking attention mechanisms in Vision Transformers.
☆18 · Updated 2 years ago
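
As a rough illustration of what benchmarking attention in Vision Transformers typically involves (this sketch is not taken from the HubHop/vit-attention-benchmark repository; the timm dependency, model names, batch size, and timing loop are assumptions for illustration only):

```python
# Minimal sketch of a ViT attention/forward-pass latency benchmark.
# Assumptions: PyTorch and the timm package are installed; model names,
# input resolution, and run counts are illustrative, not from the repo.
import time
import torch
from timm import create_model

def benchmark(model_name: str, batch_size: int = 32, runs: int = 50) -> float:
    """Return the average forward-pass latency in milliseconds (illustrative)."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = create_model(model_name, pretrained=False).to(device).eval()
    x = torch.randn(batch_size, 3, 224, 224, device=device)

    with torch.no_grad():
        for _ in range(10):          # warm-up iterations
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs * 1e3

if __name__ == "__main__":
    # Hypothetical model choices; any timm ViT variant could be compared here.
    for name in ["vit_tiny_patch16_224", "deit_small_patch16_224"]:
        print(f"{name}: {benchmark(name):.2f} ms / batch")
```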
Alternatives and similar repositories for vit-attention-benchmark
Users that are interested in vit-attention-benchmark are comparing it to the libraries listed below
- Paper List for In-context Learning ☆20 · Updated 2 years ago
- [ECCV 2024] This is the official implementation of "Stitched ViTs are Flexible Vision Backbones". ☆27 · Updated last year
- (CVPR 2022) Automated Progressive Learning for Efficient Training of Vision Transformers ☆25 · Updated 4 months ago
- [CVPR 2024] DiffAgent: Fast and Accurate Text-to-Image API Selection with Large Language Model ☆17 · Updated last year
- Teach-DETR: Better Training DETR with Teachers ☆31 · Updated last year
- [CVPR 2023] This is an official implementation of the paper "DETRs with Hybrid Matching". ☆14 · Updated 2 years ago
- ☆57 · Updated 4 years ago
- [ICLR 2022] "As-ViT: Auto-scaling Vision Transformers without Training" by Wuyang Chen, Wei Huang, Xianzhi Du, Xiaodan Song, Zhangyang Wa… ☆76 · Updated 3 years ago
- [WACV 2025 Oral] DeepMIM: Deep Supervision for Masked Image Modeling ☆53 · Updated 2 months ago
- Contrastive Object-level Pre-training with Spatial Noise Curriculum Learning ☆20 · Updated 3 years ago
- Anytime Dense Prediction with Confidence Adaptivity (ICLR 2022) ☆50 · Updated 10 months ago
- This repo is the official MegEngine implementation of the ECCV 2022 paper: Efficient One Pass Self-distillation with Zipf's Label Smoothin… ☆26 · Updated 2 years ago
- Official PyTorch implementation of Super Vision Transformer (IJCV) ☆43 · Updated last year
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin… ☆40 · Updated 2 years ago
- i-mae PyTorch Repo ☆20 · Updated last year
- [NeurIPS'22] What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective ☆37 · Updated 2 years ago
- Code base for vision transformers ☆36 · Updated 3 years ago
- Channel Equilibrium Networks for Learning Deep Representation, ICML 2020 ☆22 · Updated 4 years ago
- Large-batch Optimization for Dense Visual Predictions (NeurIPS 2022) ☆57 · Updated 2 years ago
- Official PyTorch implementation for Distilling Image Classifiers in Object Detection (NeurIPS 2021) ☆31 · Updated 3 years ago
- Official code for "Dynamic Token Normalization Improves Vision Transformer", ICLR 2022. ☆28 · Updated 3 years ago
- This is the official PyTorch implementation for "Mesa: A Memory-saving Training Framework for Transformers". ☆120 · Updated 3 years ago
- Beyond Masking: Demystifying Token-Based Pre-Training for Vision Transformers ☆26 · Updated 3 years ago
- ☆16 · Updated 2 years ago
- Code of our NeurIPS 2020 paper "Auto Learning Attention", coming soon ☆22 · Updated 4 years ago
- Accelerating Vision-Language Pretraining with Free Language Modeling (CVPR 2023) ☆32 · Updated 2 years ago
- A PyTorch implementation of the ICCV 2021 workshop paper SimDis: Simple Distillation Baselines for Improving Small Self-supervised Models ☆14 · Updated 4 years ago
- Code and models for the paper Glance-and-Gaze Vision Transformer ☆28 · Updated 4 years ago
- ☆38 · Updated last year
- Code for Point-Level Region Contrast (https://arxiv.org/abs/2202.04639) ☆35 · Updated 2 years ago