HubHop / vit-attention-benchmark
Benchmarking Attention Mechanism in Vision Transformers.
☆17 · Updated 2 years ago
Alternatives and similar repositories for vit-attention-benchmark:
Users interested in vit-attention-benchmark are comparing it to the libraries listed below.
- Paper List for In-context Learning ☆20 · Updated 2 years ago
- [CVPR 2024] DiffAgent: Fast and Accurate Text-to-Image API Selection with Large Language Model ☆17 · Updated 11 months ago
- [ECCV 2024] This is the official implementation of "Stitched ViTs are Flexible Vision Backbones". ☆27 · Updated last year
- [CVPR 2023] This is an official implementation of the paper "DETRs with Hybrid Matching". ☆14 · Updated 2 years ago
- Accelerating Vision-Language Pretraining with Free Language Modeling (CVPR 2023) ☆32 · Updated last year
- (CVPR 2022) Automated Progressive Learning for Efficient Training of Vision Transformers ☆25 · Updated 3 weeks ago
- Official code for the paper "TaCA: Upgrading Your Visual Foundation Model with Task-agnostic Compatible Adapter". ☆16 · Updated last year
- [arXiv 2022] Revitalize Region Feature for Democratizing Video-Language Pre-training ☆21 · Updated 3 years ago
- Official PyTorch implementation of Distilling Image Classifiers in Object Detection (NeurIPS 2021) ☆31 · Updated 3 years ago
- SMCA replication ☆21 · Updated 3 years ago
- Anytime Dense Prediction with Confidence Adaptivity (ICLR 2022) ☆50 · Updated 7 months ago
- Official code for "Dynamic Token Normalization Improves Vision Transformer", ICLR 2022. ☆28 · Updated 2 years ago
- Support for the Large Vocabulary Instance Segmentation (LVIS) dataset in mmdetection ☆16 · Updated 4 years ago
- [NeurIPS'22] What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective ☆36 · Updated 2 years ago
- Contrastive Object-level Pre-training with Spatial Noise Curriculum Learning ☆20 · Updated 3 years ago
- A PyTorch implementation of the ICCV 2021 workshop paper "SimDis: Simple Distillation Baselines for Improving Small Self-supervised Models" ☆14 · Updated 3 years ago
- ☆43 · Updated last year
- BESA, a differentiable weight pruning technique for large language models ☆14 · Updated last year
- [CVPR 2022 Oral] The official code for "TransRank: Self-supervised Video Representation Learning via Ranking-based Transformation Recognit…" ☆18 · Updated 2 years ago
- Code and models for the paper "Glance-and-Gaze Vision Transformer" ☆28 · Updated 3 years ago
- Lightweight Transformer for Multi-modal Tasks ☆15 · Updated 2 years ago
- Code base for vision transformers ☆36 · Updated 3 years ago
- [NeurIPS-24] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect…" ☆35 · Updated 9 months ago
- Teach-DETR: Better Training DETR with Teachers ☆31 · Updated last year
- i-MAE PyTorch repo ☆20 · Updated 11 months ago
- SOIT: Segmenting Objects with Instance-Aware Transformers ☆14 · Updated 2 years ago
- Channel Equilibrium Networks for Learning Deep Representation, ICML 2020 ☆22 · Updated 4 years ago
- ☆52 · Updated 2 years ago
- ☆19 · Updated last year
- Beyond Masking: Demystifying Token-Based Pre-Training for Vision Transformers ☆26 · Updated 2 years ago