HubHop / vit-attention-benchmark
Benchmarking Attention Mechanisms in Vision Transformers.
☆18 · Updated 2 years ago
Alternatives and similar repositories for vit-attention-benchmark
Users interested in vit-attention-benchmark are comparing it to the libraries listed below.
- Paper List for In-context Learning ☆20 · Updated 2 years ago
- [ECCV 2024] This is the official implementation of "Stitched ViTs are Flexible Vision Backbones". ☆27 · Updated last year
- (CVPR 2022) Automated Progressive Learning for Efficient Training of Vision Transformers ☆25 · Updated 3 months ago
- Anytime Dense Prediction with Confidence Adaptivity (ICLR 2022) ☆50 · Updated 9 months ago
- [CVPR 2024] DiffAgent: Fast and Accurate Text-to-Image API Selection with Large Language Model ☆17 · Updated last year
- [CVPR 2023] This is an official implementation of the paper "DETRs with Hybrid Matching". ☆14 · Updated 2 years ago
- ☆38 · Updated last year
- Accelerating Vision-Language Pretraining with Free Language Modeling (CVPR 2023) ☆32 · Updated 2 years ago
- BESA is a differentiable weight pruning technique for large language models. ☆16 · Updated last year
- Beyond Masking: Demystifying Token-Based Pre-Training for Vision Transformers ☆26 · Updated 3 years ago
- Code base for vision transformers ☆36 · Updated 3 years ago
- Official code for the paper "TaCA: Upgrading Your Visual Foundation Model with Task-agnostic Compatible Adapter". ☆16 · Updated last year
- [CVPR 2022 Oral] The official code for "TransRank: Self-supervised Video Representation Learning via Ranking-based Transformation Recognition" ☆18 · Updated 2 years ago
- Official PyTorch implementation for Distilling Image Classifiers in Object Detection (NeurIPS 2021) ☆31 · Updated 3 years ago
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning ☆40 · Updated 2 years ago
- Lightweight Transformer for Multi-modal Tasks ☆16 · Updated 2 years ago
- LV-BERT: Exploiting Layer Variety for BERT (Findings of ACL 2021) ☆18 · Updated 2 years ago
- Official code for "Dynamic Token Normalization Improves Vision Transformer", ICLR 2022. ☆28 · Updated 3 years ago
- Teach-DETR: Better Training DETR with Teachers ☆31 · Updated last year
- [CVPR 2022] "The Principle of Diversity: Training Stronger Vision Transformers Calls for Reducing All Levels of Redundancy" by Tianlong Chen et al. ☆25 · Updated 3 years ago
- Contrastive Object-level Pre-training with Spatial Noise Curriculum Learning ☆20 · Updated 3 years ago
- [arXiv 2022] Revitalize Region Feature for Democratizing Video-Language Pre-training ☆21 · Updated 3 years ago
- SOIT: Segmenting Objects with Instance-Aware Transformers ☆14 · Updated 2 years ago
- ☆57 · Updated 3 years ago
- Code and models for the paper Glance-and-Gaze Vision Transformer ☆28 · Updated 3 years ago
- Code for Point-Level Region Contrast (https://arxiv.org/abs/2202.04639) ☆35 · Updated 2 years ago
- Official implementation of Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer ☆72 · Updated 2 years ago
- [NeurIPS 2024] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effective for LMMs" ☆35 · Updated 11 months ago
- A Simple Framework for CV Pre-training Models (SOCO, VirTex, BEiT) ☆15 · Updated 3 years ago
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation", ICML 2023. ☆33 · Updated last year