karttikeya / minREV
A simple minimal implementation of Reversible Vision Transformers
☆126 · Updated last year
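For context, the reversible-block idea that minREV implements can be sketched in a few lines. This is a toy NumPy illustration, not the repository's actual code: `f` and `g` are hypothetical stand-ins for the attention and MLP sub-blocks of a ViT layer, and the point is that the inputs can be reconstructed exactly from the outputs, so intermediate activations need not be stored for backpropagation.

```python
import numpy as np

def rev_block_forward(x1, x2, f, g):
    """Forward pass of a reversible (RevNet-style) two-stream block."""
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def rev_block_inverse(y1, y2, f, g):
    """Exactly recover the block's inputs from its outputs."""
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy deterministic functions standing in for attention / MLP sub-blocks.
    f, g = np.tanh, np.sin
    x1, x2 = rng.standard_normal(8), rng.standard_normal(8)
    y1, y2 = rev_block_forward(x1, x2, f, g)
    r1, r2 = rev_block_inverse(y1, y2, f, g)
    # Reconstruction is exact up to floating-point error.
    assert np.allclose(x1, r1) and np.allclose(x2, r2)
```

Because the inverse is exact, a reversible network can recompute activations on the fly during the backward pass, trading a little extra compute for activation memory that no longer grows with depth.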
Alternatives and similar repositories for minREV
Users interested in minREV are comparing it to the libraries listed below.
- Code and models for the paper "The effectiveness of MAE pre-pretraining for billion-scale pretraining" (https://arxiv.org/abs/2303.13496) ☆91 · Updated 7 months ago
- A compilation of network architectures for vision and other domains that do not use the self-attention mechanism ☆81 · Updated 2 years ago
- An official code release of the paper "RGB no more: Minimally Decoded JPEG Vision Transformers" ☆56 · Updated 2 years ago
- ☆54 · Updated 2 years ago
- [ICLR 2023 Spotlight] GPViT: A High Resolution Non-Hierarchical Vision Transformer with Group Propagation ☆101 · Updated 2 years ago
- ☆186 · Updated last year
- A Close Look at Spatial Modeling: From Attention to Convolution ☆91 · Updated 2 years ago
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in PyTorch ☆102 · Updated 2 years ago
- [CVPR 2022 Oral] Official JAX implementation of "Learned Queries for Efficient Local Attention" ☆118 · Updated 3 years ago
- Code for the experiments in "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated last year
- A better PyTorch implementation of image local attention which reduces GPU memory by an order of magnitude ☆141 · Updated 3 years ago
- [ICLR 2022] "Anti-Oversmoothing in Deep Vision Transformers via the Fourier Domain Analysis: From Theory to Practice" by Peihao Wang, Wen… ☆81 · Updated last year
- [ICLR 2022] "As-ViT: Auto-scaling Vision Transformers without Training" by Wuyang Chen, Wei Huang, Xianzhi Du, Xiaodan Song, Zhangyang Wa… ☆76 · Updated 3 years ago
- [WACV 2025 Oral] DeepMIM: Deep Supervision for Masked Image Modeling ☆54 · Updated 6 months ago
- This is an official PyTorch/GPU implementation of SupMAE. ☆79 · Updated 3 years ago
- This is the official PyTorch implementation of "Mesa: A Memory-saving Training Framework for Transformers". ☆121 · Updated 3 years ago
- PyTorch implementation of R-MAE (https://arxiv.org/abs/2306.05411) ☆113 · Updated 2 years ago
- Implementation of Uniformer, a simple attention and 3d convolutional net that achieved SOTA in a number of video classification tasks, de… ☆102 · Updated 3 years ago
- Transformers w/o Attention, based fully on MLPs ☆95 · Updated last year
- Official code for the ICCV 2023 paper "Convolutional Networks with Oriented 1D Kernels" ☆47 · Updated last year
- Open-source release of research work published on arXiv: https://arxiv.org/abs/2106.02689 ☆53 · Updated 3 years ago
- [NeurIPS 2022 Spotlight] This is the official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity" ☆74 · Updated 3 years ago
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆216 · Updated 2 years ago
- Code release of the research paper "Exploring Long-Sequence Masked Autoencoders" ☆100 · Updated 3 years ago
- Code repository for the ICLR 2022 paper "FlexConv: Continuous Kernel Convolutions With Differentiable Kernel Sizes" https://openreview.ne… ☆116 · Updated 2 years ago
- [ICME 2022] Code for the paper "SimViT: Exploring a simple vision transformer with sliding windows" ☆68 · Updated 3 years ago
- [CVPR 2025] Official PyTorch implementation of MaskSub, "Masking meets Supervision: A Strong Learning Alliance" ☆45 · Updated 7 months ago
- Official PyTorch implementation of A-ViT: Adaptive Tokens for Efficient Vision Transformer (CVPR 2022) ☆164 · Updated 3 years ago
- PyTorch reimplementation of FlexiViT: One Model for All Patch Sizes ☆63 · Updated last year
- A Contrastive Learning Boost from Intermediate Pre-Trained Representations ☆42 · Updated last year