lim142857 / Sparsifiner
Demo code for CVPR2023 paper "Sparsifiner: Learning Sparse Instance-Dependent Attention for Efficient Vision Transformers"
☆15 · Updated 2 years ago
Alternatives and similar repositories for Sparsifiner
Users interested in Sparsifiner are comparing it to the repositories listed below.
- [NeurIPS 2023] Lightweight Vision Transformer with Bidirectional Interaction ☆26 · Updated 2 years ago
- ☆152 · Updated last year
- [ICLR 2022] "Anti-Oversmoothing in Deep Vision Transformers via the Fourier Domain Analysis: From Theory to Practice" by Peihao Wang, Wen… ☆81 · Updated last year
- ☆68 · Updated last year
- ☆26 · Updated last year
- [AAAI 2022] Official PyTorch implementation of "Less is More: Pay Less Attention in Vision Transformers" ☆97 · Updated 3 years ago
- [ICLR 2025] Official implementation of Autoregressive Pretraining with Mamba in Vision ☆86 · Updated 5 months ago
- Unofficial implementation of MLP-Mixer, gMLP, resMLP, Vision Permutator, S2MLP, S2MLPv2, RaftMLP, HireMLP, ConvMLP, AS-MLP, SparseMLP, Co… ☆169 · Updated 3 years ago
- Open-source research work published on arXiv: https://arxiv.org/abs/2106.02689 ☆52 · Updated 3 years ago
- Trainable Highly-expressive Activation Functions (ECCV 2024) ☆38 · Updated 8 months ago
- [AAAI 2023 Oral] PyTorch implementation of "CF-ViT: A General Coarse-to-Fine Method for Vision Transformer" ☆106 · Updated 2 years ago
- ☆27 · Updated 3 years ago
- Official code for the paper "Token Summarisation for Efficient Vision Transformers via Graph-based Token Propagation" ☆31 · Updated last year
- State Space Models ☆70 · Updated last year
- [CVPR 2024] VkD: Improving Knowledge Distillation using Orthogonal Projections ☆56 · Updated last year
- Official repository of Slide-Transformer (CVPR 2023) ☆172 · Updated last year
- [NeurIPS 2024 Spotlight] Official implementation of MambaTree: Tree Topology is All You Need in State Space Model ☆102 · Updated last year
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference ☆30 · Updated last year
- ☆68 · Updated last year
- ☆47 · Updated last year
- Official implementation of Mamba-ND: Selective State Space Modeling for Multi-Dimensional Data ☆64 · Updated last year
- Implementation of HAT: https://arxiv.org/pdf/2204.00993 ☆51 · Updated last year
- PyTorch reimplementation of FlexiViT: One Model for All Patch Sizes ☆62 · Updated last year
- Log-Polar Space Convolution for Convolutional Neural Networks ☆12 · Updated 2 years ago
- PyTorch implementation of FFN: "Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains" (NeurIPS 2020) ☆28 · Updated 2 years ago
- Transformers w/o Attention, based fully on MLPs ☆95 · Updated last year
- ☆85 · Updated 2 years ago
- Multi-head Recurrent Layer Attention for Vision Network ☆19 · Updated 2 years ago
- ☆24 · Updated last year
- ☆75 · Updated 8 months ago