xyltt / Linear-Transformer
Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention
☆23Updated 4 years ago
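The repository implements the linear attention of Katharopoulos et al. (ICML 2020), which replaces softmax attention's O(N²) score matrix with an O(N) kernel formulation: the similarity is factored through a feature map φ(x) = elu(x) + 1, so the key-value products can be summed once and reused for every query. A minimal non-causal sketch in PyTorch (function and tensor names are illustrative, not the repo's API):

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """Non-causal linear attention (Katharopoulos et al., 2020).

    Avoids the O(N^2) score matrix by factoring the similarity through a
    feature map phi: out_i = phi(q_i) @ (sum_j phi(k_j) v_j^T), normalized
    by phi(q_i) @ sum_j phi(k_j).
    q, k, v: (batch, seq_len, dim).
    """
    q, k = F.elu(q) + 1, F.elu(k) + 1            # phi(x) = elu(x) + 1, as in the paper
    kv = torch.einsum("bnd,bne->bde", k, v)      # summed key-value outer products, O(N d^2)
    z = torch.einsum("bnd,bd->bn", q, k.sum(dim=1)) + eps  # per-query normalizer
    return torch.einsum("bnd,bde->bne", q, kv) / z.unsqueeze(-1)

# usage: same shapes as standard single-head attention
q = k = v = torch.randn(2, 128, 64)
out = linear_attention(q, k, v)                  # -> (2, 128, 64)
```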
Alternatives and similar repositories for Linear-Transformer
Users interested in Linear-Transformer are comparing it to the libraries listed below.
- code for Explicit Sparse Transformer☆61Updated 2 years ago
- [ICLR 2022] Official implementation of cosformer-attention in cosFormer: Rethinking Softmax in Attention☆196Updated 2 years ago
- Implementation of AAAI 2022 Paper: Go wider instead of deeper☆32Updated 2 years ago
- ☆64Updated 4 years ago
- ☆33Updated 4 years ago
- Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time"☆369Updated last year
- Recent Advances in MLP-based Models (MLP is all you need!)☆116Updated 2 years ago
- An implementation of the efficient attention module.☆321Updated 4 years ago
- ☆197Updated last year
- PyTorch implementation of Pay Attention to MLPs☆40Updated 4 years ago
- Reproducing the Linear Multihead Attention introduced in the Linformer paper (Linformer: Self-Attention with Linear Complexity)☆75Updated 5 years ago
- Transformer model based on the Gated Attention Unit (preview version)☆98Updated 2 years ago
- [ICLR 2022] "Anti-Oversmoothing in Deep Vision Transformers via the Fourier Domain Analysis: From Theory to Practice" by Peihao Wang, Wen…☆81Updated last year
- ☆27Updated 3 years ago
- BM-NAS: Bilevel Multimodal Neural Architecture Search (AAAI 2022 Oral)☆19Updated 2 years ago
- Unofficial Implementation of MLP-Mixer, gMLP, resMLP, Vision Permutator, S2MLP, S2MLPv2, RaftMLP, HireMLP, ConvMLP, AS-MLP, SparseMLP, Co…☆169Updated 3 years ago
- [ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Li…☆54Updated last year
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691)☆125Updated last year
- Source code for our AAAI'22 paper "From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression"☆25Updated 3 years ago
- Mixture of Attention Heads☆49Updated 2 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling☆86Updated 2 years ago
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using PyTorch☆70Updated 5 years ago
- Sparse Attention with Linear Units☆19Updated 4 years ago
- Learning Efficient Vision Transformers via Fine-Grained Manifold Distillation. NeurIPS 2022.☆32Updated 2 years ago
- iFormer: Inception Transformer☆246Updated 2 years ago
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration (CVPR 2021)☆64Updated 4 years ago
- CVPR 2022, BatchFormer: Learning to Explore Sample Relationships for Robust Representation Learning, https://arxiv.org/abs/2203.01522☆251Updated 2 years ago
- Multi-head attention in PyTorch☆153Updated 6 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model☆60Updated 5 years ago
- ☆151Updated last year