rishikksh20 / rectified-linear-attention
Sparse Attention with Linear Units
☆20 Updated 4 years ago
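The repository implements ReLA from "Sparse Attention with Linear Units" (Zhang et al., EMNLP 2021), whose core move is to replace attention's softmax with a ReLU, yielding sparse, non-negative attention weights. Below is a minimal PyTorch sketch of that idea, not this repository's actual code; the tensor layout and the RMS-style rescaling used here to stabilize the un-normalized weights are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def rela_attention(q, k, v, eps=1e-6):
    # q, k, v: (batch, heads, seq, dim) -- layout is an assumption
    d = q.size(-1)
    scores = torch.einsum("bhqd,bhkd->bhqk", q, k) / d ** 0.5
    # ReLU instead of softmax: sparse, non-negative, rows need not sum to 1
    weights = F.relu(scores)
    out = torch.einsum("bhqk,bhkd->bhqd", weights, v)
    # RMS-style rescaling (illustrative) to recover the magnitude control
    # that softmax's normalization would otherwise provide
    return out / out.pow(2).mean(dim=-1, keepdim=True).add(eps).sqrt()

q = k = v = torch.randn(2, 4, 16, 32)
print(rela_attention(q, k, v).shape)  # torch.Size([2, 4, 16, 32])
```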
Alternatives and similar repositories for rectified-linear-attention
Users interested in rectified-linear-attention are comparing it to the repositories listed below.
- PyTorch implementation of Pay Attention to MLPs ☆40 Updated 4 years ago
- [ICLR 2022] Official implementation of cosformer-attention in cosFormer: Rethinking Softmax in Attention ☆198 Updated 3 years ago
- ☆33 Updated 4 years ago
- [EMNLP 2022] Official implementation of Transnormer in our EMNLP 2022 paper - The Devil in Linear Transformer ☆64 Updated 2 years ago
- ☆14 Updated 3 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆87 Updated 2 years ago
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models ☆21 Updated 3 years ago
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using PyTorch ☆71 Updated 5 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆59 Updated 5 years ago
- [NeurIPS 2022 Spotlight] This is the official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity" ☆73 Updated 3 years ago
- Unofficial PyTorch implementation of the paper "cosFormer: Rethinking Softmax In Attention" ☆44 Updated 4 years ago
- A repository for DenseSSMs ☆88 Updated last year
- Implementation of Mogrifier LSTM in PyTorch ☆34 Updated 5 years ago
- Mixture of Attention Heads ☆51 Updated 3 years ago
- A Tight-fisted Optimizer ☆50 Updated 2 years ago
- The accompanying code for "Memory-efficient Transformers via Top-k Attention" (Ankit Gupta, Guy Dar, Shaya Goodman, David Ciprut, Jonatha… ☆70 Updated 4 years ago
- Code for Explicit Sparse Transformer ☆61 Updated 2 years ago
- For the paper "Gaussian Transformer: A Lightweight Approach for Natural Language Inference" ☆28 Updated 5 years ago
- DropIT: Dropping Intermediate Tensors for Memory-Efficient DNN Training (ICLR 2023) ☆32 Updated 2 years ago
- This repository contains the code for the paper in Findings of EMNLP 2021: "EfficientBERT: Progressively Searching Multilayer Perceptron … ☆33 Updated 2 years ago
- [ICLR 2022] Code for the paper "Exploring Extreme Parameter Compression for Pre-trained Language Models" (https://arxiv.org/abs/2205.10036) ☆22 Updated 2 years ago
- Reproducing the Linear Multihead Attention introduced in the Linformer paper (Linformer: Self-Attention with Linear Complexity) ☆75 Updated 5 years ago
- [ICLR 2023] Official implementation of the Toeplitz neural network in our ICLR 2023 paper - Toeplitz Neural Network for Sequence Modeling ☆81 Updated last year
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆31 Updated last year
- ☆106 Updated last year
- Official implementation for the paper "Relational Surrogate Loss Learning", ICLR 2022 ☆37 Updated 3 years ago
- [ICML 2023] "Data Efficient Neural Scaling Law via Model Reusing" by Peihao Wang, Rameswar Panda, Zhangyang Wang ☆14 Updated 2 years ago
- A single-model, multi-scale VAE based on the Transformer ☆58 Updated 4 years ago
- Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention ☆24 Updated 5 years ago
- ☆65 Updated 5 years ago
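Several entries in this list (cosFormer, Linformer, "Transformers are RNNs") revolve around linear-complexity attention. As a hedged illustration of the shared trick, here is a minimal kernelized linear-attention sketch using the feature map phi(x) = elu(x) + 1 from "Transformers are RNNs"; the tensor layout and names are assumptions for this example, not code from any repository above.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # q, k, v: (batch, heads, seq, dim) -- layout is an assumption
    # phi(x) = elu(x) + 1 keeps features strictly positive
    phi_q = F.elu(q) + 1
    phi_k = F.elu(k) + 1
    # aggregate keys and values once, so cost is linear in sequence length
    kv = torch.einsum("bhnd,bhne->bhde", phi_k, v)
    # per-query normalizer: phi(q) dotted with the sum of all phi(k)
    z = torch.einsum("bhnd,bhd->bhn", phi_q, phi_k.sum(dim=2)) + eps
    return torch.einsum("bhnd,bhde->bhne", phi_q, kv) / z.unsqueeze(-1)

q = k = v = torch.randn(2, 4, 16, 32)
print(linear_attention(q, k, v).shape)  # torch.Size([2, 4, 16, 32])
```

The reordering (K^T V before multiplying by Q) is what drops the quadratic attention matrix; the listed repos differ mainly in which feature map or approximation they substitute for softmax.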