pkuzengqi / Skyformer
Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021)
☆60 · Updated 3 years ago
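For context, here is a minimal sketch of the idea named in the title: approximating a Gaussian-kernel attention matrix with the Nyström method so attention costs scale with the number of landmarks rather than the full sequence length. This is not the official Skyformer code; `nystrom_gaussian_attention`, `gaussian_kernel`, the uniform landmark sampling, the `num_landmarks` default, and the bandwidth choice are all illustrative assumptions, and the paper's actual construction differs in details such as landmark selection and kernel normalization.

```python
# A minimal sketch (not the official Skyformer implementation):
# Nyström approximation of Gaussian-kernel attention.
import torch

def gaussian_kernel(a, b, scale):
    # k(x, y) = exp(-||x - y||^2 / (2 * scale)); scale is an assumed bandwidth.
    d2 = torch.cdist(a, b).pow(2)
    return torch.exp(-d2 / (2 * scale))

def nystrom_gaussian_attention(q, k, v, num_landmarks=32):
    # q, k, v: (batch, seq_len, dim)
    n, d = q.shape[1], q.shape[-1]
    scale = d ** 0.5                      # temperature, analogous to 1/sqrt(d)
    idx = torch.randperm(n)[:num_landmarks]
    lk = k[:, idx]                        # landmark keys: (batch, m, dim)
    k_nm = gaussian_kernel(q, lk, scale)  # (batch, n, m)
    k_mm = gaussian_kernel(lk, lk, scale) # (batch, m, m)
    k_mn = gaussian_kernel(lk, k, scale)  # (batch, m, n)
    # Nyström: K ≈ K_nm @ pinv(K_mm) @ K_mn, so the attention output
    # K @ V is approximated without materializing the n-by-n matrix.
    return k_nm @ (torch.linalg.pinv(k_mm) @ (k_mn @ v))

q = k = v = torch.randn(2, 128, 64)
out = nystrom_gaussian_attention(q, k, v)
print(out.shape)  # torch.Size([2, 128, 64])
```

With m landmarks the cost is O(n·m) in time and memory instead of O(n²), which is the motivation shared by several of the efficient-attention repositories listed below.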
Alternatives and similar repositories for Skyformer:
Users who are interested in Skyformer are comparing it to the libraries listed below.
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models (ICLR 2022) ☆30 · Updated 3 years ago
- ☆15 · Updated 3 years ago
- ☆33 · Updated 4 years ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 2 years ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch ☆73 · Updated 2 years ago
- Efficient Transformers with Dynamic Token Pooling ☆60 · Updated last year
- FairSeq repo with Apollo optimizer ☆114 · Updated last year
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in Pytorch ☆45 · Updated 4 years ago
- ☆44 · Updated 4 years ago
- This package implements THOR: Transformer with Stochastic Experts. ☆61 · Updated 3 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆99 · Updated 2 years ago
- Parameter Efficient Transfer Learning with Diff Pruning ☆73 · Updated 4 years ago
- This repository contains the code for the paper in Findings of EMNLP 2021: "EfficientBERT: Progressively Searching Multilayer Perceptron … ☆32 · Updated last year
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch ☆115 · Updated 4 years ago
- Code for the PAPA paper ☆27 · Updated 2 years ago
- Official Pytorch Implementation of Length-Adaptive Transformer (ACL 2021) ☆101 · Updated 4 years ago
- Code for the paper PermuteFormer ☆42 · Updated 3 years ago
- GPT, but made only out of MLPs ☆88 · Updated 3 years ago
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention ☆48 · Updated 4 years ago
- ☆32 · Updated last year
- ☆96 · Updated last year
- Code for the paper "Query-Key Normalization for Transformers" ☆39 · Updated 4 years ago
- [NeurIPS 2022] Your Transformer May Not be as Powerful as You Expect (official implementation) ☆34 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆51 · Updated 2 years ago
- Code for EMNLP 2020 paper CoDIR ☆41 · Updated 2 years ago
- A package for fine tuning of pretrained NLP transformers using Semi Supervised Learning ☆15 · Updated 3 years ago
- ☆19 · Updated 3 years ago
- ☆21 · Updated 2 years ago
- ☆29 · Updated 2 years ago