pkuzengqi / Skyformer
Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021)
☆58 · Updated 2 years ago
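The title above names the core idea (replace softmax attention with a Gaussian kernel and approximate it with the Nyström method) but none of the repo's code appears here. As a rough illustration only, here is a minimal NumPy sketch of Nyström-approximated kernel attention; the function names, landmark selection, and kernel scaling are my assumptions, not the Skyformer implementation.

```python
import numpy as np

def gaussian_kernel(x, y, scale):
    # K[i, j] = exp(-||x_i - y_j||^2 / (2 * scale))
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * scale))

def nystrom_attention(q, k, v, num_landmarks=8):
    # Approximate the full n x n kernel matrix from a small set of
    # landmark keys, so the n x n matrix is never materialized.
    n, d = q.shape
    idx = np.linspace(0, n - 1, num_landmarks).astype(int)
    landmarks = k[idx]                                        # m x d
    scale = np.sqrt(d)
    k_ql = gaussian_kernel(q, landmarks, scale)               # n x m
    k_ll = gaussian_kernel(landmarks, landmarks, scale)       # m x m
    k_lk = gaussian_kernel(landmarks, k, scale)               # m x n
    # Nystrom: K ~ k_ql @ pinv(k_ll) @ k_lk, applied to v right-to-left
    return k_ql @ (np.linalg.pinv(k_ll) @ (k_lk @ v))
```

With sequence length n and m landmarks (m << n), every matrix formed is at most n x m, which is the point of the approximation.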
Related projects:
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆42 · Updated last year
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch ☆106 · Updated 3 years ago
- Code for the PAPA paper ☆27 · Updated last year
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 2 years ago
- Code for the paper PermuteFormer ☆43 · Updated 2 years ago
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention ☆45 · Updated 4 years ago
- Parameter-Efficient Transfer Learning with Diff Pruning ☆70 · Updated 3 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in Pytorch ☆45 · Updated 3 years ago
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in Pytorch ☆35 · Updated 2 years ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆66 · Updated last year
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization". ☆32 · Updated 2 years ago
- ☆15 · Updated 3 years ago
- No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models (ICLR 2022) ☆29 · Updated 2 years ago
- This repository contains the code for the paper in Findings of EMNLP 2021: "EfficientBERT: Progressively Searching Multilayer Perceptron … ☆32 · Updated last year
- ☆32 · Updated 3 years ago
- A Kernel-Based View of Language Model Fine-Tuning https://arxiv.org/abs/2210.05643 ☆68 · Updated last year
- ☆42 · Updated 4 years ago
- Code for "Training Neural Networks with Fixed Sparse Masks" (NeurIPS 2021) ☆54 · Updated 2 years ago
- ☆18 · Updated 3 years ago
- ☆30 · Updated 8 months ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆94 · Updated last year
- Code for EMNLP 2020 paper CoDIR ☆41 · Updated last year
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing ☆47 · Updated 2 years ago
- ☆48 · Updated last year
- Implementation of Multistream Transformers in Pytorch ☆54 · Updated 3 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆116 · Updated 3 years ago
- Block-sparse movement pruning ☆77 · Updated 3 years ago
- This package implements THOR: Transformer with Stochastic Experts ☆60 · Updated 2 years ago
- ☆63 · Updated 2 years ago
- Efficient Transformers with Dynamic Token Pooling ☆51 · Updated last year
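Several of the projects above replace or approximate softmax attention; the ReLA entry (arXiv:2104.07012) is the simplest to sketch. Below is a minimal NumPy illustration of rectified linear attention, where ReLU replaces softmax as the attention activation; the function name and the RMS-style output normalization details are my assumptions for a self-contained example, not the linked repo's code.

```python
import numpy as np

def rela_attention(q, k, v, eps=1e-6):
    # Rectified linear attention: apply ReLU to scaled dot-product
    # scores instead of normalizing them with softmax.
    d = q.shape[-1]
    scores = np.maximum(q @ k.T / np.sqrt(d), 0.0)  # ReLU, not softmax
    out = scores @ v
    # Because ReLU weights are unnormalized, rescale each output row
    # by its root-mean-square to keep magnitudes stable.
    rms = np.sqrt((out ** 2).mean(-1, keepdims=True) + eps)
    return out / rms
```

Dropping softmax makes the attention weights sparse wherever the scores are negative, which is the property the paper exploits.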