lucidrains / rela-transformer
Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012
☆49 · Updated 2 years ago
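ReLA, proposed in the linked paper ("Sparse Attention with Linear Units"), replaces the softmax in attention with a ReLU, yielding sparse, non-negative attention weights, and stabilizes training by normalizing the attention output. Below is a minimal single-head sketch of the idea in PyTorch; the `ReLAttention` name is illustrative rather than the repo's actual API, and LayerNorm is used as a stand-in where the paper applies RMSNorm.

```python
import torch
import torch.nn.functional as F
from torch import nn

class ReLAttention(nn.Module):
    """Illustrative single-head Rectified Linear Attention (ReLA)."""
    def __init__(self, dim):
        super().__init__()
        self.scale = dim ** -0.5
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        # the paper normalizes the attention output to keep training stable;
        # LayerNorm here stands in for the RMSNorm used in the paper
        self.norm = nn.LayerNorm(dim)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x):
        # x: (batch, seq_len, dim)
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        sim = torch.einsum('b i d, b j d -> b i j', q, k) * self.scale
        attn = F.relu(sim)  # ReLU replaces softmax: sparse, non-negative weights
        out = torch.einsum('b i j, b j d -> b i d', attn, v)
        return self.to_out(self.norm(out))

# usage sketch
attn = ReLAttention(dim=64)
out = attn(torch.randn(2, 16, 64))  # -> (2, 16, 64)
```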
Alternatives and similar repositories for rela-transformer:
Users interested in rela-transformer are comparing it to the libraries listed below.
- Implementation of the Remixer Block from the Remixer paper, in Pytorch ☆35 · Updated 3 years ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch ☆72 · Updated 2 years ago
- Codebase for the SIMAT dataset and evaluation ☆38 · Updated 2 years ago
- A Python library for highly configurable transformers - easing model architecture search and experimentation ☆49 · Updated 3 years ago
- Implementation of Token Shift GPT - an autoregressive model that solely relies on shifting the sequence space for mixing ☆47 · Updated 3 years ago
- ImageNet-12k subset of ImageNet-21k (fall11) ☆21 · Updated last year
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆60 · Updated 2 years ago
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago
- Local Attention - Flax module for Jax