lucidrains / rela-transformer
Implementation of a Transformer using ReLA (Rectified Linear Attention) from "Sparse Attention with Linear Units" (https://arxiv.org/abs/2104.07012)
☆49 · Updated 3 years ago
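For reference, below is a minimal sketch of rectified linear attention in Pytorch, assuming the formulation from the paper: the softmax over attention scores is replaced with a ReLU, and an RMS-style normalization on each head's output keeps activations stable. The class and parameter names here are illustrative, not this repository's actual API.

```python
# A hedged sketch of ReLA (rectified linear attention), assuming the
# paper's formulation: ReLU in place of softmax, plus RMSNorm on the
# per-head attention output. Names/defaults are illustrative, not the
# repo's API; masking is omitted for brevity.
import torch
from torch import nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    def __init__(self, dim, eps=1e-8):
        super().__init__()
        self.eps = eps
        self.scale = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        # root-mean-square norm over the feature dimension
        norm = x.norm(dim=-1, keepdim=True) * x.shape[-1] ** -0.5
        return x / norm.clamp(min=self.eps) * self.scale

class ReLAAttention(nn.Module):
    def __init__(self, dim, heads=8, dim_head=64):
        super().__init__()
        inner = heads * dim_head
        self.heads = heads
        self.scale = dim_head ** -0.5
        self.to_qkv = nn.Linear(dim, inner * 3, bias=False)
        self.norm = RMSNorm(dim_head)  # stabilizes each head's output
        self.to_out = nn.Linear(inner, dim)

    def forward(self, x):
        b, n, _ = x.shape
        h = self.heads
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        # split heads: (b, n, h*d) -> (b, h, n, d)
        q, k, v = (t.view(b, n, h, -1).transpose(1, 2) for t in (q, k, v))
        # ReLU replaces softmax: weights are sparse, non-negative,
        # and no longer sum to one
        attn = F.relu(q @ k.transpose(-2, -1) * self.scale)
        out = attn @ v
        out = self.norm(out)  # RMSNorm compensates for unnormalized weights
        out = out.transpose(1, 2).reshape(b, n, -1)
        return self.to_out(out)
```

A quick shape check under these assumptions: `ReLAAttention(dim=512)(torch.randn(1, 128, 512))` returns a tensor of shape `(1, 128, 512)`.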
Alternatives and similar repositories for rela-transformer
Users interested in rela-transformer are comparing it to the libraries listed below.
- Implementation of the Remixer Block from the Remixer paper, in Pytorch ☆35 · Updated 3 years ago
- Codebase for the SIMAT dataset and evaluation ☆39 · Updated 3 years ago
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing ☆49 · Updated 3 years ago
- A Python library for highly configurable transformers, easing model architecture search and experimentation ☆49 · Updated 3 years ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch ☆73 · Updated 2 years ago
- Local Attention - Flax module for Jax ☆21 · Updated 3 years ago
- ImageNet-12k subset of ImageNet-21k (fall11) ☆21 · Updated last year
- An open-source implementation of CLIP ☆32 · Updated 2 years ago
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi…☆50Updated 3 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nystr\"om Method (NeurIPS 2021)☆60Updated 3 years ago
- Another attempt at a long-context / efficient transformer by me☆37Updated 3 years ago
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in Pytorch ☆37 · Updated 3 years ago
- A simple implementation of a deep linear Pytorch module ☆20 · Updated 4 years ago
- Implementation of Multistream Transformers in Pytorch ☆53 · Updated 3 years ago
- Implementation of some personal helper functions for Einops, my favorite tensor manipulation library ❤️ ☆54 · Updated 2 years ago
- Unofficial PyTorch implementation of https://arxiv.org/abs/2112.05682, achieving linear memory cost for attention ☆12 · Updated 3 years ago
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago
- A dashboard for exploring timm learning rate schedulers ☆19 · Updated 5 months ago
- A simple Transformer where the softmax has been replaced with normalization ☆19 · Updated 4 years ago
- A GPT, made only of MLPs, in Jax ☆57 · Updated 3 years ago
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" ☆57 · Updated last year
- Axial Positional Embedding for Pytorch ☆79 · Updated 2 months ago
- PyTorch implementation of GLOM ☆22 · Updated 3 years ago
- Implementation of ACProp (momentum centering and asynchronous update for adaptive gradient methods, NeurIPS 2021) ☆15 · Updated 3 years ago
- Contains my experiments with the `big_vision` repo to train ViTs on ImageNet-1k ☆22 · Updated 2 years ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated last year
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆118 · Updated 3 years ago