xl402 / performer
TensorFlow implementation of a linear attention architecture
☆44 · Updated 4 years ago
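Since the page centers on the Performer's linear attention, here is a minimal NumPy sketch of the kernel-feature-map idea behind it (a FAVOR+-style softmax approximation). The function names, feature map, and shapes are illustrative assumptions, not code taken from this repository.

```python
# Hypothetical sketch of Performer-style linear attention with random
# features (the FAVOR+ idea): approximate exp(q.k / sqrt(d)) with positive
# random features so attention costs O(n) in sequence length instead of O(n^2).
# Names and shapes are illustrative, not taken from xl402/performer.
import numpy as np

def softmax_features(x, projection):
    # phi(x) = exp(W x - |x|^2 / 2) / sqrt(m), with columns of W drawn i.i.d. N(0, 1)
    m = projection.shape[1]
    return np.exp(x @ projection - 0.5 * np.sum(x * x, axis=-1, keepdims=True)) / np.sqrt(m)

def linear_attention(q, k, v, num_features=64, seed=0):
    n, d = q.shape
    rng = np.random.default_rng(seed)
    projection = rng.standard_normal((d, num_features))
    # Rescale q and k so that phi(q').phi(k') approximates exp(q.k / sqrt(d))
    q_prime = softmax_features(q / d ** 0.25, projection)   # (n, m), non-negative
    k_prime = softmax_features(k / d ** 0.25, projection)   # (n, m)
    kv = k_prime.T @ v                                      # (m, d_v): summarise keys/values once
    normalizer = q_prime @ k_prime.sum(axis=0)              # (n,): approximate softmax denominator
    return (q_prime @ kv) / normalizer[:, None]             # (n, d_v)

if __name__ == "__main__":
    q, k, v = np.random.randn(3, 128, 32)
    print(linear_attention(q, k, v).shape)  # (128, 32)
```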
Alternatives and similar repositories for performer
Users interested in performer are comparing it to the libraries listed below.
- Unofficial PyTorch implementation of Fastformer based on the paper "Fastformer: Additive Attention Can Be All You Need" ☆132 · Updated 4 years ago
- Implements sharpness-aware minimization (https://arxiv.org/abs/2010.01412) in TensorFlow 2. ☆61 · Updated 4 years ago
- Implementation of Fast Transformer in PyTorch ☆177 · Updated 4 years ago
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning ☆167 · Updated last year
- Implementation of Feedback Transformer in PyTorch ☆108 · Updated 4 years ago
- Implementation of ETSformer, state-of-the-art time-series Transformer, in PyTorch ☆156 · Updated 2 years ago
- Cyclemoid implementation for PyTorch ☆90 · Updated 3 years ago
- Implementation of self-supervised image-level contrastive pretraining methods using Keras. ☆70 · Updated 4 years ago
- Unofficial PyTorch implementation of Attention Free Transformer (AFT) layers by Apple Inc. ☆244 · Updated 3 years ago
- State-of-the-art faster Transformer with TensorFlow 2.0 (NLP, Computer Vision, Audio). ☆85 · Updated 2 years ago
- Implements MLP-Mixer (https://arxiv.org/abs/2105.01601) with the CIFAR-10 dataset. ☆59 · Updated 3 years ago
- Simple stochastic weight averaging callback for Keras ☆63 · Updated 4 years ago
- An implementation of Additive Attention ☆150 · Updated 3 years ago
- ☆164 · Updated 2 years ago
- Efficient Transformers for research, PyTorch and TensorFlow using Locality Sensitive Hashing ☆95 · Updated 5 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ☆147 · Updated 4 years ago
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆206 · Updated 2 years ago
- Axial Positional Embedding for PyTorch ☆84 · Updated 10 months ago
- ☆54 · Updated 5 years ago
- Simple NumPy implementation of the FAVOR+ attention mechanism, https://teddykoker.com/2020/11/performers/ ☆38 · Updated 5 years ago
- TF 2.x and PyTorch Lightning Callbacks for GPU monitoring ☆92 · Updated 5 years ago
- A simple and working implementation of Electra, the fastest way to pretrain language models from scratch, in PyTorch ☆235 · Updated 2 years ago
- A repository containing the code for the Bistable Recurrent Cell ☆47 · Updated 5 years ago
- Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms ☆260 · Updated 4 years ago
- Implementation of Nyström Self-attention, from the paper Nyströmformer ☆145 · Updated 9 months ago
- HMMs in PyTorch ☆142 · Updated 4 years ago
- Minimal implementation of adaptive gradient clipping (https://arxiv.org/abs/2102.06171) in TensorFlow 2. ☆85 · Updated 4 years ago
- Implementation of modern data augmentation techniques in TensorFlow 2.x to be used in your training pipeline. ☆34 · Updated 5 years ago
- A smoother activation function (undergrad code) ☆115 · Updated 5 years ago
- Python implementation of GLN in different frameworks ☆96 · Updated 5 years ago