xl402 / performer
TensorFlow implementation of a linear attention architecture
☆44 · Updated 4 years ago
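The repository's core idea, linear (kernelized) attention, replaces the softmax attention matrix with a feature map φ so that attention can be computed in O(n) memory and time rather than O(n²). A minimal NumPy sketch of the idea, using a simple positive feature map instead of Performer's random features (function and parameter names are illustrative, not taken from the repo):

```python
import numpy as np

def linear_attention(Q, K, V, phi=np.exp):
    # phi is a positive feature map; Performer uses random-feature maps
    # that approximate the softmax kernel, exp() here is just a stand-in.
    Qp, Kp = phi(Q), phi(K)            # (n, d) and (m, d)
    KV = Kp.T @ V                      # (d, d_v): computed once, O(m * d * d_v)
    Z = Qp @ Kp.sum(axis=0)            # (n,): per-query normalizer
    return (Qp @ KV) / Z[:, None]      # (n, d_v): normalized weighted values
```

Because the (n, m) attention matrix is never materialized, cost scales linearly in sequence length; the attention weights are still non-negative and sum to one per query.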
Alternatives and similar repositories for performer
Users interested in performer are comparing it to the libraries listed below.
- Unofficial PyTorch implementation of Fastformer, based on the paper "Fastformer: Additive Attention Can Be All You Need" · ☆134 · Updated 3 years ago
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning · ☆163 · Updated last year
- Implements MLP-Mixer (https://arxiv.org/abs/2105.01601) on the CIFAR-10 dataset · ☆57 · Updated 3 years ago
- Simple stochastic weight averaging callback for Keras · ☆62 · Updated 3 years ago
- Implementation of Feedback Transformer in PyTorch · ☆107 · Updated 4 years ago
- Unofficial PyTorch implementation of Attention Free Transformer (AFT) layers by Apple Inc. · ☆239 · Updated 3 years ago
- Implementation of ETSformer, state-of-the-art time-series Transformer, in PyTorch · ☆153 · Updated last year
- Implements sharpness-aware minimization (https://arxiv.org/abs/2010.01412) in TensorFlow 2 · ☆60 · Updated 3 years ago
- Adaptive Gradient Clipping · ☆137 · Updated 2 years ago
- Efficient Transformers for research, in PyTorch and TensorFlow, using locality-sensitive hashing · ☆95 · Updated 5 years ago
- Cyclemoid implementation for PyTorch · ☆90 · Updated 3 years ago
- State-of-the-art faster Transformer with TensorFlow 2.0 (NLP, computer vision, audio) · ☆85 · Updated 2 years ago
- Implementation of Fast Transformer in PyTorch · ☆175 · Updated 3 years ago
- Implementation of self-supervised image-level contrastive pretraining methods using Keras · ☆70 · Updated 3 years ago
- A simple, working implementation of ELECTRA, the fastest way to pretrain language models from scratch, in PyTorch · ☆227 · Updated 2 years ago
- Python implementation of GLN in different frameworks
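One of the techniques in the list above, sharpness-aware minimization (SAM), is simple enough to sketch: first perturb the weights toward the worst-case point within a ρ-ball (one normalized gradient ascent step), then apply the ordinary update using the gradient taken at that perturbed point. A minimal NumPy illustration, not taken from any listed repo (`loss_grad` is a hypothetical function returning the loss gradient at given weights):

```python
import numpy as np

def sam_step(w, loss_grad, lr=0.1, rho=0.05):
    # 1) ascend to the (approximate) worst-case point in a rho-ball around w
    g = loss_grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # 2) descend using the gradient evaluated at the perturbed weights
    g_sharp = loss_grad(w + eps)
    return w - lr * g_sharp
```

In a real training loop this doubles the gradient cost per step (two backward passes), which is the main practical trade-off of SAM.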