AminRezaei0x443 / memory-efficient-attention
Memory Efficient Attention (O(sqrt(n))) for Jax and PyTorch
☆184 · Updated 2 years ago
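For context on what the repo implements, here is a minimal JAX sketch of chunked attention in the spirit of Rabe & Staats, "Self-attention Does Not Need O(n²) Memory" (the paper behind this library and the ☆379 implementation listed below). The function and parameter names (`chunked_attention`, `q_chunk`, `kv_chunk`) are illustrative assumptions, not this library's actual API.

```python
# Sketch of chunked (memory-efficient) attention with an online softmax.
# Illustrative only -- names and signature are assumptions, not the repo's API.
import jax.numpy as jnp

def chunked_attention(q, k, v, q_chunk=64, kv_chunk=64):
    # q, k, v: [seq_len, dim]. Instead of materialising the full
    # [seq_len, seq_len] score matrix, process queries and keys/values
    # in chunks, keeping only running softmax statistics per query chunk.
    scale = q.shape[-1] ** -0.5
    outputs = []
    for i in range(0, q.shape[0], q_chunk):
        qc = q[i:i + q_chunk] * scale
        m = jnp.full((qc.shape[0],), -jnp.inf)  # running row-wise max
        l = jnp.zeros((qc.shape[0],))           # running softmax denominator
        acc = jnp.zeros_like(qc)                # running weighted-value sum
        for j in range(0, k.shape[0], kv_chunk):
            s = qc @ k[j:j + kv_chunk].T        # [q_chunk, kv_chunk] scores
            m_new = jnp.maximum(m, s.max(axis=-1))
            p = jnp.exp(s - m_new[:, None])
            # Rescale previous accumulators to the new max for stability.
            correction = jnp.exp(m - m_new)
            l = l * correction + p.sum(axis=-1)
            acc = acc * correction[:, None] + p @ v[j:j + kv_chunk]
            m = m_new
        outputs.append(acc / l[:, None])
    return jnp.concatenate(outputs, axis=0)

# Example: chunk sizes near sqrt(seq_len) give the O(sqrt(n)) memory bound.
q = k = v = jnp.ones((256, 64))
out = chunked_attention(q, k, v, q_chunk=16, kv_chunk=16)
```

Choosing chunk sizes on the order of sqrt(n) keeps peak activation memory at O(sqrt(n)) while still producing the exact softmax attention output, since the running max/denominator trick only reorders the computation.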
Alternatives and similar repositories for memory-efficient-attention
Users interested in memory-efficient-attention are comparing it to the libraries listed below.
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆214 · Updated 2 years ago
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight … ☆236 · Updated 2 years ago
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆204 · Updated last year
- Implementation of the Adan (ADAptive Nesterov momentum algorithm) Optimizer in Pytorch ☆252 · Updated 2 years ago
- Simple and efficient RevNet-Library for PyTorch with XLA and DeepSpeed support and parameter offload ☆128 · Updated 2 years ago
- Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory" ☆379 · Updated 2 years ago
- Implementation of a Transformer, but completely in Triton ☆273 · Updated 3 years ago
- Implementation of Fast Transformer in Pytorch ☆175 · Updated 3 years ago
- Implementation of Flash Attention in Jax ☆215 · Updated last year
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆187 · Updated 3 years ago
- Sequence modeling with Mega. ☆296 · Updated 2 years ago
- Implementation of Nyström Self-attention, from the paper Nyströmformer ☆137 · Updated 4 months ago
- ☆74 · Updated 2 years ago
- Another attempt at a long-context / efficient transformer by me ☆38 · Updated 3 years ago
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 3 years ago
- Implementation of Hourglass Transformer, in Pytorch, from Google and OpenAI ☆91 · Updated 3 years ago
- ☆208 · Updated 2 years ago
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021). ☆225 · Updated 3 years ago
- Axial Positional Embedding for Pytorch ☆83 · Updated 5 months ago
- TF/Keras code for DiffStride, a pooling layer with learnable strides. ☆124 · Updated 3 years ago
- A GPT, made only of MLPs, in Jax ☆58 · Updated 4 years ago
- ☆163 · Updated 2 years ago
- A small demonstration of using WebDataset with ImageNet and PyTorch Lightning ☆74 · Updated last year
- Named tensors with first-class dimensions for PyTorch ☆332 · Updated 2 years ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch ☆228 · Updated 10 months ago
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper ☆81 · Updated 3 years ago
- EfficientNet, MobileNetV3, MobileNetV2, MixNet, etc in JAX w/ Flax Linen and Objax ☆128 · Updated last year
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆119 · Updated 4 years ago
- ☆378 · Updated last year
- Implementation of Feedback Transformer in Pytorch ☆107 · Updated 4 years ago