AminRezaei0x443 / memory-efficient-attention
Memory Efficient Attention (O(sqrt(n))) for Jax and PyTorch
☆178 · Updated last year
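The core idea (from "Self-attention Does Not Need O(n²) Memory") is to process keys and values in chunks while keeping a numerically stable running softmax, so the full attention matrix is never materialized. Below is a minimal PyTorch sketch of that idea only; the function name and chunk size are illustrative and this is not the library's actual API, which also chunks queries to reach the O(sqrt(n)) memory bound.

```python
import torch

def chunked_attention(q, k, v, key_chunk_size=1024):
    # q, k, v: (batch, seq, dim). Keys/values are scanned chunk by chunk,
    # so only a (seq x key_chunk_size) block of scores exists at any time.
    scale = q.shape[-1] ** -0.5
    num = torch.zeros_like(q)                        # running numerator
    den = torch.zeros_like(q[..., :1])               # running denominator
    m = torch.full_like(q[..., :1], float('-inf'))   # running row-wise max
    for start in range(0, k.shape[1], key_chunk_size):
        k_c = k[:, start:start + key_chunk_size]
        v_c = v[:, start:start + key_chunk_size]
        s = torch.einsum('bqd,bkd->bqk', q, k_c) * scale
        m_new = torch.maximum(m, s.amax(dim=-1, keepdim=True))
        p = torch.exp(s - m_new)
        correction = torch.exp(m - m_new)            # rescale earlier partial sums
        num = num * correction + torch.einsum('bqk,bkd->bqd', p, v_c)
        den = den * correction + p.sum(dim=-1, keepdim=True)
        m = m_new
    return num / den
```

With a chunk size covering the whole sequence this reduces to ordinary softmax attention; for the real entry points in both PyTorch and Jax, see the repo's README.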
Related projects:
- Implementation of a memory efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" ☆355 · Updated last year
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆204 · Updated last year
- Implementation of a Transformer, but completely in Triton ☆242 · Updated 2 years ago
- Implementation of Flash Attention in Jax ☆188 · Updated 6 months ago
- Implementation of the Adan (ADAptive Nesterov momentum algorithm) Optimizer in Pytorch ☆247 · Updated 2 years ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆184 · Updated 2 years ago
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight … ☆229 · Updated last year
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆203 · Updated last year
- Named tensors with first-class dimensions for PyTorch