lucidrains / flash-attention-jax
Implementation of Flash Attention in Jax
☆206 · Updated last year
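For context, the repo provides a drop-in attention function. A minimal smoke-test sketch is below; the import path, the `flash_attention(q, k, v, key_mask)` signature, and the `(batch, heads, seq, dim)` layout follow the upstream README, but verify them against the current source before relying on this:

```python
from jax import random
# Assumed import path and signature, per the flash-attention-jax README.
from flash_attention_jax import flash_attention

rng = random.PRNGKey(42)
qk, kk, vk, mk = random.split(rng, 4)

# (batch, heads, seq, head_dim); small shapes for a quick check
q = random.normal(qk, (1, 8, 2048, 64))
k = random.normal(kk, (1, 8, 2048, 64))
v = random.normal(vk, (1, 8, 2048, 64))

# boolean key padding mask of shape (batch, seq)
key_mask = random.randint(mk, (1, 2048), 0, 2) == 1

out = flash_attention(q, k, v, key_mask)  # -> (1, 8, 2048, 64)
```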
Alternatives and similar repositories for flash-attention-jax:
Users interested in flash-attention-jax are comparing it to the libraries listed below.
- jax-triton contains integrations between JAX and OpenAI Triton ☆386 · Updated last week
- Implementation of a Transformer, but completely in Triton ☆260 · Updated 2 years ago
- JMP is a Mixed Precision library for JAX. ☆193 · Updated last month
- JAX Synergistic Memory Inspector ☆171 · Updated 8 months ago
- LoRA for arbitrary JAX models and functions ☆135 · Updated last year
- JAX-Toolbox ☆289 · Updated this week
- JAX implementation of the Llama 2 model ☆216 · Updated last year
- JAX bindings for Flash Attention v2 ☆88 · Updated 8 months ago
- A library for unit scaling in PyTorch ☆124 · Updated 3 months ago
- Inference code for LLaMA models in JAX ☆116 · Updated 10 months ago
- Implementation of the specific Transformer architecture from PaLM ("Scaling Language Modeling with Pathways") in Jax, using the Equinox framework ☆187 · Updated 2 years ago
- Accelerated First Order Parallel Associative Scan (see the scan sketch after this list) ☆177 · Updated 7 months ago
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆211 · Updated 2 years ago
- A user-friendly toolchain that enables seamless execution of ONNX models using JAX as the backend. ☆109 · Updated 3 weeks ago
- This repository contains the experimental PyTorch native float8 training UX ☆222 · Updated 7 months ago
- Train very large language models in Jax. ☆203 · Updated last year
- A simple library for scaling up JAX programs ☆134 · Updated 4 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆557 · Updated this week
- Orbax provides common checkpointing and persistence utilities for JAX users ☆349 · Updated this week
- Run PyTorch in JAX. 🤝 ☆232 · Updated last month
- Implementation of a memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" (see the sketch after this list) ☆372 · Updated last year
- A stand-alone implementation of several NumPy dtype extensions used in machine learning. ☆255 · Updated last week
- Experiment of using Tangent to autodiff Triton ☆78 · Updated last year
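Several entries above (flash-attention-jax itself, the memory-efficient attention implementation, and the fused-attention kernels) share one core trick: scan over key/value chunks while carrying running softmax statistics, so the full seq×seq score matrix is never materialized. A minimal single-head JAX sketch of that chunked recurrence (illustrative only, written from the published algorithm, not taken from any listed repo):

```python
import jax
import jax.numpy as jnp

def chunked_attention(q, k, v, chunk_size=128):
    """Single-head softmax attention over (seq, dim) arrays, scanning
    key/value chunks with running (max, normalizer, accumulator)
    statistics instead of materializing the (seq_q, seq_k) matrix."""
    seq_k, dim = k.shape
    scale = dim ** -0.5
    k_chunks = k.reshape(seq_k // chunk_size, chunk_size, dim)
    v_chunks = v.reshape(seq_k // chunk_size, chunk_size, dim)

    def step(carry, kv):
        m, l, acc = carry                    # running max, sum of exp, weighted values
        k_c, v_c = kv
        s = (q @ k_c.T) * scale              # (seq_q, chunk) scores for this chunk
        m_new = jnp.maximum(m, s.max(axis=-1))
        alpha = jnp.exp(m - m_new)           # rescale old statistics to the new max
        p = jnp.exp(s - m_new[:, None])
        l_new = l * alpha + p.sum(axis=-1)
        acc_new = acc * alpha[:, None] + p @ v_c
        return (m_new, l_new, acc_new), None

    seq_q = q.shape[0]
    init = (jnp.full((seq_q,), -jnp.inf),    # running max starts at -inf
            jnp.zeros((seq_q,)),             # softmax normalizer
            jnp.zeros((seq_q, dim)))         # unnormalized output accumulator
    (_, l, acc), _ = jax.lax.scan(step, init, (k_chunks, v_chunks))
    return acc / l[:, None]

q = k = v = jnp.ones((1024, 64))
print(chunked_attention(q, k, v).shape)  # (1024, 64)
```

The "Accelerated First Order Parallel Associative Scan" entry refers to evaluating linear recurrences of the form h_t = a_t * h_{t-1} + b_t in logarithmic depth. A small illustration with stock `jax.lax.associative_scan` (not that repo's accelerated implementation):

```python
import jax
import jax.numpy as jnp

def linear_recurrence(a, b):
    """Compute h_t = a_t * h_{t-1} + b_t (with zero initial state) for
    all t in parallel. The pairs (a, b) compose associatively:
    (a1, b1) followed by (a2, b2) gives (a1 * a2, a2 * b1 + b2)."""
    def combine(x, y):
        a1, b1 = x
        a2, b2 = y
        return a1 * a2, a2 * b1 + b2
    _, h = jax.lax.associative_scan(combine, (a, b))
    return h

a = jnp.full((8,), 0.5)   # per-step decay
b = jnp.ones((8,))        # per-step input
print(linear_recurrence(a, b))  # 1.0, 1.5, 1.75, ... -> 2.0
```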