lucidrains / flash-attention-jax
Implementation of Flash Attention in Jax
☆212 · Updated last year
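For context, here is a minimal sketch of the blockwise online-softmax idea behind Flash Attention, written in plain JAX. It is illustrative only: the function name, the Python-level block loop, and the accumulator layout are assumptions for clarity, not this repository's API or kernel structure.

```python
import jax
import jax.numpy as jnp

def blockwise_attention(q, k, v, block_size=128):
    """Attention with k/v processed in blocks via a running (online) softmax.

    q, k, v: [seq_len, dim] arrays. Output matches ordinary softmax attention.
    """
    seq_len, dim = q.shape
    q = q * dim ** -0.5  # standard 1/sqrt(d) scaling

    # Running statistics per query row: max logit, softmax denominator,
    # and the un-normalized weighted sum of values.
    m = jnp.full((seq_len,), -jnp.inf)
    l = jnp.zeros((seq_len,))
    o = jnp.zeros((seq_len, dim))

    for start in range(0, seq_len, block_size):
        k_blk = k[start:start + block_size]
        v_blk = v[start:start + block_size]
        s = q @ k_blk.T                          # [seq_len, block] logits
        m_new = jnp.maximum(m, s.max(axis=-1))   # updated row max
        p = jnp.exp(s - m_new[:, None])          # block weights, rescaled
        corr = jnp.exp(m - m_new)                # rescale old accumulators
        l = l * corr + p.sum(axis=-1)
        o = o * corr[:, None] + p @ v_blk
        m = m_new

    return o / l[:, None]

# Sanity check against the naive materialized-attention result.
q, k, v = jax.random.normal(jax.random.PRNGKey(0), (3, 256, 64))
naive = jax.nn.softmax(q @ k.T * q.shape[-1] ** -0.5, axis=-1) @ v
assert jnp.allclose(blockwise_attention(q, k, v), naive, atol=1e-4)
```

The running rescaling is what lets the full [seq, seq] attention matrix never be materialized at once, which is the memory saving these implementations target.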
Alternatives and similar repositories for flash-attention-jax
Users interested in flash-attention-jax are comparing it to the libraries listed below.
- jax-triton contains integrations between JAX and OpenAI Triton (a hedged usage sketch follows this list) · ☆392 · Updated this week
- Implementation of a Transformer, but completely in Triton · ☆265 · Updated 3 years ago
- LoRA for arbitrary JAX models and functions · ☆135 · Updated last year
- JMP is a Mixed Precision library for JAX. · ☆199 · Updated 4 months ago
- JAX Synergistic Memory Inspector · ☆172 · Updated 10 months ago
- JAX implementation of the Llama 2 model · ☆217 · Updated last year
- Train very large language models in Jax. · ☆204 · Updated last year
- A library for unit scaling in PyTorch · ☆125 · Updated 6 months ago
- Implementation of the specific Transformer architecture from PaLM (Scaling Language Modeling with Pathways) in Jax (Equinox framework) · ☆187 · Updated 2 years ago
- Inference code for LLaMA models in JAX · ☆117 · Updated last year
- A simple library for scaling up JAX programs · ☆136 · Updated 7 months ago
- This repository contains the experimental PyTorch native float8 training UX · ☆222 · Updated 10 months ago
- Accelerated First Order Parallel Associative Scan · ☆180 · Updated 9 months ago
- Experiment of using Tangent to autodiff Triton · ☆78 · Updated last year
- seqax = sequence modeling + JAX · ☆155 · Updated last month
- JAX-Toolbox · ☆308 · Updated this week
- A user-friendly toolchain that enables seamless execution of ONNX models using JAX as the backend · ☆110 · Updated 3 weeks ago
- Named tensors with first-class dimensions for PyTorch · ☆329 · Updated last year
- Implementation of fused cosine similarity attention in the same style as Flash Attention · ☆213 · Updated 2 years ago
- Torch Distributed Experimental · ☆117 · Updated 9 months ago
- JAX bindings for Flash Attention v2 · ☆88 · Updated 10 months ago
- Swarm training framework using Haiku + JAX + Ray for layer-parallel transformer language models on unreliable, heterogeneous nodes · ☆239 · Updated 2 years ago
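As referenced in the jax-triton item above, below is a hedged sketch of calling a Triton kernel from JAX via jax-triton's `triton_call`, loosely following the pattern in the jax-triton README. Exact keyword names and meta-parameter plumbing may differ across versions; treat this as an assumption, not a definitive usage guide.

```python
import jax
import jax.numpy as jnp
import jax_triton as jt
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    x = tl.load(x_ptr + offsets)
    y = tl.load(y_ptr + offsets)
    tl.store(out_ptr + offsets, x + y)

def add(x, y, block_size=8):
    # out_shape tells JAX the result's shape/dtype without running the kernel;
    # BLOCK_SIZE is forwarded to the kernel as a compile-time meta-parameter.
    out_shape = jax.ShapeDtypeStruct(shape=x.shape, dtype=x.dtype)
    return jt.triton_call(
        x, y,
        kernel=add_kernel,
        out_shape=out_shape,
        grid=(triton.cdiv(x.size, block_size),),
        BLOCK_SIZE=block_size,
    )

x = jnp.arange(8, dtype=jnp.float32)
print(add(x, x))  # [ 0.  2.  4.  6.  8. 10. 12. 14.]
```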