zinccat / flaxattention
☆25 · Updated last year
Alternatives and similar repositories for flaxattention
Users interested in flaxattention are comparing it to the libraries listed below.
- JAX bindings for Flash Attention v2 ☆98 · Updated 2 weeks ago
- If it quacks like a tensor... ☆59 · Updated last year
- JAX Synergistic Memory Inspector ☆181 · Updated last year
- 🧱 Modula software package ☆303 · Updated 3 months ago
- Minimal yet performant LLM examples in pure JAX ☆199 · Updated 2 months ago
- An experiment in using Tangent to autodiff Triton ☆80 · Updated last year
- JAX implementation of the Mistral 7b v0.2 model ☆35 · Updated last year
- ☆285 · Updated last year
- seqax = sequence modeling + JAX ☆168 · Updated 4 months ago
- A library for unit scaling in PyTorch ☆132 · Updated 4 months ago
- Understand and test language model architectures on synthetic tasks. ☆240 · Updated last month
- A set of Python scripts that make your experience on TPU better ☆54 · Updated 2 months ago
- Distributed pretraining of large language models (LLMs) on Cloud TPU slices, with JAX and Equinox. ☆24 · Updated last year
- ☆91 · Updated last year
- Accelerate and optimize performance with streamlined training and serving options in JAX. ☆321 · Updated this week
- A simple library for scaling up JAX programs ☆144 · Updated 2 weeks ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆173 · Updated 4 months ago
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. ☆32 · Updated 5 months ago
- Supports PyTorch FSDP for optimizers ☆84 · Updated 11 months ago
- Named Tensors for Legible Deep Learning in JAX ☆211 · Updated 2 weeks ago
- Efficient optimizers ☆275 · Updated last week
- Accelerated First Order Parallel Associative Scan ☆192 · Updated last year
- JAX implementation of the Llama 2 model ☆216 · Updated last year
- A FlashAttention implementation for JAX with support for efficient document mask computation and context parallelism. ☆149 · Updated last week
- A MAD laboratory to improve AI architecture designs 🧪 ☆133 · Updated 11 months ago
- Implementation of Flash Attention in JAX ☆221 · Updated last year
- A flexible and efficient implementation of Flash Attention 2.0 for JAX, supporting multiple backends (GPU/TPU/CPU) and platforms (Triton/… ☆30 · Updated 8 months ago
- Code implementing "Efficient Parallelization of a Ubiquitous Sequential Computation" (Heinsen, 2023) ☆96 · Updated 11 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆243 · Updated 5 months ago
- A port of the Mistral-7B model in JAX ☆32 · Updated last year
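
Most of the attention-focused libraries above implement the same underlying operation, scaled dot-product attention, using fused kernels that avoid materializing the full score matrix. As a point of reference, here is a minimal sketch of that computation in plain JAX, checked against jax.nn.dot_product_attention (available in JAX ≥ 0.4.31). The function name and tensor shapes below are illustrative assumptions, not the API of any repository listed above.

```python
# Minimal sketch of scaled dot-product attention in plain JAX.
# Assumes JAX >= 0.4.31 (jax.nn.dot_product_attention). FlashAttention
# libraries compute the same result without materializing the full
# (seq_len, seq_len) score matrix.
import jax
import jax.numpy as jnp

def naive_attention(q, k, v):
    # q, k, v: (batch, seq_len, num_heads, head_dim)
    scale = q.shape[-1] ** -0.5
    scores = jnp.einsum("bqhd,bkhd->bhqk", q, k) * scale
    weights = jax.nn.softmax(scores, axis=-1)
    return jnp.einsum("bhqk,bkhd->bqhd", weights, v)

key = jax.random.key(0)
q, k, v = (jax.random.normal(jax.random.fold_in(key, i), (2, 128, 4, 64))
           for i in range(3))
out_naive = naive_attention(q, k, v)
out_builtin = jax.nn.dot_product_attention(q, k, v)  # XLA reference path
assert jnp.allclose(out_naive, out_builtin, atol=1e-4)
```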