zinccat / flaxattention
☆27 · Updated last year
Alternatives and similar repositories for flaxattention
Users interested in flaxattention are comparing it to the libraries listed below.
- Supporting PyTorch FSDP for optimizers ☆84 · Updated last year
- A set of Python scripts that makes your experience on TPU better ☆56 · Updated 4 months ago
- 🧱 Modula software package ☆322 · Updated 5 months ago
- Accelerated First Order Parallel Associative Scan ☆196 · Updated 3 weeks ago
- ☆289 · Updated last year
- If it quacks like a tensor... ☆59 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆251 · Updated 3 weeks ago
- JAX Synergistic Memory Inspector ☆184 · Updated last year
- A library for unit scaling in PyTorch ☆133 · Updated 6 months ago
- JAX implementation of the Llama 2 model ☆216 · Updated 2 years ago
- Efficient optimizers ☆281 · Updated last month
- JAX bindings for Flash Attention v2 ☆103 · Updated last month
- seqax = sequence modeling + JAX ☆170 · Updated 6 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆186 · Updated last week
- ☆92 · Updated last year
- LoRA for arbitrary JAX models and functions ☆144 · Updated last year
- Implementation of Flash Attention in Jax ☆225 · Updated last year
- Minimal yet performant LLM examples in pure JAX ☆236 · Updated 2 weeks ago
- Accelerate and optimize performance with streamlined training and serving options in JAX. ☆334 · Updated 3 weeks ago
- A simple library for scaling up JAX programs ☆144 · Updated 2 months ago
- jax-triton contains integrations between JAX and OpenAI Triton ☆437 · Updated last month
- Experiment of using Tangent to autodiff triton ☆82 · Updated 2 years ago
- Jax/Flax rewrite of Karpathy's nanoGPT ☆63 · Updated 2 years ago
- ☆124 · Updated last year
- A flexible and efficient implementation of Flash Attention 2.0 for JAX, supporting multiple backends (GPU/TPU/CPU) and platforms (Triton/… ☆34 · Updated 10 months ago
- Distributed pretraining of large language models (LLMs) on cloud TPU slices, with Jax and Equinox. ☆24 · Updated last year
- Minimal but scalable implementation of large language models in JAX ☆35 · Updated 2 months ago
- Fast, Modern, and Low Precision PyTorch Optimizers ☆121 · Updated last month
- A FlashAttention implementation for JAX with support for efficient document mask computation and context parallelism. ☆157 · Updated 2 months ago
- Named Tensors for Legible Deep Learning in JAX ☆218 · Updated 2 months ago