nebius / kvax
A FlashAttention implementation for JAX with support for efficient document mask computation and context parallelism.
☆139 · Updated 5 months ago
Alternatives and similar repositories for kvax
Users interested in kvax are comparing it to the libraries listed below.
- Minimal yet performant LLM examples in pure JAX ☆158 · Updated last week
- JAX-Toolbox ☆337 · Updated this week
- A simple library for scaling up JAX programs ☆143 · Updated 10 months ago
- seqax = sequence modeling + JAX ☆167 · Updated last month
- A JAX-native LLM Post-Training Library ☆144 · Updated this week
- Minimal but scalable implementation of large language models in JAX ☆35 · Updated 2 weeks ago
- jax-triton contains integrations between JAX and OpenAI Triton ☆423 · Updated 2 weeks ago
- JAX bindings for Flash Attention v2 ☆91 · Updated last week
- ☆281 · Updated last year
- Experiment of using Tangent to autodiff triton ☆81 · Updated last year
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆57 · Updated this week
- Tokamax: A GPU and TPU kernel library. ☆41 · Updated this week
- Accelerated First Order Parallel Associative Scan ☆188 · Updated last year
- ☆67 · Updated 10 months ago
- JAX implementation of the Mistral 7b v0.2 model ☆36 · Updated last year
- Dion optimizer algorithm ☆343 · Updated 2 weeks ago
- ☆88 · Updated last year
- Implementation of Flash Attention in Jax ☆217 · Updated last year
- PyTorch Single Controller ☆419 · Updated this week
- Attention Kernels for Symmetric Power Transformers ☆116 · Updated 3 weeks ago
- JMP is a Mixed Precision library for JAX. ☆208 · Updated 7 months ago
- LoRA for arbitrary JAX models and functions ☆142 · Updated last year
- Distributed pretraining of large language models (LLMs) on cloud TPU slices, with Jax and Equinox. ☆24 · Updated 11 months ago
- 🧱 Modula software package ☆237 · Updated last month
- Einsum-like high-level array sharding API for JAX ☆35 · Updated last year
- ☆22 · Updated 10 months ago
- Load compute kernels from the Hub ☆283 · Updated this week
- Implementation of Diffusion Transformer (DiT) in JAX ☆291 · Updated last year
- A flexible and efficient implementation of Flash Attention 2.0 for JAX, supporting multiple backends (GPU/TPU/CPU) and platforms (Triton/… ☆27 · Updated 6 months ago
- supporting pytorch FSDP for optimizers ☆84 · Updated 9 months ago