nebius / kvax
A FlashAttention implementation for JAX with support for efficient document mask computation and context parallelism.
☆112 · Updated last month
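The description above mentions efficient document-mask computation: a document mask is a block-diagonal attention mask that keeps tokens from attending across packed-sequence boundaries. The sketch below is not kvax's API; it uses only plain JAX (`jax.nn.dot_product_attention`, available in recent JAX releases) with made-up shapes and segment IDs to show what such a mask computes, whereas a FlashAttention kernel like kvax applies the same masking blockwise without materializing the full attention matrix.

```python
# Minimal sketch, assuming nothing about kvax's API: build a block-diagonal
# "document mask" from per-token segment IDs and apply it with JAX's
# built-in attention (requires a recent JAX with jax.nn.dot_product_attention).
import jax
import jax.numpy as jnp

batch, seq_len, num_heads, head_dim = 1, 8, 2, 16
kq, kk, kv = jax.random.split(jax.random.PRNGKey(0), 3)
q = jax.random.normal(kq, (batch, seq_len, num_heads, head_dim))
k = jax.random.normal(kk, (batch, seq_len, num_heads, head_dim))
v = jax.random.normal(kv, (batch, seq_len, num_heads, head_dim))

# Two documents packed into one sequence: tokens 0-4 are doc 0, tokens 5-7 are doc 1.
segment_ids = jnp.array([[0, 0, 0, 0, 0, 1, 1, 1]])  # (batch, seq_len)

# Document mask: a query token may only attend to key tokens with the same
# segment ID. Shape (batch, 1, q_len, kv_len); the head axis broadcasts.
doc_mask = (segment_ids[:, :, None] == segment_ids[:, None, :])[:, None, :, :]

# Combine with a causal mask and run standard (unfused) attention. A fused
# FlashAttention kernel would apply this logic per block instead of
# materializing these seq_len x seq_len boolean arrays.
causal = jnp.tril(jnp.ones((seq_len, seq_len), dtype=bool))[None, None, :, :]
out = jax.nn.dot_product_attention(q, k, v, mask=doc_mask & causal)
print(out.shape)  # (1, 8, 2, 16)
```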
Alternatives and similar repositories for kvax
Users interested in kvax are comparing it to the libraries listed below.
- ☆109 · Updated this week
- A simple library for scaling up JAX programs ☆134 · Updated 6 months ago
- Minimal but scalable implementation of large language models in JAX ☆34 · Updated 6 months ago
- FlashRNN - Fast RNN Kernels with I/O Awareness ☆86 · Updated last month
- supporting pytorch FSDP for optimizers ☆80 · Updated 5 months ago
- Efficient optimizers ☆193 · Updated this week
- Experiment of using Tangent to autodiff triton ☆78 · Updated last year
- JAX implementation of the Mistral 7b v0.2 model ☆35 · Updated 10 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆120 · Updated last week
- Distributed pretraining of large language models (LLMs) on cloud TPU slices, with Jax and Equinox. ☆24 · Updated 7 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆44 · Updated this week
- LoRA for arbitrary JAX models and functions ☆136 · Updated last year
- ☆217 · Updated 10 months ago
- JMP is a Mixed Precision library for JAX. ☆198 · Updated 3 months ago
- ☆79 · Updated 10 months ago
- ☆71 · Updated 8 months ago
- ☆14 · Updated 10 months ago
- An implementation of PSGD Kron second-order optimizer for PyTorch ☆91 · Updated last month
- Accelerated First Order Parallel Associative Scan ☆182 · Updated 8 months ago
- JAX bindings for Flash Attention v2 ☆88 · Updated 10 months ago
- Einsum-like high-level array sharding API for JAX ☆34 · Updated 10 months ago
- Two implementations of ZeRO-1 optimizer sharding in JAX ☆14 · Updated last year
- seqax = sequence modeling + JAX ☆155 · Updated last month
- Machine Learning eXperiment Utilities ☆46 · Updated 11 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆65 · Updated 3 weeks ago
- minGPT in JAX ☆48 · Updated 3 years ago
- PyTorch per step fault tolerance (actively under development) ☆300 · Updated this week
- Transformer with Mu-Parameterization, implemented in Jax/Flax. Supports FSDP on TPU pods. ☆30 · Updated this week
- 🧱 Modula software package ☆189 · Updated last month
- Official implementation of the paper "Linear Transformers with Learnable Kernel Functions are Better In-Context Models" ☆159 · Updated 4 months ago