zinccat / flaxattention
☆21 · Updated 10 months ago
Alternatives and similar repositories for flaxattention
Users interested in flaxattention are comparing it to the libraries listed below.
- supporting pytorch FSDP for optimizers ☆84 · Updated 8 months ago
- Accelerated First Order Parallel Associative Scan ☆187 · Updated last year
- seqax = sequence modeling + JAX ☆166 · Updated last month
- Fast, Modern, and Low Precision PyTorch Optimizers ☆109 · Updated last week
- ☆275 · Updated last year
- 🧱 Modula software package ☆231 · Updated 2 weeks ago
- Understand and test language model architectures on synthetic tasks. ☆222 · Updated last month
- ☆87 · Updated last year
- JAX bindings for Flash Attention v2 ☆90 · Updated 2 weeks ago
- If it quacks like a tensor... ☆59 · Updated 9 months ago
- LoRA for arbitrary JAX models and functions ☆142 · Updated last year
- A set of Python scripts that makes your experience on TPU better ☆54 · Updated last year
- Minimal yet performant LLM examples in pure JAX ☆150 · Updated last week
- Experiment of using Tangent to autodiff triton ☆80 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆156 · Updated 2 months ago
- JAX Synergistic Memory Inspector ☆179 · Updated last year
- A JAX-native LLM Post-Training Library ☆128 · Updated this week
- A library for unit scaling in PyTorch ☆130 · Updated last month
- Transformer with Mu-Parameterization, implemented in Jax/Flax. Supports FSDP on TPU pods. ☆32 · Updated 2 months ago
- Efficient optimizers ☆259 · Updated last month
- JAX implementation of the Llama 2 model ☆219 · Updated last year
- Custom triton kernels for training Karpathy's nanoGPT. ☆19 · Updated 10 months ago
- ☆118 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆127 · Updated 8 months ago
- Two implementations of ZeRO-1 optimizer sharding in JAX ☆14 · Updated 2 years ago
- Distributed pretraining of large language models (LLMs) on cloud TPU slices, with Jax and Equinox. ☆24 · Updated 11 months ago
- A fusion of a linear layer and a cross entropy loss, written for pytorch in triton. ☆70 · Updated last year
- Inference code for LLaMA models in JAX ☆118 · Updated last year
- Implementation of Flash Attention in Jax ☆216 · Updated last year
- A simple library for scaling up JAX programs ☆143 · Updated 10 months ago