zinccat / flaxattention
☆25 · Updated last year
Alternatives and similar repositories for flaxattention
Users interested in flaxattention are comparing it to the libraries listed below.
- Accelerated First Order Parallel Associative Scan ☆193 · Updated last week
- 🧱 Modula software package ☆316 · Updated 4 months ago
- Supporting PyTorch FSDP for optimizers ☆84 · Updated last year
- JAX bindings for Flash Attention v2 ☆102 · Updated this week
- Understand and test language model architectures on synthetic tasks. ☆247 · Updated 3 months ago
- JAX Synergistic Memory Inspector ☆183 · Updated last year
- seqax = sequence modeling + JAX ☆169 · Updated 5 months ago
- If it quacks like a tensor... ☆59 · Updated last year
- ☆287 · Updated last year
- LoRA for arbitrary JAX models and functions ☆143 · Updated last year
- Minimal yet performant LLM examples in pure JAX ☆223 · Updated 3 weeks ago
- A set of Python scripts that make your experience on TPU better ☆55 · Updated 3 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆181 · Updated 6 months ago
- A library for unit scaling in PyTorch ☆133 · Updated 5 months ago
- ☆92 · Updated last year
- A simple library for scaling up JAX programs ☆144 · Updated last month
- Experiment in using Tangent to autodiff Triton ☆81 · Updated last year
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. ☆32 · Updated 6 months ago
- Accelerate and optimize performance with streamlined training and serving options in JAX. ☆328 · Updated this week
- JAX implementation of the Mistral 7B v0.2 model ☆35 · Updated last year
- Code implementing "Efficient Parallelization of a Ubiquitous Sequential Computation" (Heinsen, 2023) ☆98 · Updated last year
- Named Tensors for Legible Deep Learning in JAX ☆215 · Updated last month
- JAX implementation of the Llama 2 model ☆216 · Updated last year
- A port of the Mistral-7B model to JAX ☆32 · Updated last year
- JAX/Flax rewrite of Karpathy's nanoGPT ☆62 · Updated 2 years ago
- Implementation of Flash Attention in JAX ☆222 · Updated last year
- A flexible and efficient implementation of Flash Attention 2.0 for JAX, supporting multiple backends (GPU/TPU/CPU) and platforms (Triton/…) ☆33 · Updated 9 months ago
- Efficient optimizers ☆279 · Updated last week
- MoE training for Me and You and maybe other people ☆298 · Updated 2 weeks ago
- Minimal but scalable implementation of large language models in JAX ☆35 · Updated last month