nshepperd / flash_attn_jax
JAX bindings for Flash Attention v2
☆79 · Updated 4 months ago
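For orientation, here is a minimal, hedged usage sketch of the bindings. The `flash_mha` entry point and its `is_causal` keyword are assumptions about this project's public API (based on its README as commonly described), and the tensor shapes and dtype shown are illustrative, not verified against the current release:

```python
# Minimal sketch, assuming flash_attn_jax exposes `flash_mha` with
# [batch, seq_len, num_heads, head_dim] inputs. Both the function name
# and the keyword arguments are assumptions, not a verified API.
import jax
import jax.numpy as jnp

from flash_attn_jax import flash_mha  # assumed entry point

key = jax.random.PRNGKey(0)
kq, kk, kv = jax.random.split(key, 3)

# Flash Attention kernels typically require fp16/bf16 inputs on GPU.
q = jax.random.normal(kq, (2, 1024, 8, 64), dtype=jnp.float16)
k = jax.random.normal(kk, (2, 1024, 8, 64), dtype=jnp.float16)
v = jax.random.normal(kv, (2, 1024, 8, 64), dtype=jnp.float16)

# Causal self-attention over the sequence dimension.
out = flash_mha(q, k, v, is_causal=True)
print(out.shape)  # (2, 1024, 8, 64)
```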
Related projects
Alternatives and complementary repositories for flash_attn_jax
- Accelerated First Order Parallel Associative Scan ☆163 · Updated 3 months ago
- seqax = sequence modeling + JAX ☆133 · Updated 4 months ago
- ☆132 · Updated last year
- A simple library for scaling up JAX programs ☆127 · Updated 2 weeks ago
- Experiment of using Tangent to autodiff triton ☆72 · Updated 9 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆112 · Updated 2 months ago
- ☆73 · Updated 4 months ago
- LoRA for arbitrary JAX models and functions ☆132 · Updated 8 months ago
- Minimal but scalable implementation of large language models in JAX ☆26 · Updated 2 weeks ago
- Some preliminary explorations of Mamba's context scaling. ☆191 · Updated 9 months ago
- ☆45 · Updated 9 months ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆85 · Updated 6 months ago
- ☆74 · Updated 11 months ago
- A set of Python scripts that make your experience on TPU better ☆40 · Updated 4 months ago
- ☆36 · Updated 10 months ago
- ☆53 · Updated 10 months ago
- A library for unit scaling in PyTorch ☆105 · Updated 2 weeks ago
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆113 · Updated 7 months ago
- Triton-based implementation of Sparse Mixture of Experts. ☆185 · Updated last month
- Understand and test language model architectures on synthetic tasks. ☆162 · Updated 6 months ago
- ☆77 · Updated 5 months ago
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆61 · Updated 7 months ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆43 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆95 · Updated 6 months ago
- Machine Learning eXperiment Utilities ☆45 · Updated 5 months ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆66 · Updated 5 months ago
- ☆50 · Updated 6 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆214 · Updated 3 months ago
- If it quacks like a tensor... ☆52 · Updated last week
- Here we will test various linear attention designs. ☆56 · Updated 6 months ago