erfanzar / jax-flash-attn2
Flash Attention Implementation with Multiple Backend Support and Sharding
This module provides a flexible implementation of Flash Attention with support for different backends (GPU, TPU, CPU) and platforms (Triton, Pallas, JAX).
☆23 · Updated 2 months ago
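The page does not show the library's API, so as a point of reference the sketch below is a plain (non-fused) scaled dot-product attention baseline in JAX: this is the computation that a Flash Attention backend (Triton, Pallas, or a pure-JAX fallback) replaces with a fused, memory-efficient kernel. All names in the snippet are illustrative and are not part of jax-flash-attn2.

```python
# Minimal reference attention in plain JAX (illustrative, not the library's API).
# A Flash Attention kernel computes the same result without materializing the
# full (seq_len x seq_len) score matrix in memory.
import jax
import jax.numpy as jnp


def reference_attention(q, k, v, causal=False):
    """q, k, v: [batch, heads, seq_len, head_dim] arrays (assumed layout)."""
    scale = 1.0 / jnp.sqrt(q.shape[-1])
    scores = jnp.einsum("bhqd,bhkd->bhqk", q, k) * scale
    if causal:
        seq_len = q.shape[-2]
        mask = jnp.tril(jnp.ones((seq_len, seq_len), dtype=bool))
        scores = jnp.where(mask, scores, -jnp.inf)
    weights = jax.nn.softmax(scores, axis=-1)
    return jnp.einsum("bhqk,bhkd->bhqd", weights, v)


if __name__ == "__main__":
    key = jax.random.PRNGKey(0)
    q, k, v = (jax.random.normal(k_, (1, 4, 128, 64))
               for k_ in jax.random.split(key, 3))
    out = reference_attention(q, k, v, causal=True)
    print(out.shape)  # (1, 4, 128, 64)
```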
Alternatives and similar repositories for jax-flash-attn2:
Users interested in jax-flash-attn2 are comparing it to the libraries listed below.
- Minimal but scalable implementation of large language models in JAX ☆32 · Updated 3 months ago
- (EasyDel Former) is a utility library designed to simplify and enhance development in JAX ☆24 · Updated this week
- Machine Learning eXperiment Utilities ☆46 · Updated 8 months ago
- A set of Python scripts that make your experience on TPU better ☆48 · Updated 7 months ago
- ☆20 · Updated last year
- ☆75 · Updated 7 months ago
- JAX bindings for Flash Attention v2 ☆85 · Updated 7 months ago
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆68 · Updated 10 months ago
- Accelerate and optimize performance with streamlined training and serving options in JAX ☆226 · Updated this week
- ☆30 · Updated 2 months ago
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆82 · Updated last year
- Parallel Associative Scan for Language Models ☆18 · Updated last year
- Triton implementation of the HyperAttention algorithm ☆46 · Updated last year
- ☆58 · Updated 2 years ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆95 · Updated 3 months ago
- ☆51 · Updated 9 months ago
- If it quacks like a tensor... ☆56 · Updated 3 months ago
- A simple library for scaling up JAX programs ☆129 · Updated 3 months ago
- Code for the NeurIPS 2024 Spotlight paper "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆70 · Updated 3 months ago
- ☆47 · Updated last year
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers ☆17 · Updated 2 weeks ago
- Learn CUDA with PyTorch ☆16 · Updated 3 weeks ago
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. ☆30 · Updated 2 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆103 · Updated 2 months ago
- JAX implementation of the Mistral 7B v0.2 model ☆35 · Updated 7 months ago
- ☆42 · Updated last year
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆81 · Updated 3 weeks ago
- Custom Triton kernels for training Karpathy's nanoGPT ☆16 · Updated 4 months ago
- LoRA for arbitrary JAX models and functions ☆135 · Updated 11 months ago
- Inference code for LLaMA models in JAX ☆114 · Updated 9 months ago