fattorib / ZeRO-transformer
Two implementations of ZeRO-1 optimizer sharding in JAX
☆13 · Updated last year
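ZeRO-1 refers to the first stage of the ZeRO technique: optimizer state is partitioned across data-parallel workers, each worker applies the update only to the parameter slice it owns, and the updated slices are all-gathered back into full parameters. The sketch below simulates that partition-update-gather pattern in plain NumPy on a single host (a real JAX implementation would place shards on devices with `pmap`/`shard_map`); the function and variable names (`zero1_momentum_step`, `n_shards`) are illustrative and not taken from this repository.

```python
import numpy as np

def zero1_momentum_step(params, grads, momenta, n_shards, lr=0.1, beta=0.9):
    """One ZeRO-1-style SGD-with-momentum step.

    Each of the n_shards simulated devices owns only its slice of the
    momentum buffer (the optimizer state), updates its parameter slice,
    and the full parameter vector is then reassembled (the all-gather).
    """
    # Every device sees the full gradient, as after a data-parallel all-reduce.
    param_shards = np.array_split(params, n_shards)
    grad_shards = np.array_split(grads, n_shards)
    new_param_shards, new_momenta = [], []
    for p, g, m in zip(param_shards, grad_shards, momenta):
        m = beta * m + g                  # optimizer state lives on one shard only
        new_param_shards.append(p - lr * m)  # local update of the owned slice
        new_momenta.append(m)
    # All-gather: every device reconstructs the full updated parameters.
    return np.concatenate(new_param_shards), new_momenta

params = np.ones(8, dtype=np.float32)
grads = np.full(8, 0.5, dtype=np.float32)
momenta = [np.zeros(2, dtype=np.float32) for _ in range(4)]  # 1/4 of state per shard
params, momenta = zero1_momentum_step(params, grads, momenta, n_shards=4)
```

The memory saving comes from each worker holding only `1/n_shards` of the optimizer state (here, the momentum buffer) rather than a full replica, at the cost of an all-gather of parameters after each step.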
Alternatives and similar repositories for ZeRO-transformer:
Users interested in ZeRO-transformer are comparing it to the repositories listed below.
- Minimal but scalable implementation of large language models in JAX ☆32 · Updated 3 months ago
- Distributed pretraining of large language models (LLMs) on cloud TPU slices, with Jax and Equinox. ☆23 · Updated 4 months ago
- A simple library for scaling up JAX programs ☆129 · Updated 3 months ago
- ☆75 · Updated 7 months ago
- JAX bindings for Flash Attention v2 ☆85 · Updated 7 months ago
- seqax = sequence modeling + JAX ☆143 · Updated 7 months ago
- If it quacks like a tensor... ☆56 · Updated 3 months ago
- A library for unit scaling in PyTorch ☆122 · Updated 2 months ago
- Experiment of using Tangent to autodiff triton ☆75 · Updated last year
- LoRA for arbitrary JAX models and functions ☆135 · Updated 11 months ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆43 · Updated 7 months ago
- A set of Python scripts that makes your experience on TPU better ☆48 · Updated 7 months ago
- JAX implementation of the Mistral 7b v0.2 model ☆35 · Updated 7 months ago
- Transformer with Mu-Parameterization, implemented in Jax/Flax. Supports FSDP on TPU pods. ☆30 · Updated 2 months ago
- ☆21 · Updated 3 months ago
- Named Tensors for Legible Deep Learning in JAX ☆161 · Updated this week
- Experimenting with how best to do multi-host dataloading ☆10 · Updated 2 years ago
- Inference code for LLaMA models in JAX ☆114 · Updated 9 months ago
- ☆211 · Updated 7 months ago
- This repository contains the experimental PyTorch native float8 training UX ☆221 · Updated 6 months ago
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated last month
- ☆20 · Updated last year
- some common Huggingface transformers in maximal update parametrization (µP) ☆78 · Updated 2 years ago
- supporting pytorch FSDP for optimizers ☆76 · Updated 2 months ago
- Einsum-like high-level array sharding API for JAX ☆33 · Updated 7 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆116 · Updated 2 months ago
- FlashRNN - Fast RNN Kernels with I/O Awareness ☆75 · Updated 2 months ago
- Simple and efficient pytorch-native transformer training and inference (batched) ☆68 · Updated 10 months ago
- Triton Implementation of HyperAttention Algorithm ☆46 · Updated last year
- A user-friendly tool chain that enables the seamless execution of ONNX models using JAX as the backend. ☆107 · Updated 3 weeks ago