fattorib / ZeRO-transformer
Two implementations of ZeRO-1 optimizer sharding in JAX
☆14 · Updated 2 years ago
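For context: ZeRO-1 shards only the optimizer state (not parameters or gradients) across data-parallel workers, so each device stores a 1/N slice of the Adam moments. A minimal sketch of the idea in modern JAX, using `NamedSharding`; shapes, names, and the update rule here are illustrative, not the repo's actual code, and bias correction is omitted for brevity:

```python
import jax
import jax.numpy as jnp
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# One data-parallel axis spanning all local devices.
mesh = Mesh(np.array(jax.devices()), axis_names=("dp",))
replicated = NamedSharding(mesh, P())    # full copy on every device
sharded = NamedSharding(mesh, P("dp"))   # row-sharded across the dp axis

# Parameters stay replicated; ZeRO-1 shards only the optimizer state.
params = jax.device_put(jnp.zeros((1024, 256)), replicated)
mu = jax.device_put(jnp.zeros_like(params), sharded)  # Adam first moment
nu = jax.device_put(jnp.zeros_like(params), sharded)  # Adam second moment

@jax.jit
def adam_step(params, mu, nu, grads, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # Moment updates run on each device's shard; the constraint below tells
    # XLA to all-gather the updated slices back into fully replicated params.
    mu = b1 * mu + (1 - b1) * grads
    nu = b2 * nu + (1 - b2) * grads ** 2
    new_params = params - lr * mu / (jnp.sqrt(nu) + eps)
    new_params = jax.lax.with_sharding_constraint(new_params, replicated)
    return new_params, mu, nu

grads = jax.device_put(jnp.ones((1024, 256)), replicated)  # stand-in gradients
params, mu, nu = adam_step(params, mu, nu, grads)
```

Under GSPMD, mixing the sharded moments with replicated gradients inside `jit` lets the compiler insert the reduce-scatter/all-gather collectives automatically; that is the memory saving ZeRO-1 targets.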
Alternatives and similar repositories for ZeRO-transformer
Users interested in ZeRO-transformer are comparing it to the libraries listed below.
- Minimal but scalable implementation of large language models in JAX ☆35 · Updated 2 weeks ago
- Minimal yet performant LLM examples in pure JAX ☆158 · Updated this week
- seqax = sequence modeling + JAX ☆167 · Updated last month
- JAX bindings for Flash Attention v2 ☆91 · Updated last week
- A simple library for scaling up JAX programs ☆143 · Updated 10 months ago
- ☆188 · Updated 2 weeks ago
- Distributed pretraining of large language models (LLMs) on cloud TPU slices, with Jax and Equinox. ☆24 · Updated 11 months ago
- Experimenting with how best to do multi-host dataloading ☆10 · Updated 2 years ago
- A set of Python scripts that makes your experience on TPU better ☆54 · Updated last year
- Experiment of using Tangent to autodiff triton ☆81 · Updated last year
- LoRA for arbitrary JAX models and functions ☆142 · Updated last year
- A JAX-native LLM Post-Training Library ☆143 · Updated this week
- Custom triton kernels for training Karpathy's nanoGPT. ☆19 · Updated 10 months ago
- A library for unit scaling in PyTorch ☆130 · Updated 2 months ago
- Accelerated First Order Parallel Associative Scan ☆188 · Updated last year
- ☆279 · Updated last year
- ☆330 · Updated this week
- ☆88 · Updated last year
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated last year
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated last year
- Train very large language models in Jax. ☆208 · Updated last year
- JAX implementation of the Mistral 7b v0.2 model ☆36 · Updated last year
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆265 · Updated last month
- Implementation of Flash Attention in Jax ☆216 · Updated last year
- Inference code for LLaMA models in JAX ☆120 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆160 · Updated 2 months ago
- ☆22 · Updated 10 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆129 · Updated 9 months ago
- A FlashAttention implementation for JAX with support for efficient document mask computation and context parallelism. ☆139 · Updated 5 months ago
- ☆21 · Updated 6 months ago