divyamakkar0 / JAXformer
A zero-to-one guide on scaling modern transformers with n-dimensional parallelism.
☆112 · Updated 2 weeks ago
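JAXformer's subject, n-dimensional parallelism, means laying devices out as a multi-dimensional mesh and sharding arrays along named mesh axes. Below is a minimal sketch using JAX's public `jax.sharding` API; the axis names and array shapes are illustrative assumptions, not code from the guide itself.

```python
# A minimal sketch of mesh-based (n-dimensional) parallelism in JAX.
# Axis names and shapes are illustrative, not JAXformer's actual API.
import jax
import jax.numpy as jnp
from jax.experimental import mesh_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Arrange all available devices into a 2-D mesh: one axis for data
# parallelism, one for model (tensor) parallelism. With a single CPU
# device this degenerates to a 1x1 mesh and still runs.
n_devices = len(jax.devices())
devices = mesh_utils.create_device_mesh((n_devices, 1))
mesh = Mesh(devices, axis_names=("data", "model"))

# Shard the batch dimension across the "data" axis and replicate
# across "model". Batch size is a multiple of the data-axis size so
# the split is even.
x = jnp.ones((n_devices * 4, 1024))
x = jax.device_put(x, NamedSharding(mesh, P("data", None)))
print(x.sharding)  # shows how the array is laid out across the mesh
```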
Alternatives and similar repositories for JAXformer
Users interested in JAXformer are comparing it to the repositories listed below.
- MoE training for Me and You and maybe other people ☆315 · Updated last week
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆195 · Updated 7 months ago
- Simple Transformer in Jax ☆140 · Updated last year
- NanoGPT-speedrunning for the poor T4 enjoyers ☆73 · Updated 8 months ago
- Minimal yet performant LLM examples in pure JAX ☆226 · Updated last week
- ☆116 · Updated last week
- SIMD quantization kernels ☆92 · Updated 4 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆181 · Updated 6 months ago
- ☆287 · Updated last year
- ☆92 · Updated last year
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆109 · Updated 10 months ago
- seqax = sequence modeling + JAX ☆169 · Updated 5 months ago
- Custom triton kernels for training Karpathy's nanoGPT. ☆19 · Updated last year
- Dion optimizer algorithm ☆416 · Updated last week
- ☆224 · Updated last month
- ☆27 · Updated last year
- rl from zero pretrain, can it be done? yes.