divyamakkar0 / JAXformer
A zero-to-one guide on scaling modern transformers with n-dimensional parallelism.
☆114 · Updated last month
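
For context on what "n-dimensional parallelism" means in JAX, here is a minimal sketch of sharding an array over a named device mesh. It is not taken from JAXformer itself; the mesh axis names and array shapes are illustrative assumptions.

```python
# Minimal sketch of mesh-based sharding in JAX -- illustrative only,
# not code from the JAXformer repo. Axis names "data"/"model" and the
# array shapes are assumptions made for this example.
import jax
import jax.numpy as jnp
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Lay out whatever devices are available on a 2D mesh (n_devices x 1).
devices = np.array(jax.devices()).reshape(-1, 1)
mesh = Mesh(devices, axis_names=("data", "model"))

# Shard the batch dimension over the "data" axis; replicate over "model".
x = jnp.ones((8, 512))
x = jax.device_put(x, NamedSharding(mesh, P("data", None)))

# jit-compiled work runs distributed according to the array's sharding.
y = jax.jit(lambda a: a @ a.T)(x)
print(y.sharding)
```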
Alternatives and similar repositories for JAXformer
Users interested in JAXformer are comparing it to the libraries listed below.
- MoE training for Me and You and maybe other people · ☆331 · Updated last month
- NanoGPT-speedrunning for the poor T4 enjoyers · ☆73 · Updated 9 months ago
- Simple Transformer in Jax · ☆142 · Updated last year
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand · ☆197 · Updated 8 months ago
- Minimal yet performant LLM examples in pure JAX · ☆236 · Updated 3 weeks ago
- ☆289 · Updated last year
- seqax = sequence modeling + JAX · ☆170 · Updated 6 months ago
- SIMD quantization kernels · ☆94 · Updated 4 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! · ☆68 · Updated this week
- Quantized LLM training in pure CUDA/C++. · ☆235 · Updated 2 weeks ago
- ☆116 · Updated last week
- Compiling useful links, papers, benchmarks, ideas, etc. · ☆46 · Updated 10 months ago
- 🧱 Modula software package · ☆322 · Updated 5 months ago
- Small scale distributed training of sequential deep learning models, built on Numpy and MPI. · ☆155 · Updated 2 years ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. · ☆186 · Updated 2 weeks ago
- Dion optimizer algorithm · ☆424 · Updated 2 weeks ago
- ☆27 · Updated last year
- A FlashAttention implementation for JAX with support for efficient document mask computation and context parallelism. · ☆157 · Updated 2 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) · ☆110 · Updated 10 months ago
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* · ☆86 · Updated 2 years ago
- This repository contains a simple llama3 implementation in pure JAX. · ☆71 · Updated 11 months ago
- ☆92 · Updated last year
- Solve puzzles. Learn CUDA. · ☆63 · Updated 2 years ago
- supporting pytorch FSDP for optimizers · ☆84 · Updated last year
- PCCL (Prime Collective Communications Library) implements fault tolerant collective communications over IP · ☆141 · Updated 4 months ago
- Cost aware hyperparameter tuning algorithm · ☆177 · Updated last year
- look how they massacred my boy · ☆63 · Updated last year
- Custom triton kernels for training Karpathy's nanoGPT. · ☆19 · Updated last year
- ☆230 · Updated 2 months ago
- rl from zero pretrain, can it be done? yes. · ☆286 · Updated 4 months ago