divyamakkar0 / JAXformer
A zero-to-one guide on scaling modern transformers with n-dimensional parallelism.
☆103 · Updated 3 weeks ago
Alternatives and similar repositories for JAXformer
Users who are interested in JAXformer are comparing it to the libraries listed below.
- NanoGPT-speedrunning for the poor T4 enjoyers ☆72 · Updated 6 months ago
- Simple Transformer in Jax ☆139 · Updated last year
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆107 · Updated 7 months ago
- Minimal yet performant LLM examples in pure JAX ☆186 · Updated last month
- seqax = sequence modeling + JAX ☆168 · Updated 3 months ago
- ☆283 · Updated last year
- ☆105 · Updated this week
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆193 · Updated 4 months ago
- ☆28 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆166 · Updated 3 months ago
- DeMo: Decoupled Momentum Optimization ☆194 · Updated 10 months ago
- Supporting PyTorch FSDP for optimizers ☆83 · Updated 10 months ago
- PyTorch-native post-training at scale ☆83 · Updated this week
- ☆91 · Updated last year
- Accelerate and optimize performance with streamlined training and serving options in JAX. ☆317 · Updated this week
- Custom Triton kernels for training Karpathy's nanoGPT. ☆19 · Updated last year
- 🧱 Modula software package ☆291 · Updated 2 months ago
- Training-Ready RL Environments + Evals ☆128 · Updated last week
- A FlashAttention implementation for JAX with support for efficient document mask computation and context parallelism. ☆147 · Updated 6 months ago
- Cost-aware hyperparameter tuning algorithm ☆171 · Updated last year
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆87 · Updated last year
- Storing long contexts in tiny caches with self-study ☆201 · Updated last week
- Solve puzzles. Learn CUDA. ☆64 · Updated last year
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆296 · Updated 2 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆131 · Updated 10 months ago
- Collection of autoregressive model implementations ☆86 · Updated 6 months ago
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- A JAX-based library for building transformers; includes implementations of GPT, Gemma, LLaMA, Mixtral, Whisper, Swin, ViT, and more. ☆295 · Updated last year
- Dion optimizer algorithm ☆369 · Updated 3 weeks ago
- look how they massacred my boy ☆63 · Updated last year