jax-ml / scaling-book
Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs
☆260 · Updated 2 weeks ago
Alternatives and similar repositories for scaling-book:
Users interested in scaling-book are comparing it to the libraries listed below.
- ☆217 · Updated 9 months ago
- seqax = sequence modeling + JAX ☆155 · Updated last month
- ☆109 · Updated this week
- jax-triton contains integrations between JAX and OpenAI Triton ☆393 · Updated last week
- 🧱 Modula software package ☆188 · Updated last month
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆131 · Updated last year
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆569 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆536 · Updated this week
- Named Tensors for Legible Deep Learning in JAX ☆172 · Updated last week
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆180 · Updated this week
- ☆450 · Updated 9 months ago
- ☆166 · Updated this week
- ☆184 · Updated 2 months ago
- ☆297 · Updated last week
- Puzzles for exploring transformers ☆346 · Updated 2 years ago
- ☆186 · Updated last week
- ☆431 · Updated 6 months ago
- A stand-alone implementation of several NumPy dtype extensions used in machine learning. ☆261 · Updated this week
- Accelerated First Order Parallel Associative Scan ☆182 · Updated 8 months ago
- Scalable and Performant Data Loading ☆252 · Updated this week
- Cost-aware hyperparameter tuning algorithm ☆151 · Updated 10 months ago
- Universal Tensor Operations in Einstein-Inspired Notation for Python. ☆369 · Updated last month
- ring-attention experiments ☆140 · Updated 6 months ago
- A simple library for scaling up JAX programs ☆134 · Updated 6 months ago
- Orbax provides common checkpointing and persistence utilities for JAX users ☆376 · Updated this week
- ☆202 · Updated 2 weeks ago
- JAX bindings for Flash Attention v2 ☆89 · Updated 9 months ago
- JAX-Toolbox ☆301 · Updated this week
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… ☆380 · Updated this week
- Run PyTorch in JAX. 🤝 ☆240 · Updated 2 months ago