Alternatives and similar repositories for nanodo (☆306, updated Jul 15, 2024)
Users interested in nanodo are comparing it to the libraries listed below.
- seqax = sequence modeling + JAX (☆188, updated Jul 23, 2025)
- A simple, performant and scalable Jax LLM! (☆2,170, updated this week)
- (no description) (☆570, updated Jul 11, 2024)
- Minimal but scalable implementation of large language models in JAX (☆35, updated Nov 28, 2025)
- (no description) (☆16, updated Oct 20, 2025)
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax (☆698, updated Jan 26, 2026)
- Minimal (truly) muP implementation, consistent with the TP4 and TP5 papers' notation (☆14, updated Jan 2, 2026)
- A set of Python scripts that make your experience on TPU better (☆56, updated Sep 18, 2025)
- A JAX research toolkit for building, editing, and visualizing neural networks. (☆1,873, updated Jun 22, 2025)
- Train very large language models in Jax. (☆210, updated Oct 21, 2023)
- JAX implementation of the Mistral 7b v0.2 model (☆35, updated Jul 3, 2024)
- (no description) (☆27, updated Jul 9, 2024)
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training (☆132, updated Apr 17, 2024)
- Minimal yet performant LLM examples in pure JAX (☆245, updated Jan 14, 2026)
- Distributed pretraining of large language models (LLMs) on cloud TPU slices, with Jax and Equinox. (☆25, updated Sep 29, 2024)
- Compositional Linear Algebra (☆510, updated Aug 1, 2025)
- (no description) (☆35, updated Apr 12, 2024)
- (no description) (☆92, updated Jul 5, 2024)
- Supporting PyTorch FSDP for optimizers (☆84, updated Dec 8, 2024)
- WIP (☆94, updated Aug 13, 2024)
- A simple library for scaling up JAX programs (☆146, updated Nov 4, 2025)
- Einsum-like high-level array sharding API for JAX (☆34, updated Jul 16, 2024)
- Simple implementation of muP, based on Spectral Condition for Feature Learning. The implementation is SGD-only; don't use it for Adam (☆86, updated Jul 28, 2024)
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… (☆416, updated this week)
- An implementation of the Llama architecture, to instruct and delight (☆21, updated May 31, 2025)
- A library for unit scaling in PyTorch (☆133, updated Jul 11, 2025)
- Library for reading and processing ML training data. (☆694, updated this week)
- maximal update parametrization (µP) (☆1,690, updated Jul 17, 2024)
- JAX Synergistic Memory Inspector (☆185, updated Jul 16, 2024)
- Implementation of Diffusion Transformer (DiT) in JAX (☆308, updated Jun 11, 2024)
- CLU lets you write beautiful training loops in JAX. (☆367, updated Mar 3, 2026)
- Simple Transformer in Jax (☆143, updated Jun 22, 2024)
- Code for the paper "Function-Space Learning Rates" (☆25, updated Jun 3, 2025)
- (no description) (☆28, updated Sep 22, 2025)
- Experiment of using Tangent to autodiff triton (☆82, updated Jan 22, 2024)
- (no description) (☆54, updated May 20, 2024)
- (no description) (☆22, updated Apr 22, 2024)
- 🧱 Modula software package (☆324, updated Aug 18, 2025)
- The simplest, fastest repository for training/finetuning medium-sized GPTs. (☆190, updated Jan 19, 2026)
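Several of the entries above (the muP repos and the spectral-condition implementation) revolve around width-robust learning-rate scaling. As a rough, hedged illustration of the idea, not code taken from any of the listed repositories, the spectral-condition prescription for SGD scales each layer's learning rate by fan_out / fan_in, so hyperparameters tuned on a narrow model transfer to a wider one:

```python
# Hedged sketch of muP-style per-layer SGD learning-rate scaling,
# following the "spectral condition" view (SGD only, not Adam).
# The rule eta_layer = base_lr * fan_out / fan_in is an assumption
# stated here for illustration; consult the listed repos for the
# exact prescriptions they implement.

def mup_sgd_lr(base_lr: float, fan_in: int, fan_out: int) -> float:
    """Scale a base learning rate for one layer under SGD."""
    return base_lr * fan_out / fan_in

# Square hidden layers keep the base rate regardless of width ...
lr_hidden = mup_sgd_lr(0.1, fan_in=1024, fan_out=1024)   # 0.1
# ... while a readout layer's rate shrinks as the model widens.
lr_readout = mup_sgd_lr(0.1, fan_in=1024, fan_out=32)    # 0.003125
print(lr_hidden, lr_readout)
```

Under this rule, widening every hidden layer by the same factor leaves the hidden learning rates unchanged, which is what makes a single tuning run on a small proxy model reusable at scale.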