google-deepmind / nanodo
☆274, updated last year
Alternatives and similar repositories for nanodo
Users interested in nanodo are comparing it to the libraries listed below.
- seqax = sequence modeling + JAX (☆165, updated last week)
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax (☆627, updated this week)
- 🧱 Modula software package (☆210, updated this week)
- ☆137, updated last week
- A simple library for scaling up JAX programs (☆140, updated 9 months ago)
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs (☆440, updated last week)
- Efficient optimizers (☆252, updated last week)
- JAX Synergistic Memory Inspector (☆177, updated last year)
- Cost-aware hyperparameter tuning algorithm (☆166, updated last year)
- LoRA for arbitrary JAX models and functions (☆140, updated last year)
- A MAD laboratory to improve AI architecture designs 🧪 (☆123, updated 7 months ago)
- Minimal but scalable implementation of large language models in JAX (☆35, updated last week)
- The simplest, fastest repository for training/finetuning medium-sized GPTs (☆149, updated last month)
- ☆232, updated 5 months ago
- Distributed pretraining of large language models (LLMs) on cloud TPU slices, with Jax and Equinox (☆24, updated 10 months ago)
- Supporting PyTorch FSDP for optimizers (☆84, updated 7 months ago)
- Named Tensors for Legible Deep Learning in JAX (☆194, updated this week)
- ☆443, updated 9 months ago
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… (☆389, updated this week)
- Puzzles for exploring transformers (☆355, updated 2 years ago)
- Understand and test language model architectures on synthetic tasks (☆221, updated 2 weeks ago)
- jax-triton contains integrations between JAX and OpenAI Triton (☆411, updated last month)
- Accelerated First Order Parallel Associative Scan (☆184, updated 11 months ago)
- Implementation of Diffusion Transformer (DiT) in JAX (☆280, updated last year)
- Implementation of PSGD optimizer in JAX (☆34, updated 7 months ago)
- ☆347, updated this week
- JAX implementation of the Llama 2 model (☆219, updated last year)
- Minimal (400 LOC) implementation of Maximum (multi-node, FSDP) GPT training (☆130, updated last year)
- Accelerate and optimize performance with streamlined training and serving options in JAX (☆293, updated last week)
- ☆82, updated last year