google-deepmind / nanodo
☆279 · Updated last year
Alternatives and similar repositories for nanodo
Users interested in nanodo are comparing it to the libraries listed below.
- seqax = sequence modeling + JAX · ☆166 · Updated last month
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax · ☆651 · Updated last week
- 🧱 Modula software package · ☆233 · Updated 3 weeks ago
- Minimal yet performant LLM examples in pure JAX · ☆152 · Updated this week
- A simple library for scaling up JAX programs · ☆143 · Updated 10 months ago
- JAX Synergistic Memory Inspector · ☆179 · Updated last year
- LoRA for arbitrary JAX models and functions · ☆142 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. · ☆157 · Updated 2 months ago
- Named Tensors for Legible Deep Learning in JAX · ☆201 · Updated last week
- Efficient optimizers · ☆261 · Updated last month
- Jax/Flax rewrite of Karpathy's nanoGPT · ☆60 · Updated 2 years ago
- Cost-aware hyperparameter tuning algorithm · ☆169 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 · ☆128 · Updated 8 months ago
- Library for reading and processing ML training data. · ☆531 · Updated this week
- Minimal but scalable implementation of large language models in JAX · ☆35 · Updated last week
- Accelerate and optimize performance with streamlined training and serving options in JAX. · ☆308 · Updated this week
- Accelerated First-Order Parallel Associative Scan · ☆187 · Updated last year
- Puzzles for exploring transformers · ☆368 · Updated 2 years ago
- Distributed pretraining of large language models (LLMs) on cloud TPU slices, with Jax and Equinox. · ☆24 · Updated 11 months ago
- Implementation of Diffusion Transformer (DiT) in JAX · ☆291 · Updated last year
- JAX implementation of the Llama 2 model · ☆219 · Updated last year
- ☆452 · Updated 10 months ago
- jax-triton contains integrations between JAX and OpenAI Triton · ☆416 · Updated last week
- Understand and test language model architectures on synthetic tasks. · ☆224 · Updated 2 months ago
- Dion optimizer algorithm · ☆338 · Updated last week
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… · ☆392 · Updated last week
- Train very large language models in Jax. · ☆208 · Updated last year
- For optimization algorithm research and development. · ☆534 · Updated last week
- Implementation of PSGD optimizer in JAX · ☆34 · Updated 8 months ago
- ☆233 · Updated 7 months ago
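
Many of the repositories above occupy the same niche as nanodo: a small, legible language-model training loop written in pure JAX. As a rough illustration of that shared pattern, here is a minimal sketch of a jitted next-token training step using only core JAX. It is not code from nanodo or any listed project; the toy bag-of-embeddings "model", shapes, and hyperparameters are illustrative assumptions, and the real repos swap in a Transformer.

```python
# Minimal sketch of the "tiny LM training loop in pure JAX" pattern.
# Not taken from any listed repo; model and hyperparameters are toy choices.
import jax
import jax.numpy as jnp

def init_params(key, vocab=256, dim=64):
    k1, k2 = jax.random.split(key)
    return {
        "embed": jax.random.normal(k1, (vocab, dim)) * 0.02,
        "out": jax.random.normal(k2, (dim, vocab)) * 0.02,
    }

def loss_fn(params, tokens):
    # Next-token prediction; a real repo replaces this linear "model"
    # with a Transformer stack.
    x = params["embed"][tokens[:, :-1]]           # (batch, seq-1, dim)
    logits = x @ params["out"]                    # (batch, seq-1, vocab)
    targets = tokens[:, 1:]
    logp = jax.nn.log_softmax(logits, axis=-1)
    nll = -jnp.take_along_axis(logp, targets[..., None], axis=-1)
    return nll.mean()

@jax.jit
def train_step(params, tokens, lr=1e-2):
    # One SGD step; the listed projects typically plug in an optimizer
    # library here instead of raw gradient descent.
    loss, grads = jax.value_and_grad(loss_fn)(params, tokens)
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    return params, loss

key = jax.random.PRNGKey(0)
params = init_params(key)
tokens = jax.random.randint(key, (8, 16), 0, 256)  # fake batch of token ids
params, loss = train_step(params, tokens)
```

The appeal of this style, and of repos like nanodo, seqax, and the nanoGPT ports above, is that the whole training step is a single pure function: `jax.jit` compiles it end to end, and scaling up is largely a matter of sharding the same function across devices.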