google-deepmind / nanodo
★285 · Updated last year
Alternatives and similar repositories for nanodo
Users interested in nanodo are comparing it to the libraries listed below.
- 🧱 Modula software package · ★316 · Updated 4 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax · ★687 · Updated 3 weeks ago
- seqax = sequence modeling + JAX · ★169 · Updated 5 months ago
- Minimal yet performant LLM examples in pure JAX · ★219 · Updated 2 weeks ago
- A simple library for scaling up JAX programs · ★144 · Updated last month
- JAX Synergistic Memory Inspector · ★183 · Updated last year
- LoRA for arbitrary JAX models and functions · ★143 · Updated last year
- Named Tensors for Legible Deep Learning in JAX · ★215 · Updated last month
- Efficient optimizers · ★277 · Updated this week
- Minimal but scalable implementation of large language models in JAX · ★35 · Updated 3 weeks ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. · ★180 · Updated 5 months ago
- Cost-aware hyperparameter tuning algorithm · ★176 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 · ★135 · Updated last year
- Accelerate and optimize performance with streamlined training and serving options in JAX. · ★325 · Updated last week
- Implementation of the PSGD optimizer in JAX · ★35 · Updated 11 months ago
- ★234 · Updated 10 months ago
- Jax/Flax rewrite of Karpathy's nanoGPT · ★62 · Updated 2 years ago
- Implementation of Diffusion Transformer (DiT) in JAX · ★298 · Updated last year
- Understand and test language model architectures on synthetic tasks. · ★246 · Updated 2 months ago
- Distributed pretraining of large language models (LLMs) on cloud TPU slices, with Jax and Equinox. · ★24 · Updated last year
- Accelerated First Order Parallel Associative Scan · ★193 · Updated last year
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds · ★335 · Updated last month
- ★91 · Updated last year
- jax-triton contains integrations between JAX and OpenAI Triton · ★436 · Updated last week
- Supporting PyTorch FSDP for optimizers · ★84 · Updated last year
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… · ★404 · Updated this week
- ★460 · Updated last year
- Dion optimizer algorithm · ★404 · Updated last week
- Library for reading and processing ML training data. · ★633 · Updated this week
- ★229 · Updated last year