google-deepmind / nanodo
☆281 · Updated last year
Alternatives and similar repositories for nanodo
Users interested in nanodo are comparing it to the libraries listed below.
- seqax = sequence modeling + JAX ☆167 · Updated 2 months ago
- 🧱 Modula software package ☆277 · Updated last month
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆667 · Updated this week
- Minimal yet performant LLM examples in pure JAX ☆177 · Updated last week
- A simple library for scaling up JAX programs ☆143 · Updated 11 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆164 · Updated 3 months ago
- JAX Synergistic Memory Inspector ☆180 · Updated last year
- Efficient optimizers ☆265 · Updated last week
- LoRA for arbitrary JAX models and functions ☆142 · Updated last year
- Minimal but scalable implementation of large language models in JAX ☆35 · Updated last month
- Cost-aware hyperparameter tuning algorithm ☆168 · Updated last year
- Supporting PyTorch FSDP for optimizers ☆84 · Updated 9 months ago
- ☆233 · Updated 7 months ago
- Implementation of Diffusion Transformer (DiT) in JAX ☆291 · Updated last year
- Named Tensors for Legible Deep Learning in JAX ☆205 · Updated 2 weeks ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆129 · Updated 9 months ago
- Puzzles for exploring transformers ☆371 · Updated 2 years ago
- Accelerate and optimize performance with streamlined training and serving options in JAX. ☆311 · Updated this week
- Library for reading and processing ML training data. ☆548 · Updated this week
- For optimization algorithm research and development. ☆539 · Updated last week
- Understand and test language model architectures on synthetic tasks. ☆229 · Updated last week
- ☆215 · Updated 10 months ago
- Accelerated First Order Parallel Associative Scan ☆190 · Updated last year
- ☆455 · Updated 11 months ago
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ☆301 · Updated 2 months ago
- JAX implementation of the Llama 2 model ☆218 · Updated last year
- ☆89 · Updated last year
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… ☆398 · Updated last week
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- Implementation of PSGD optimizer in JAX ☆34 · Updated 9 months ago