google-deepmind / nanodo
⭐287 · Updated last year
Alternatives and similar repositories for nanodo
Users interested in nanodo are comparing it to the libraries listed below.
- seqax = sequence modeling + JAX ⭐169 · Updated 5 months ago
- 🧱 Modula software package ⭐322 · Updated 4 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ⭐688 · Updated 2 weeks ago
- Minimal yet performant LLM examples in pure JAX ⭐226 · Updated last week
- A simple library for scaling up JAX programs ⭐144 · Updated 2 months ago
- JAX Synergistic Memory Inspector ⭐183 · Updated last year
- Efficient optimizers ⭐280 · Updated 3 weeks ago
- Cost aware hyperparameter tuning algorithm ⭐177 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ⭐181 · Updated 6 months ago
- LoRA for arbitrary JAX models and functions ⭐143 · Updated last year
- Named Tensors for Legible Deep Learning in JAX ⭐215 · Updated 2 months ago
- JAX implementation of the Llama 2 model ⭐215 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ⭐136 · Updated last year
- Minimal but scalable implementation of large language models in JAX ⭐35 · Updated last month
- Implementation of Diffusion Transformer (DiT) in JAX ⭐300 · Updated last year
- Accelerate and optimize performance with streamlined training and serving options in JAX. ⭐328 · Updated this week
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ⭐340 · Updated last month
- ⭐233 · Updated 11 months ago
- Accelerated First Order Parallel Associative Scan ⭐192 · Updated this week
- Supporting PyTorch FSDP for optimizers ⭐84 · Updated last year
- Distributed pretraining of large language models (LLMs) on cloud TPU slices, with Jax and Equinox. ⭐24 · Updated last year
- ⭐461 · Updated last year
- ⭐92 · Updated last year
- Implementation of PSGD optimizer in JAX ⭐35 · Updated last year
- Understand and test language model architectures on synthetic tasks. ⭐248 · Updated 3 months ago
- jax-triton contains integrations between JAX and OpenAI Triton ⭐436 · Updated last month
- MoE training for Me and You and maybe other people ⭐315 · Updated last week
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… ⭐406 · Updated this week
- Train very large language models in Jax. ⭐210 · Updated 2 years ago
- Puzzles for exploring transformers ⭐382 · Updated 2 years ago