google-deepmind / nanodo
⭐215 · Updated 8 months ago
Alternatives and similar repositories for nanodo:
Users interested in nanodo are comparing it to the libraries listed below.
- seqax = sequence modeling + JAX ⭐153 · Updated this week
- 🧱 Modula software package ⭐187 · Updated 2 weeks ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ⭐565 · Updated this week
- LoRA for arbitrary JAX models and functions ⭐136 · Updated last year
- A simple library for scaling up JAX programs ⭐134 · Updated 5 months ago
- ⭐97 · Updated this week
- Named Tensors for Legible Deep Learning in JAX ⭐170 · Updated 2 weeks ago
- A MAD laboratory to improve AI architecture designs 🧪 ⭐109 · Updated 3 months ago
- Cost aware hyperparameter tuning algorithm ⭐150 · Updated 9 months ago
- Efficient optimizers ⭐188 · Updated this week
- JAX Synergistic Memory Inspector ⭐171 · Updated 8 months ago
- For optimization algorithm research and development. ⭐505 · Updated this week
- jax-triton contains integrations between JAX and OpenAI Triton ⭐389 · Updated this week
- Implementation of Diffusion Transformer (DiT) in JAX ⭐270 · Updated 10 months ago
- ⭐152 · Updated this week
- ⭐224 · Updated 2 months ago
- JAX implementation of the Llama 2 model ⭐217 · Updated last year
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs ⭐237 · Updated last week
- Accelerated First Order Parallel Associative Scan ⭐180 · Updated 7 months ago
- ⭐186 · Updated this week
- Puzzles for exploring transformers ⭐339 · Updated last year
- Understand and test language model architectures on synthetic tasks. ⭐190 · Updated last month
- ⭐76 · Updated 9 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ⭐103 · Updated 4 months ago
- Inference code for LLaMA models in JAX ⭐116 · Updated 10 months ago
- Run PyTorch in JAX. 🤗 ⭐234 · Updated last month
- Orbax provides common checkpointing and persistence utilities for JAX users ⭐362 · Updated this week
- supporting pytorch FSDP for optimizers ⭐80 · Updated 4 months ago
- ⭐173 · Updated 4 months ago
- Library for reading and processing ML training data. ⭐421 · Updated this week
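One entry above offers "LoRA for arbitrary JAX models and functions". As a rough illustration of the low-rank adaptation idea itself, and not of that repository's API, a minimal LoRA forward pass in JAX could be sketched as follows (all names and shapes here are hypothetical):

```python
import jax
import jax.numpy as jnp

def lora_forward(x, W, A, B, alpha=2.0):
    """y = x @ (W + (alpha/r) * A @ B), without materializing the full update.

    W is the frozen base weight; A (d_in, r) and B (r, d_out) are the small
    trainable low-rank factors. Scaling by alpha/r follows the LoRA paper.
    """
    r = A.shape[1]
    return x @ W + (alpha / r) * (x @ A) @ B

d_in, d_out, r = 8, 4, 2
W = jax.random.normal(jax.random.PRNGKey(0), (d_in, d_out))
A = jax.random.normal(jax.random.PRNGKey(1), (d_in, r))
B = jnp.zeros((r, d_out))  # B starts at zero, so the adapter is a no-op initially
x = jnp.ones((1, d_in))
y = lora_forward(x, W, A, B)  # equals x @ W while B is zero
```

Initializing B to zero is the standard choice: the adapted model starts out exactly equal to the base model, and only gradient updates to A and B move it away.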
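The "Accelerated First Order Parallel Associative Scan" entry concerns evaluating first-order linear recurrences in parallel. A minimal sketch of that technique using JAX's built-in `jax.lax.associative_scan` (function and variable names are illustrative, not that repository's API):

```python
import jax
import jax.numpy as jnp

# Parallel evaluation of the recurrence h_t = a_t * h_{t-1} + b_t
# (with h_0 = 0). Each element (a_t, b_t) represents the affine map
# h -> a_t * h + b_t; composing two such maps is associative, which is
# what makes the parallel scan valid.

def combine(left, right):
    a_l, b_l = left
    a_r, b_r = right
    # right ∘ left: h -> a_r * (a_l * h + b_l) + b_r
    return a_r * a_l, a_r * b_l + b_r

def linear_scan(a, b):
    _, h = jax.lax.associative_scan(combine, (a, b))
    return h  # h_t for t = 1..T, assuming h_0 = 0

a = jnp.full(3, 0.5)
b = jnp.ones(3)
print(linear_scan(a, b))  # values 1.0, 1.5, 1.75
```

The sequential check: h_1 = 0.5·0 + 1 = 1, h_2 = 0.5·1 + 1 = 1.5, h_3 = 0.5·1.5 + 1 = 1.75, matching the parallel result in O(log T) depth instead of O(T).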