mlcommons / algorithmic-efficiency
MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvements in both training algorithms and models.
★390 · Updated this week
Alternatives and similar repositories for algorithmic-efficiency
Users interested in algorithmic-efficiency are comparing it to the libraries listed below.
- For optimization algorithm research and development. ★530 · Updated this week
- 🧱 Modula software package ★225 · Updated last week
- Named tensors with first-class dimensions for PyTorch ★331 · Updated 2 years ago
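The entry above refers to first-class dimension objects in the torchdim style. A minimal sketch of that style, assuming the `functorch.dim` import path and the `dims`/`.order` API from the project's documentation:

```python
import torch
from functorch.dim import dims  # assumed import path for first-class dims

# Create three first-class dimension objects; indexing plain tensors with
# them binds those dims, and subsequent ops broadcast by dim identity.
i, j, k = dims(3)
A = torch.randn(3, 4)
B = torch.randn(4, 5)

# A[i, j] and B[j, k] share j, so multiply-and-sum over j is a matmul;
# .order(i, k) converts the named dims back to positional ones.
C = (A[i, j] * B[j, k]).sum(j).order(i, k)  # shape (3, 5)
```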
- jax-triton contains integrations between JAX and OpenAI Triton ★414 · Updated 2 months ago
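A hedged sketch of calling a Triton kernel from JAX in the style of the jax-triton README (the `triton_call` signature is taken from memory and may drift across versions):

```python
import jax
import jax.numpy as jnp
import jax_triton as jt
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, block_size: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    offsets = tl.program_id(axis=0) * block_size + tl.arange(0, block_size)
    tl.store(out_ptr + offsets, tl.load(x_ptr + offsets) + tl.load(y_ptr + offsets))

def add(x, y):
    out_shape = jax.ShapeDtypeStruct(shape=x.shape, dtype=x.dtype)
    # triton_call launches the Triton kernel from inside a (jit-able) JAX program.
    return jt.triton_call(x, y, kernel=add_kernel, out_shape=out_shape,
                          grid=(x.size // 8,), block_size=8)

x = jnp.arange(8, dtype=jnp.float32)
print(jax.jit(add)(x, x))
```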
- Universal Tensor Operations in Einstein-Inspired Notation for Python. ★399 · Updated 4 months ago
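The notation mentioned above marks the axes an operation acts on with brackets, batching over the rest. A small sketch, assuming the top-level functional API from the einx docs:

```python
import numpy as np
import einx  # assumed top-level functional API

x = np.random.rand(16, 64, 3)

y = einx.sum("b [s] c", x)                 # reduce the bracketed s axis -> (16, 3)
m = einx.mean("b [s] c", x)                # same axes, different reduction
z = einx.rearrange("b s c -> b (s c)", x)  # einops-style reshaping -> (16, 192)
```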
- PyTorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ★179 · Updated this week
- Compositional Linear Algebra ★489 · Updated 3 weeks ago
- Library for reading and processing ML training data. ★505 · Updated this week
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and JAX ★646 · Updated this week
- TensorDict is a dedicated tensor container for PyTorch. ★955 · Updated last week
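TensorDict's point is that a dict of tensors sharing leading batch dimensions can be indexed, reshaped, and moved as one object; a short sketch:

```python
import torch
from tensordict import TensorDict

# All entries share the leading batch dims declared in batch_size.
td = TensorDict(
    {"obs": torch.zeros(4, 3, 84), "reward": torch.zeros(4, 3, 1)},
    batch_size=[4, 3],
)
row = td[0]             # indexing applies to every entry -> batch_size [3]
flat = td.reshape(12)   # so does reshaping over the batch dims ...
cpu = td.to("cpu")      # ... and device movement
print(flat["obs"].shape)  # torch.Size([12, 84])
```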
- A JAX-based library for building transformers; includes implementations of GPT, Gemma, LLaMA, Mixtral, Whisper, Swin, ViT, and more. ★290 · Updated 11 months ago
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ★284 · Updated last month
- Puzzles for exploring transformers ★366 · Updated 2 years ago
- Unofficial JAX implementations of deep learning research papers ★156 · Updated 3 years ago
- JAX Synergistic Memory Inspector ★179 · Updated last year
- A library for unit scaling in PyTorch ★129 · Updated last month
- Orbax provides common checkpointing and persistence utilities for JAX users ★415 · Updated this week
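A minimal Orbax sketch for saving and restoring a pytree of arrays, assuming the `PyTreeCheckpointer` entry point (newer Orbax versions prefer other checkpointer types):

```python
import jax.numpy as jnp
import orbax.checkpoint as ocp

state = {"params": {"w": jnp.ones((2, 2))}, "step": jnp.asarray(100)}

# PyTreeCheckpointer saves/restores arbitrary pytrees; the path is
# illustrative and must not already contain a checkpoint.
checkpointer = ocp.PyTreeCheckpointer()
checkpointer.save("/tmp/ckpt_demo", state)
restored = checkpointer.restore("/tmp/ckpt_demo")
print(restored["step"])
```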
- Annotated version of the Mamba paper ★488 · Updated last year
- CLU lets you write beautiful training loops in JAX. ★355 · Updated 2 months ago
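A sketch of the common CLU metrics pattern seen in Flax examples; the exact class names are assumptions from memory:

```python
import flax
import jax.numpy as jnp
from clu import metrics

# Declare the collection once; instances are immutable and merge
# functionally, which plays well with jitted train steps.
@flax.struct.dataclass
class TrainMetrics(metrics.Collection):
    loss: metrics.Average.from_output("loss")

m = TrainMetrics.single_from_model_output(loss=jnp.asarray(0.4))
m = m.merge(TrainMetrics.single_from_model_output(loss=jnp.asarray(0.2)))
print(m.compute())  # {'loss': 0.3}
```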
- Efficient optimizers ★256 · Updated 3 weeks ago
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs ★523 · Updated this week
- ASDL: Automatic Second-order Differentiation Library for PyTorch ★189 · Updated 8 months ago
- Run PyTorch in JAX. 🤗 ★283 · Updated this week
- Implementation of Flash Attention in JAX ★216 · Updated last year
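To close, a sketch of the online-softmax idea behind Flash Attention (an illustration of the technique, not that repository's code): attention is computed over key/value blocks with a running max and normalizer, so the full score matrix never materializes.

```python
import jax
import jax.numpy as jnp

def flash_attention(q, k, v, block_size=128):
    """Single-head attention via streaming softmax; assumes block_size
    evenly divides the key length."""
    T, d = q.shape
    scale = 1.0 / jnp.sqrt(d)
    num_blocks = k.shape[0] // block_size

    def body(carry, blk):
        m, l, o = carry  # running max, normalizer, unnormalized output
        kb = jax.lax.dynamic_slice_in_dim(k, blk * block_size, block_size)
        vb = jax.lax.dynamic_slice_in_dim(v, blk * block_size, block_size)
        s = (q @ kb.T) * scale                  # [T, block] scores
        m_new = jnp.maximum(m, s.max(axis=-1))  # updated running max
        p = jnp.exp(s - m_new[:, None])
        corr = jnp.exp(m - m_new)               # rescale earlier partial sums
        return (m_new, l * corr + p.sum(axis=-1), o * corr[:, None] + p @ vb), None

    init = (jnp.full((T,), -jnp.inf), jnp.zeros((T,)), jnp.zeros((T, d)))
    (_, l, o), _ = jax.lax.scan(body, init, jnp.arange(num_blocks))
    return o / l[:, None]

key = jax.random.PRNGKey(0)
q = jax.random.normal(key, (4, 16))
k = jax.random.normal(key, (256, 16))
v = jax.random.normal(key, (256, 16))
print(flash_attention(q, k, v).shape)  # (4, 16)
```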