mlcommons / algorithmic-efficiency
MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvements in both training algorithms and models.
☆372 · Updated this week
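The benchmark scores training algorithms by time-to-result: how quickly a submission reaches a fixed quality target on standardized workloads. The toy sketch below is illustrative only, written in plain PyTorch rather than the benchmark's own harness or submission API; it simply compares how many steps and how much wall-clock time two stock optimizers need to reach a loss target on a synthetic regression task.

```python
# Illustrative only: a toy "time-to-target" measurement, not the benchmark's API.
import time
import torch

def steps_to_target(optimizer_cls, lr, target=0.01, max_steps=5000):
    torch.manual_seed(0)                  # same data and init for every optimizer
    x = torch.randn(256, 10)
    y = x @ torch.randn(10, 1)            # synthetic linear-regression data
    model = torch.nn.Linear(10, 1)
    opt = optimizer_cls(model.parameters(), lr=lr)
    start = time.perf_counter()
    for step in range(max_steps):
        loss = torch.nn.functional.mse_loss(model(x), y)
        if loss.item() < target:          # stop as soon as the target is reached
            break
        opt.zero_grad()
        loss.backward()
        opt.step()
    return step, time.perf_counter() - start

print("SGD :", steps_to_target(torch.optim.SGD, lr=0.1))
print("Adam:", steps_to_target(torch.optim.Adam, lr=0.1))
```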
Alternatives and similar repositories for algorithmic-efficiency:
Users interested in algorithmic-efficiency are comparing it to the libraries listed below.
- For optimization algorithm research and development. ☆502 · Updated this week
- jax-triton contains integrations between JAX and OpenAI Triton. ☆388 · Updated last week
- TensorDict is a tensor container dedicated to PyTorch (sketch below this list). ☆905 · Updated this week
- ☆215 · Updated 8 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax. ☆564 · Updated this week
- Compositional Linear Algebra. ☆467 · Updated this week
- Universal Tensor Operations in Einstein-Inspired Notation for Python (sketch below this list). ☆364 · Updated last month
- 🧱 Modula software package. ☆187 · Updated last week
- Orbax provides common checkpointing and persistence utilities for JAX users (sketch below this list). ☆357 · Updated this week
- ☆293 · Updated last week
- ☆221 · Updated last month
- Named tensors with first-class dimensions for PyTorch. ☆321 · Updated last year
- Annotated version of the Mamba paper. ☆478 · Updated last year
- CLU lets you write beautiful training loops in JAX. ☆335 · Updated last month
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆526 · Updated last month
- functorch provides JAX-like composable function transforms for PyTorch (sketch below this list). ☆1,417 · Updated this week
- Library for reading and processing ML training data. ☆420 · Updated this week
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… ☆485 · Updated 2 weeks ago
- ☆185 · Updated last week
- ASDL: Automatic Second-order Differentiation Library for PyTorch. ☆185 · Updated 4 months ago
- Run PyTorch in JAX. 🤝 ☆232 · Updated last month
- ☆302 · Updated 9 months ago
- ☆423 · Updated 5 months ago
- OpTree: Optimized PyTree Utilities (sketch below this list). ☆175 · Updated this week
- Helpful tools and examples for working with flex-attention (sketch below this list). ☆707 · Updated this week
- ☆345 · Updated this week
- A Jax-based library for designing and training transformer models from scratch. ☆284 · Updated 7 months ago
- ☆844 · Updated this week
- Pytorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ☆171 · Updated last week
- TorchFix - a linter for PyTorch-using code with autofix support. ☆137 · Updated last month
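A minimal sketch of the TensorDict idea from the list above: a dictionary-style container whose entries share leading batch dimensions, so indexing and device moves apply to every tensor at once. The keys and shapes here are made up for illustration.

```python
import torch
from tensordict import TensorDict

# Group tensors that share a leading batch dimension of 32.
data = TensorDict(
    {"observation": torch.randn(32, 4), "reward": torch.randn(32, 1)},
    batch_size=[32],
)

first = data[0]          # index every entry along the shared batch dimension
moved = data.to("cpu")   # move all entries together (use "cuda" on a GPU machine)
print(first["observation"].shape)  # torch.Size([4])
```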
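For the einx entry ("Universal Tensor Operations in Einstein-Inspired Notation"), a short sketch of its bracket notation for reductions; this assumes the `einx` package and follows its README-style expressions, with the array and axis names invented here.

```python
import numpy as np
import einx

x = np.random.rand(4, 8)

transposed = einx.rearrange("a b -> b a", x)  # pure axis reordering
row_sums = einx.sum("a [b]", x)               # reduce over the bracketed axis only
print(transposed.shape, row_sums.shape)       # (8, 4) (4,)
```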
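For the Orbax entry, a rough sketch of saving and restoring a JAX pytree with the basic `PyTreeCheckpointer`; the directory path and pytree contents are arbitrary, and newer Orbax releases also offer higher-level checkpoint-manager APIs not shown here.

```python
import jax.numpy as jnp
import orbax.checkpoint as ocp

state = {"params": {"w": jnp.ones((2, 2))}, "step": 0}

checkpointer = ocp.PyTreeCheckpointer()
# Writes the pytree to disk; the target directory should not already exist.
checkpointer.save("/tmp/orbax_demo_ckpt", state)
restored = checkpointer.restore("/tmp/orbax_demo_ckpt")
print(restored["step"])
```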
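For the functorch entry: the same composable transforms now ship in core PyTorch under `torch.func`, which this sketch uses. The loss function and shapes are invented for illustration; it computes per-example gradients by composing `grad` with `vmap`.

```python
import torch
from torch.func import grad, vmap

def loss(w, x):
    return ((w * x).sum() - 1.0) ** 2

w = torch.randn(3)
xs = torch.randn(8, 3)

# grad(loss) differentiates with respect to w; vmap maps it over the batch of x.
per_example_grads = vmap(grad(loss), in_dims=(None, 0))(w, xs)
print(per_example_grads.shape)  # torch.Size([8, 3])
```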
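For the OpTree entry, a small sketch of its pytree utilities (flatten, unflatten, map); the nested container is arbitrary.

```python
import optree

tree = {"a": (1, 2), "b": [3, {"c": 4}]}

# Flatten to leaves plus a treedef, transform the leaves, then rebuild.
leaves, treedef = optree.tree_flatten(tree)
rebuilt = optree.tree_unflatten(treedef, [x * 2 for x in leaves])

# tree_map does the same traversal in one call.
mapped = optree.tree_map(lambda x: x * 2, tree)
print(rebuilt == mapped)  # True
```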
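For the flex-attention entry: the underlying FlexAttention API lives in recent PyTorch releases (2.5+) under `torch.nn.attention.flex_attention`, where attention variants are expressed as a `score_mod` callable. The causal mask below is the standard introductory example; shapes are arbitrary.

```python
import torch
from torch.nn.attention.flex_attention import flex_attention

def causal(score, b, h, q_idx, kv_idx):
    # Keep scores where the query position can see the key, mask the rest.
    return torch.where(q_idx >= kv_idx, score, float("-inf"))

q = torch.randn(1, 2, 128, 64)  # (batch, heads, seq_len, head_dim)
k = torch.randn(1, 2, 128, 64)
v = torch.randn(1, 2, 128, 64)

out = flex_attention(q, k, v, score_mod=causal)
print(out.shape)  # torch.Size([1, 2, 128, 64])
```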