mlcommons/algorithmic-efficiency
MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvements in both training algorithms and models.
☆375 · Updated last week
Alternatives and similar repositories for algorithmic-efficiency:
Users interested in algorithmic-efficiency are comparing it to the libraries listed below.
- For optimization algorithm research and development. ☆508 · Updated this week
- Named tensors with first-class dimensions for PyTorch ☆320 · Updated last year
- Compositional Linear Algebra ☆474 · Updated 3 weeks ago
- Universal Tensor Operations in Einstein-Inspired Notation for Python (see the notation sketch after this list). ☆367 · Updated 2 weeks ago
- 🧱 Modula software package ☆188 · Updated 3 weeks ago
- CLU lets you write beautiful training loops in JAX. ☆337 · Updated 2 weeks ago
- jax-triton contains integrations between JAX and OpenAI Triton. ☆391 · Updated 2 weeks ago
- functorch is JAX-like composable function transforms for PyTorch (see the per-example-gradient sketch after this list). ☆1,422 · Updated this week
- TensorDict is a dedicated tensor container for PyTorch (see the sketch after this list). ☆911 · Updated this week
- A Jax-based library for designing and training small transformers. ☆286 · Updated 7 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆569 · Updated this week
- Orbax provides common checkpointing and persistence utilities for JAX users (see the checkpointing sketch after this list). ☆370 · Updated this week
- BackPACK, a backpropagation package built on top of PyTorch that efficiently computes quantities other than the gradient (see the per-sample-gradient sketch after this list). ☆578 · Updated 3 months ago
- TorchOpt is an efficient library for differentiable optimization built upon PyTorch. ☆581 · Updated 2 weeks ago
- Library for reading and processing ML training data. ☆432 · Updated this week
- Type annotations and runtime checking for shape and dtype of JAX/NumPy/PyTorch/etc. arrays (see the jaxtyping sketch after this list). https://docs.kidger.site/jaxtyping/ ☆1,385 · Updated this week
- Run PyTorch in JAX. 🤝 ☆236 · Updated 2 months ago
- JAX Synergistic Memory Inspector ☆172 · Updated 9 months ago
- Annotated version of the Mamba paper ☆483 · Updated last year
- Hardware accelerated, batchable and differentiable optimizers in JAX (see the solver sketch after this list). ☆960 · Updated last week
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimentation… ☆489 · Updated last week
- Automatic gradient descent ☆207 · Updated last year
- ASDL: Automatic Second-order Differentiation Library for PyTorch ☆185 · Updated 4 months ago
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ☆230 · Updated last month
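
Short usage sketches for a few of the entries above follow. First, the Einstein-inspired-notation entry: a minimal sketch assuming it refers to the einx library (the listing does not name the repository), showing a rearrangement and an axis-bracketed reduction.

```python
# Minimal sketch; assumption: the "Einstein-Inspired Notation" entry is einx.
import numpy as np
import einx

x = np.random.rand(4, 16, 8)                  # (batch, sequence, channels)

# Rearrange: flatten batch and sequence into a single axis.
flat = einx.rearrange("b s c -> (b s) c", x)  # shape (64, 8)

# Reduce: the bracketed axis [s] is the one averaged over.
pooled = einx.mean("b [s] c", x)              # shape (4, 8)
```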
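
The functorch entry: in recent PyTorch releases the same composable transforms also live in torch.func. A minimal sketch of per-example gradients by composing vmap with grad:

```python
# Minimal sketch of composable function transforms (torch.func mirrors functorch).
import torch
from torch.func import grad, vmap

def loss(w, x):
    # Scalar loss for a single example x.
    return (x @ w).sum()

w = torch.randn(3)
xs = torch.randn(5, 3)

# vmap over the batch axis of xs; grad differentiates w.r.t. the first argument (w).
per_example_grads = vmap(grad(loss), in_dims=(None, 0))(w, xs)
print(per_example_grads.shape)  # torch.Size([5, 3])
```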
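
The TensorDict entry: a minimal sketch of the container's batched indexing and device handling, which apply to every stored tensor at once.

```python
# Minimal TensorDict sketch: a dict-like container with a shared batch shape.
import torch
from tensordict import TensorDict

td = TensorDict(
    {"obs": torch.randn(32, 4), "reward": torch.randn(32, 1)},
    batch_size=[32],
)

sub = td[:8]               # indexing slices every entry along the batch dims
td_cpu = td.to("cpu")      # device moves apply to all entries at once
print(sub["obs"].shape)    # torch.Size([8, 4])
```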
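
The Orbax entry: a minimal checkpointing sketch following the basic PyTreeCheckpointer pattern; API details vary across Orbax versions, so treat this as an illustration rather than the canonical interface.

```python
# Minimal Orbax checkpointing sketch (API details vary by version).
import jax.numpy as jnp
import orbax.checkpoint as ocp

state = {"params": {"w": jnp.ones((3, 3))}, "step": 42}

ckptr = ocp.PyTreeCheckpointer()
ckptr.save("/tmp/ckpt_demo", state)        # writes the pytree to disk
restored = ckptr.restore("/tmp/ckpt_demo")
print(restored["step"])
```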
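
The BackPACK entry: a minimal sketch of one "quantity other than the gradient", per-sample gradients, computed alongside the ordinary backward pass via the BatchGrad extension.

```python
# Minimal BackPACK sketch: per-sample gradients through an extended model.
import torch
from backpack import backpack, extend
from backpack.extensions import BatchGrad

model = extend(torch.nn.Linear(10, 1))
lossfn = extend(torch.nn.MSELoss())

X, y = torch.randn(8, 10), torch.randn(8, 1)
loss = lossfn(model(X), y)

with backpack(BatchGrad()):
    loss.backward()

for p in model.parameters():
    print(p.grad_batch.shape)  # per-sample gradients, leading dim 8
```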
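
The jaxtyping entry: a minimal sketch of shape/dtype annotations, checked at runtime when paired with a checker such as beartype (the pairing jaxtyping's docs describe).

```python
# Minimal jaxtyping sketch: symbolic shapes checked at call time.
import jax.numpy as jnp
from beartype import beartype
from jaxtyping import Array, Float, jaxtyped

@jaxtyped(typechecker=beartype)
def matvec(m: Float[Array, "rows cols"], v: Float[Array, "cols"]) -> Float[Array, "rows"]:
    return m @ v

out = matvec(jnp.ones((3, 4)), jnp.ones(4))  # OK: "cols" binds consistently to 4
# matvec(jnp.ones((3, 4)), jnp.ones(5))      # would raise a shape mismatch error
```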
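
The differentiable-optimizers entry: a minimal solver sketch assuming it refers to JAXopt (the listing does not name the repository), minimizing a simple quadratic.

```python
# Minimal sketch; assumption: the entry is the JAXopt library.
import jax.numpy as jnp
from jaxopt import GradientDescent

def loss(w):
    # Quadratic with minimum at w = 2 in every coordinate.
    return jnp.sum((w - 2.0) ** 2)

solver = GradientDescent(fun=loss, maxiter=100)
result = solver.run(jnp.zeros(3))
print(result.params)  # close to [2., 2., 2.]
```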