mlcommons / algorithmic-efficiency
MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvements in both training algorithms and models.
★367 · Updated this week
Alternatives and similar repositories for algorithmic-efficiency:
Users interested in algorithmic-efficiency are comparing it to the libraries listed below.
- For optimization algorithm research and development. ★497 · Updated this week
- ★418 · Updated 4 months ago
- Modula software package ★151 · Updated this week
- jax-triton contains integrations between JAX and OpenAI Triton ★381 · Updated last month
- ★212 · Updated 7 months ago
- Named tensors with first-class dimensions for PyTorch ★321 · Updated last year
- Automatic gradient descent ★207 · Updated last year
- ★219 · Updated 2 weeks ago
- Compositional Linear Algebra ★462 · Updated 3 weeks ago
- ★286 · Updated last week
- Orbax provides common checkpointing and persistence utilities for JAX users ★340 · Updated this week
- ★161 · Updated 3 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ★550 · Updated this week
- A library for unit scaling in PyTorch ★123 · Updated 3 months ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ★515 · Updated last week
- Library for reading and processing ML training data. ★392 · Updated this week
- PyTorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ★168 · Updated 2 months ago
- CLU lets you write beautiful training loops in JAX. ★332 · Updated 3 weeks ago
- Efficient optimizers ★177 · Updated last week
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… ★480 · Updated 3 weeks ago
- TensorDict is a PyTorch-dedicated tensor container. ★889 · Updated this week
- Puzzles for exploring transformers ★333 · Updated last year
- Run PyTorch in JAX. ★222 · Updated 2 weeks ago
- ★182 · Updated last week
- ASDL: Automatic Second-order Differentiation Library for PyTorch ★184 · Updated 2 months ago
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ★210 · Updated this week
- JMP is a Mixed Precision library for JAX. ★192 · Updated last month
- Named Tensors for Legible Deep Learning in JAX ★163 · Updated this week
- ★301 · Updated 8 months ago
- seqax = sequence modeling + JAX ★145 · Updated this week