KellerJordan / cifar10-airbench
CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds
☆291 · Updated last month
Alternatives and similar repositories for cifar10-airbench
Users interested in cifar10-airbench are comparing it to the libraries listed below.
- Efficient optimizers ☆261 · Updated last month
- ☆208 · Updated 9 months ago
- For optimization algorithm research and development. ☆534 · Updated last week
- supporting pytorch FSDP for optimizers ☆84 · Updated 9 months ago
- 🧱 Modula software package ☆233 · Updated 3 weeks ago
- The AdEMAMix Optimizer: Better, Faster, Older. ☆186 · Updated last year
- ☆279 · Updated last year
- Accelerated First Order Parallel Associative Scan ☆187 · Updated last year
- Dion optimizer algorithm ☆338 · Updated last week
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ☆290 · Updated 3 months ago
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… ☆392 · Updated last week
- Normalized Transformer (nGPT) ☆188 · Updated 9 months ago
- Implementation of Diffusion Transformer (DiT) in JAX ☆291 · Updated last year
- Code and weights for the paper "Cluster and Predict Latent Patches for Improved Masked Image Modeling" ☆118 · Updated 5 months ago
- Getting crystal-like representations with harmonic loss ☆194 · Updated 5 months ago
- Annotated version of the Mamba paper ☆489 · Updated last year
- WIP ☆94 · Updated last year
- A library for unit scaling in PyTorch ☆130 · Updated 2 months ago
- ☆87 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆157 · Updated 2 months ago
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores ☆331 · Updated 8 months ago
- ☆299 · Updated 4 months ago
- Library for Jacobian descent with PyTorch. It enables the optimization of neural networks with multiple losses (e.g. multi-task learning)… ☆266 · Updated this week
- ☆57 · Updated 11 months ago
- Universal Tensor Operations in Einstein-Inspired Notation for Python. ☆408 · Updated 5 months ago
- Pytorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ☆180 · Updated this week
- Simple, minimal implementation of the Mamba SSM in one pytorch file. Using logcumsumexp (Heisen sequence). ☆122 · Updated 10 months ago
- When it comes to optimizers, it's always better to be safe than sorry ☆367 · Updated 2 weeks ago
- ☆307 · Updated last year
- seqax = sequence modeling + JAX ☆166 · Updated last month