hiverge / cifar10-speedrun
CIFAR-10 speedrun: Trains to 94% accuracy in 1.98 seconds on a single NVIDIA A100 GPU.
☆40 · Updated 3 weeks ago
Alternatives and similar repositories for cifar10-speedrun
Users interested in cifar10-speedrun are comparing it to the repositories listed below.
- Train with kittens! ☆63 · Updated last year
- SIMD quantization kernels ☆91 · Updated 2 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆132 · Updated 10 months ago
- JAX implementation of the Mistral 7b v0.2 model ☆34 · Updated last year
- ☆103 · Updated 3 months ago
- 🧱 Modula software package ☆300 · Updated 2 months ago
- Attention kernels for Symmetric Power Transformers ☆123 · Updated last month
- Minimal yet performant LLM examples in pure JAX ☆193 · Updated last month
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆193 · Updated last year
- Training code for Sparse Autoencoders on embedding models ☆38 · Updated 8 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆171 · Updated 4 months ago
- ☆38 · Updated last year
- Einsum-like high-level array sharding API for JAX ☆34 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆107 · Updated 8 months ago
- Train to 94% on CIFAR-10 in 4.4 seconds on a single A100 ☆12 · Updated last year
- ☆166 · Updated 2 years ago
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆138 · Updated last month
- ☆60 · Updated 3 years ago
- ☆283 · Updated last year
- ☆221 · Updated 11 months ago
- ☆21 · Updated last year
- Simple Transformer in Jax ☆139 · Updated last year
- gzip predicts data-dependent scaling laws ☆34 · Updated last year
- microjax: a Jax-like function transformation engine, but micro ☆33 · Updated last year
- Latent Program Network (from the "Searching Latent Program Spaces" paper) ☆102 · Updated last month
- Experiment of using Tangent to autodiff Triton ☆79 · Updated last year
- ☆28 · Updated last year
- A zero-to-one guide on scaling modern transformers with n-dimensional parallelism ☆104 · Updated last month
- ☆198 · Updated 2 months ago
- ☆32 · Updated 7 months ago