KellerJordan / hlb-CIFAR10
Train to 94% on CIFAR-10 in 4.4 seconds on a single A100
⭐12 · Updated last year
Alternatives and similar repositories for hlb-CIFAR10
Users interested in hlb-CIFAR10 are comparing it to the libraries listed below.
- Simple Transformer in Jax ⭐138 · Updated last year
- 🧱 Modula software package ⭐207 · Updated 3 months ago
- A set of Python scripts that makes your experience on TPU better ⭐55 · Updated last year
- Supporting PyTorch FSDP for optimizers ⭐83 · Updated 7 months ago
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ⭐85 · Updated last year
- Efficient optimizers ⭐234 · Updated last week
- ⭐20 · Updated 2 years ago
- seqax = sequence modeling + JAX ⭐165 · Updated last month
- JAX implementation of the Llama 2 model ⭐219 · Updated last year
- ⭐80 · Updated last year
- LoRA for arbitrary JAX models and functions ⭐140 · Updated last year
- Experiment of using Tangent to autodiff triton ⭐79 · Updated last year
- Minimal (400 LOC) implementation Maximum (multi-node, FSDP) GPT training ⭐129 · Updated last year
- ⭐53 · Updated last year
- ⭐274 · Updated last year
- ⭐49 · Updated last year
- ⭐135 · Updated this week
- Inference code for LLaMA models in JAX ⭐118 · Updated last year
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ⭐348 · Updated 11 months ago
- Fast, Modern, and Low Precision PyTorch Optimizers ⭐98 · Updated this week
- If it quacks like a tensor... ⭐58 · Updated 8 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ⭐123 · Updated 7 months ago
- ⭐61 · Updated 3 years ago
- Solve puzzles. Learn CUDA. ⭐64 · Updated last year
- Distributed pretraining of large language models (LLMs) on cloud TPU slices, with Jax and Equinox. ⭐24 · Updated 9 months ago
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ⭐263 · Updated 4 months ago
- DeMo: Decoupled Momentum Optimization ⭐189 · Updated 7 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ⭐68 · Updated 2 months ago
- Pytorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ⭐179 · Updated last month
- JAX implementation of the Mistral 7b v0.2 model ⭐35 · Updated last year