KellerJordan / hlb-CIFAR10
Train to 94% on CIFAR-10 in 4.4 seconds on a single A100
☆12 · Updated last year
Alternatives and similar repositories for hlb-CIFAR10:
Users interested in hlb-CIFAR10 are comparing it to the libraries listed below.
- Supporting PyTorch FSDP for optimizers ☆80 · Updated 4 months ago
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆123 · Updated last year
- Simple Transformer in JAX ☆136 · Updated 10 months ago
- A set of Python scripts that makes your experience on TPU better ☆51 · Updated 9 months ago
- ☆78 · Updated 9 months ago
- 🧱 Modula software package ☆188 · Updated last month
- ☆53 · Updated last year
- ☆102 · Updated this week
- Minimal but scalable implementation of large language models in JAX ☆34 · Updated 5 months ago
- Efficient optimizers ☆189 · Updated this week
- ☆60 · Updated 3 years ago
- ☆94 · Updated 3 months ago
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆82 · Updated last year
- seqax = sequence modeling + JAX ☆154 · Updated 3 weeks ago
- LoRA for arbitrary JAX models and functions ☆136 · Updated last year
- Experiment of using Tangent to autodiff Triton ☆78 · Updated last year
- JAX implementation of the Llama 2 model ☆218 · Updated last year
- ☆79 · Updated last year
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆189 · Updated 11 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆63 · Updated this week
- ☆51 · Updated 11 months ago
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆83 · Updated last year
- ☆27 · Updated 9 months ago
- ☆49 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆113 · Updated 4 months ago
- ☆216 · Updated 9 months ago
- Fast, modern, memory-efficient, and low-precision PyTorch optimizers ☆92 · Updated 9 months ago
- WIP ☆93 · Updated 8 months ago
- Custom Triton kernels for training Karpathy's nanoGPT ☆18 · Updated 6 months ago
- ☆20 · Updated last year