alexjc / nanogpt-speedrun
NanoGPT (124M) in 5 minutes
☆9 · Updated last month
Alternatives and similar repositories for nanogpt-speedrun:
Users interested in nanogpt-speedrun are comparing it to the repositories listed below.
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. ☆30 · Updated 3 months ago
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, adapted to single-machine microbatches, in PyTorch (see the sketch after this list) ☆23 · Updated 2 months ago
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated last month
- Collection of autoregressive model implementations ☆83 · Updated last month
- RWKV-7: Surpassing GPT ☆82 · Updated 4 months ago
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆28 · Updated 2 weeks ago
- ☆47 · Updated this week
- Demonstration that fine-tuning a RoPE model on sequences longer than the pre-training length extends the model's context limit (see the position-interpolation sketch after this list) ☆63 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere); a sketch of its core normalization follows this list ☆91 · Updated 2 weeks ago
- microjax: a JAX-like function transformation engine, but micro ☆30 · Updated 5 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆27 · Updated last month
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna (see the GRPO sketch after this list) ☆39 · Updated last month
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite ☆33 · Updated last year
- Research implementation of Native Sparse Attention (arXiv:2502.11089) ☆54 · Updated last month
- ☆19 · Updated this week
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆17 · Updated last week
- ☆63 · Updated 6 months ago
- Explorations into adversarial losses on top of autoregressive loss for language modeling ☆35 · Updated last month
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆63 · Updated 11 months ago
- Using FlexAttention to compute attention with different masking patterns (see the FlexAttention example after this list) ☆42 · Updated 6 months ago
- Zeta implementation of a reusable, plug-and-play feedforward from the paper "Exponentially Faster Language Modeling" ☆15 · Updated 4 months ago
- Implementation of the Mamba SSM with hf_integration. ☆56 · Updated 6 months ago
- ☆19 · Updated 3 weeks ago
- A place to store reusable transformer components of my own creation or found on the interwebs ☆48 · Updated this week
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" (see the Grokfast-EMA sketch after this list) ☆98 · Updated 3 months ago
- Simple GRPO scripts and configurations (see the GRPO sketch after this list) ☆58 · Updated last month
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆36 · Updated last year
- ☆21 · Updated 4 months ago
- Focused on fast experimentation and simplicity ☆69 · Updated 3 months ago
- Triton Implementation of HyperAttention Algorithm ☆47 · Updated last year
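
The Gradient Agreement Filtering entry above combines microbatch gradients only where they agree. The paper filters whole microbatch gradients by cosine distance; the sketch below shows a simpler coordinate-wise sign-agreement variant of the same idea in PyTorch (the function name and threshold are illustrative, not the repo's API):

```python
import torch

def sign_agreement_filter(grads: list[torch.Tensor],
                          threshold: float = 1.0) -> torch.Tensor:
    """Average microbatch gradients for one parameter, zeroing out
    coordinates where the microbatches' signs disagree.

    threshold: fraction of microbatches that must share the majority
    sign for a coordinate to survive (1.0 = unanimous agreement).
    """
    stacked = torch.stack(grads)          # (num_microbatches, *param_shape)
    agreement = torch.sign(stacked).mean(dim=0).abs()  # in [0, 1]
    mask = (agreement >= threshold).to(stacked.dtype)
    return stacked.mean(dim=0) * mask

g1 = torch.tensor([0.5, -0.2, 0.1])
g2 = torch.tensor([0.4,  0.3, 0.2])
print(sign_agreement_filter([g1, g2]))   # middle coordinate is zeroed
```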
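
The RoPE context-extension entry rests on the fact that RoPE encodes positions as rotation angles, so fine-tuning on longer sequences can adapt the limit; position interpolation squeezes new positions back into the angle range seen during pre-training. A minimal sketch of the angle computation (names are illustrative, not the repo's code):

```python
import torch

def rope_angles(seq_len: int, head_dim: int, base: float = 10000.0,
                scale: float = 1.0) -> torch.Tensor:
    """RoPE rotation angles. scale < 1 implements position interpolation:
    positions are compressed so a longer sequence reuses the angle range
    the model saw during pre-training."""
    inv_freq = 1.0 / base ** (torch.arange(0, head_dim, 2).float() / head_dim)
    positions = torch.arange(seq_len).float() * scale
    return torch.outer(positions, inv_freq)   # (seq_len, head_dim // 2)

# Pre-trained at 2048 tokens, fine-tuning at 8192: compress positions 4x.
angles = rope_angles(seq_len=8192, head_dim=64, scale=2048 / 8192)
cos, sin = angles.cos(), angles.sin()        # fed into the rotary embedding
```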
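
The nGPT reproduction is organized around keeping representations and weights on the unit hypersphere, so every matmul becomes a cosine similarity. A rough sketch of the weight constraint, assuming row-wise L2 renormalization after each optimizer step (my reading of the paper, not the repo's code):

```python
import torch

@torch.no_grad()
def renormalize_(linear: torch.nn.Linear) -> None:
    """Project each weight row back onto the unit hypersphere."""
    w = linear.weight
    w.div_(w.norm(dim=-1, keepdim=True).clamp_min(1e-8))

layer = torch.nn.Linear(512, 512, bias=False)
# ... loss.backward(); optimizer.step() ...
renormalize_(layer)
print(layer.weight.norm(dim=-1)[:3])  # ~1.0 for every row
```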
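
The FlexAttention entry is about expressing masking patterns as small Python predicates instead of materialized mask tensors. A minimal example using PyTorch's torch.nn.attention.flex_attention (PyTorch >= 2.5, CUDA device assumed), building a sliding-window causal mask:

```python
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

WINDOW = 256

def sliding_window_causal(b, h, q_idx, kv_idx):
    # Attend only to past positions within a fixed window.
    return (q_idx >= kv_idx) & (q_idx - kv_idx <= WINDOW)

B, H, S, D = 1, 8, 1024, 64
device = "cuda"
# B=None / H=None broadcasts the mask over batch and heads.
block_mask = create_block_mask(sliding_window_causal, B=None, H=None,
                               Q_LEN=S, KV_LEN=S, device=device)
q, k, v = (torch.randn(B, H, S, D, device=device) for _ in range(3))
out = flex_attention(q, k, v, block_mask=block_mask)  # torch.compile for speed
```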
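
The Grokfast entry amplifies the slow, low-frequency component of the gradient signal to accelerate grokking. The EMA variant from the paper is only a few lines; a sketch to slot between loss.backward() and optimizer.step():

```python
import torch

def gradfilter_ema(model: torch.nn.Module, ema: dict,
                   alpha: float = 0.98, lamb: float = 2.0) -> dict:
    """Grokfast-EMA: track an exponential moving average of each
    parameter's gradient and add the amplified slow component back."""
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        ema[name] = (alpha * ema[name] + (1 - alpha) * p.grad
                     if name in ema else p.grad.clone())
        p.grad = p.grad + lamb * ema[name]
    return ema

# Training loop:
#   loss.backward()
#   ema = gradfilter_ema(model, ema)   # ema starts as {}
#   optimizer.step()
```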
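
Both GRPO entries share the same core trick: sample a group of completions per prompt and replace a learned value baseline with group-relative, reward-normalized advantages. A minimal sketch of that advantage computation (the surrounding RL loop is omitted):

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """rewards: (num_prompts, group_size), one scalar reward per sampled
    completion. Returns advantages normalized within each group."""
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

rewards = torch.tensor([[1.0, 0.0, 0.5, 0.5]])  # 4 completions for 1 prompt
print(grpo_advantages(rewards))                  # zero mean within the group
```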