VatsaDev / NanoPoor
NanoGPT-speedrunning for the poor T4 enjoyers
☆72 · Updated 5 months ago
Alternatives and similar repositories for NanoPoor
Users interested in NanoPoor are comparing it to the libraries listed below:
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆105 · Updated 7 months ago
- H-Net Dynamic Hierarchical Architecture ☆80 · Updated 3 weeks ago
- An introduction to LLM Sampling ☆79 · Updated 9 months ago
- Collection of autoregressive model implementations ☆86 · Updated 5 months ago
- SmolLM with Entropix sampler in PyTorch ☆150 · Updated 11 months ago
- Simple repository for training small reasoning models ☆40 · Updated 8 months ago
- DeMo: Decoupled Momentum Optimization ☆193 · Updated 10 months ago
- look how they massacred my boy ☆63 · Updated 11 months ago
- ☆135 · Updated last year
- A collection of lightweight interpretability scripts to understand how LLMs think ☆56 · Updated last week
- ☆64 · Updated 6 months ago
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆19 · Updated 2 months ago
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆84 · Updated 3 weeks ago
- RWKV-7: Surpassing GPT ☆96 · Updated 10 months ago
- Simple GRPO scripts and configurations. ☆59 · Updated 8 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 8 months ago
- Working implementation of DeepSeek MLA ☆44 · Updated 8 months ago
- ☆28 · Updated last year
- Compiling useful links, papers, benchmarks, ideas, etc. ☆45 · Updated 6 months ago
- ☆49 · Updated last year
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated 2 weeks ago
- Supporting PyTorch FSDP for optimizers ☆84 · Updated 9 months ago
- ☆102 · Updated 2 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆99 · Updated 2 months ago
- RL from zero pretrain, can it be done? Yes. ☆274 · Updated last week
- Storing long contexts in tiny caches with self-study ☆194 · Updated 3 weeks ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆109 · Updated 5 months ago
- ☆89 · Updated last year
- Research implementation of Native Sparse Attention (2502.11089) ☆61 · Updated 7 months ago
- ☆46 · Updated 6 months ago