VatsaDev / NanoPoor
NanoGPT-speedrunning for the poor T4 enjoyers
☆73 · Updated 8 months ago
Alternatives and similar repositories for NanoPoor
Users interested in NanoPoor are comparing it to the libraries listed below
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆109 · Updated 10 months ago
- Collection of autoregressive model implementations ☆85 · Updated this week
- H-Net Dynamic Hierarchical Architecture ☆80 · Updated 4 months ago
- MoE training for Me and You and maybe other people ☆319 · Updated last week
- A collection of lightweight interpretability scripts to understand how LLMs think ☆88 · Updated 2 weeks ago
- DeMo: Decoupled Momentum Optimization ☆198 · Updated last year
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆18 · Updated 5 months ago
- ☆27 · Updated last year
- working implementation of DeepSeek MLA ☆45 · Updated last year
- smolLM with Entropix sampler on pytorch ☆149 · Updated last year
- ☆50 · Updated last year
- ☆137 · Updated last year
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated 3 months ago
- supporting pytorch FSDP for optimizers ☆84 · Updated last year
- Simple GRPO scripts and configurations. ☆59 · Updated 11 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆59 · Updated 2 months ago
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆86 · Updated 4 months ago
- RWKV-7: Surpassing GPT ☆103 · Updated last year
- An introduction to LLM Sampling ☆79 · Updated last year
- rl from zero pretrain, can it be done? yes. ☆286 · Updated 3 months ago
- look how they massacred my boy ☆63 · Updated last year
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆126 · Updated 3 months ago
- ☆108 · Updated 5 months ago
- ☆65 · Updated 9 months ago
- Storing long contexts in tiny caches with self-study ☆229 · Updated last month
- Scaling is a distributed training library and installable dependency designed to scale up neural networks, with a dedicated module for tr… ☆66 · Updated last month
- LLM training in simple, raw C/CUDA ☆15 · Updated last year
- 📄 Small Batch Size Training for Language Models ☆79 · Updated 3 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆306 · Updated last month
- PTX-Tutorial Written Purely By AIs (OpenAI's Deep Research and Claude 3.7) ☆66 · Updated 9 months ago