thepowerfuldeez / sample_efficient_gpt
Training framework for exploring the frontier of sample efficiency in small language models
☆97 · Updated 2 weeks ago
Alternatives and similar repositories for sample_efficient_gpt
Users interested in sample_efficient_gpt are comparing it to the libraries listed below.
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆110 · Updated 11 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆73 · Updated 9 months ago
- A collection of lightweight interpretability scripts to understand how LLMs think ☆89 · Updated 2 weeks ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆186 · Updated 3 weeks ago
- ☆92 · Updated last year
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- rl from zero pretrain, can it be done? yes. ☆286 · Updated 4 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆307 · Updated 2 months ago
- MoE training for Me and You and maybe other people ☆335 · Updated last month
- EvaByte: Efficient Byte-level Language Models at Scale ☆115 · Updated 9 months ago
- Collection of autoregressive model implementations ☆85 · Updated 3 weeks ago
- 📄Small Batch Size Training for Language Models ☆80 · Updated 4 months ago
- Simple GRPO scripts and configurations. ☆59 · Updated last year
- Simple repository for training small reasoning models ☆49 · Updated last year
- Scaling is a distributed training library and installable dependency designed to scale up neural networks, with a dedicated module for tr… ☆66 · Updated 2 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆175 · Updated last year
- Exploring Applications of GRPO ☆251 · Updated 5 months ago
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆150 · Updated 4 months ago
- Compiling useful links, papers, benchmarks, ideas, etc. ☆46 · Updated 10 months ago
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆86 · Updated 4 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆137 · Updated last year
- Normalized Transformer (nGPT) ☆198 · Updated last year
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆128 · Updated 4 months ago
- ☆91 · Updated last year
- ☆59 · Updated 2 months ago
- ☆27 · Updated last year
- ☆48 · Updated last year
- Storing long contexts in tiny caches with self-study ☆236 · Updated 2 months ago
- DeMo: Decoupled Momentum Optimization ☆198 · Updated last year
- Supporting code for the blog post on modular manifolds. ☆115 · Updated 4 months ago