Laz4rz / mup
Minimal (truly) muP implementation, consistent with the TP4 and TP5 papers' notation
☆14 · Updated 4 months ago
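For context on what the mup repository implements, below is a rough sketch of the general muP (Maximal Update Parametrization) idea: initialization variance and per-layer learning rates are scaled with the model width so hyperparameters transfer from a small base model to a wider one. This is not code from the repo; the widths, `base_lr`, and the exact per-layer factors are illustrative assumptions, and the precise multipliers depend on the optimizer and on the notation used in the TP4/TP5 papers.

```python
# Illustrative sketch of muP-style width scaling (not the repo's code).
# Assumptions: an Adam-like optimizer, a reference ("base") width of 256,
# and a target width of 1024; exact per-layer rules vary across the papers.
import torch
import torch.nn as nn

base_width, width = 256, 1024
base_lr = 1e-3                     # tuned at base_width, then transferred
mult = width / base_width          # width multiplier relative to the base model

d_in, d_out = 32, 10
model = nn.ModuleDict({
    "embed":   nn.Linear(d_in, width),    # input layer
    "hidden":  nn.Linear(width, width),   # hidden layer
    "readout": nn.Linear(width, d_out),   # output (readout) layer
})

# Init: weights get variance ~ 1/fan_in (the readout is often shrunk further
# or zero-initialized; conventions differ between papers).
for m in model.values():
    nn.init.normal_(m.weight, std=m.in_features ** -0.5)
    nn.init.zeros_(m.bias)

# Learning rates: the input layer keeps the base LR; layers whose fan-in grows
# with width get their LR divided by the width multiplier.
param_groups = [
    {"params": model["embed"].parameters(),   "lr": base_lr},
    {"params": model["hidden"].parameters(),  "lr": base_lr / mult},
    {"params": model["readout"].parameters(), "lr": base_lr / mult},
]
optimizer = torch.optim.Adam(param_groups)
```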
Alternatives and similar repositories for mup
Users interested in mup are comparing it to the libraries listed below.
- Code for the paper "Function-Space Learning Rates" ☆23 · Updated 4 months ago
- Minimal (400 LOC) implementation of Maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun ☆56 · Updated 7 months ago
- WIP ☆93 · Updated last year
- Supporting PyTorch FSDP for optimizers ☆83 · Updated 10 months ago
- ☆34 · Updated last year
- The official GitHub repo for "Diffusion Language Models are Super Data Learners". ☆134 · Updated 2 weeks ago
- H-Net Dynamic Hierarchical Architecture ☆80 · Updated last month
- Official PyTorch implementation and models for the paper "Diffusion Beats Autoregressive in Data-Constrained Settings". We find diffusion mod… ☆101 · Updated last month
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆166 · Updated 3 months ago
- ☆24 · Updated 4 months ago
- Focused on fast experimentation and simplicity ☆75 · Updated 9 months ago
- ☆91 · Updated last year
- 📄 Small Batch Size Training for Language Models ☆63 · Updated 2 weeks ago
- ☆33 · Updated 9 months ago
- [ICML 2025] Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction ☆72 · Updated 4 months ago
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆84 · Updated 11 months ago
- ☆19 · Updated 5 months ago
- ☆18 · Updated 11 months ago
- σ-GPT: A New Approach to Autoregressive Models ☆68 · Updated last year
- Universal Neurons in GPT2 Language Models ☆30 · Updated last year
- A basic pure PyTorch implementation of flash attention ☆16 · Updated 11 months ago
- ☆85 · Updated last year
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 5 months ago
- ☆23 · Updated last year
- ☆53 · Updated last year
- Code accompanying the paper "Generalized Interpolating Discrete Diffusion" ☆103 · Updated 4 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆192 · Updated last year
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆85 · Updated last month
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆147 · Updated 2 weeks ago