Laz4rz / mup
Minimal (truly) muP implementation, consistent with the notation of the TP4 and TP5 papers
☆14 Updated 4 months ago
Alternatives and similar repositories for mup
Users interested in mup are comparing it to the libraries listed below.
- Code for the paper "Function-Space Learning Rates" ☆23 Updated 3 months ago
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun ☆56 Updated 6 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆164 Updated 3 months ago
- Supporting PyTorch FSDP for optimizers ☆84 Updated 9 months ago
- Minimal (400 LOC) implementation of Maximum (multi-node, FSDP) GPT training ☆132 Updated last year
- WIP ☆93 Updated last year
- ☆85 Updated last year
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆193 Updated last year
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆19 Updated 2 months ago
- ☆33 Updated 8 months ago
- Collection of autoregressive model implementations ☆86 Updated 5 months ago
- ☆30 Updated 9 months ago
- Simple implementation of muP, based on the Spectral Condition for Feature Learning. The implementation is SGD-only; don't use it for Adam (see the sketch after this list) ☆85 Updated last year
- ☆89 Updated last year
- Official PyTorch implementation and models for the paper "Diffusion Beats Autoregressive in Data-Constrained Settings". We find diffusion mod… ☆95 Updated last month
- H-Net Dynamic Hierarchical Architecture ☆79 Updated 2 weeks ago
- Universal Neurons in GPT2 Language Models ☆30 Updated last year
- The official GitHub repo for "Diffusion Language Models are Super Data Learners". ☆115 Updated last month
- 📄 Small Batch Size Training for Language Models ☆62 Updated this week
- ☆34 Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆84 Updated 10 months ago
- Unofficial Implementation of Selective Attention Transformer ☆17 Updated 10 months ago
- ☆82 Updated last year
- σ-GPT: A New Approach to Autoregressive Models ☆68 Updated last year
- ☆25 Updated 8 months ago
- Focused on fast experimentation and simplicity ☆75 Updated 9 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆80 Updated 10 months ago
- ☆25 Updated 4 months ago
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆84 Updated 2 weeks ago
- ☆19 Updated 4 months ago
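For context on the SGD-only spectral-condition entry above: here is a minimal sketch of what that kind of scaling rule looks like, assuming the per-layer rules from "A Spectral Condition for Feature Learning" (init std σ_l ∝ min(1, √(fan_out/fan_in)) / √(fan_in), per-layer SGD learning rate η_l ∝ base_lr · fan_out/fan_in). This is an illustration, not code from any of the listed repositories; the helper name `spectral_sgd`, the `base_lr` value, and the toy model are hypothetical.

```python
# Illustrative sketch only (not code from any repository listed above).
# Assumed spectral-condition rules:
#   init std  sigma_l ~ min(1, sqrt(fan_out / fan_in)) / sqrt(fan_in)
#   SGD lr    eta_l   ~ base_lr * fan_out / fan_in
import math
import torch
import torch.nn as nn

def spectral_sgd(model: nn.Module, base_lr: float = 0.1) -> torch.optim.SGD:
    """Re-initialize Linear weights and build per-layer SGD parameter groups."""
    groups = []
    for module in model.modules():
        if isinstance(module, nn.Linear):
            fan_in, fan_out = module.in_features, module.out_features
            sigma = min(1.0, math.sqrt(fan_out / fan_in)) / math.sqrt(fan_in)
            nn.init.normal_(module.weight, mean=0.0, std=sigma)
            if module.bias is not None:
                nn.init.zeros_(module.bias)
            # One parameter group per layer, with a width-dependent learning rate.
            groups.append({"params": list(module.parameters()),
                           "lr": base_lr * fan_out / fan_in})
    return torch.optim.SGD(groups, lr=base_lr)

# Hypothetical usage: the hidden width can be scaled while base_lr stays fixed.
model = nn.Sequential(nn.Linear(32, 1024), nn.ReLU(), nn.Linear(1024, 10))
optimizer = spectral_sgd(model, base_lr=0.1)
```

The intent of the per-layer learning rate is to keep the size of each layer's feature updates roughly width-independent under plain SGD, which is why such a rule does not transfer directly to Adam.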