maximal update parametrization (µP)
☆1,690 · Updated Jul 17, 2024 (last year)
Alternatives and similar repositories for mup
Users interested in mup are comparing it to the libraries listed below.
- Some common Huggingface transformers in maximal update parametrization (µP) ☆87 · Updated Mar 14, 2022 (4 years ago)
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆190 · Updated Jan 19, 2026 (2 months ago)
- WIP ☆94 · Updated Aug 13, 2024 (last year)
- PyTorch extensions for high performance and large scale training. ☆3,403 · Updated Apr 26, 2025 (10 months ago)
- Schedule-Free Optimization in PyTorch ☆2,265 · Updated May 21, 2025 (9 months ago)
- Helpful tools and examples for working with flex-attention ☆1,157 · Updated Feb 8, 2026 (last month)
- A PyTorch native platform for training generative AI models ☆5,162 · Updated this week
- Fast and memory-efficient exact attention ☆22,832 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆8,052 · Updated this week
- Flexible and powerful tensor operations for readable and reliable code (for PyTorch, JAX, TF and others) ☆9,430 · Updated Feb 20, 2026 (last month)
- Minimalistic large language model 3D-parallelism training ☆2,617 · Updated Feb 19, 2026 (last month)
- A library for unit scaling in PyTorch ☆133 · Updated Jul 11, 2025 (8 months ago)
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,630 · Updated this week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆10,373 · Updated this week
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,739 · Updated Jan 8, 2024 (2 years ago)
- The official implementation of “Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training” ☆986 · Updated Jan 30, 2024 (2 years ago)
- A high-performance Python-based I/O system for large (and small) deep learning problems, with strong support for PyTorch. ☆3,022 · Updated Feb 9, 2026 (last month)
- functorch is JAX-like composable function transforms for PyTorch. ☆1,437 · Updated Aug 21, 2025 (7 months ago)
- Ongoing research training transformer models at scale ☆15,744 · Updated this week
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆62 · Updated May 11, 2021 (4 years ago)
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,563 · Updated this week
- Simple implementation of muP, based on the Spectral Condition for Feature Learning. The implementation is SGD-only; don't use it for Adam. ☆86 · Updated Jul 28, 2024 (last year)
- ☆2,952 · Updated Mar 9, 2026 (last week)
- Foundation Architecture for (M)LLMs ☆3,135 · Updated Apr 11, 2024 (last year)
- Structured state space sequence models ☆2,869 · Updated Jul 17, 2024 (last year)
- Development repository for the Triton language and compiler ☆18,656 · Updated Mar 14, 2026 (last week)
- Fast and Easy Infinite Neural Networks in Python ☆2,377 · Updated Mar 1, 2024 (2 years ago)
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,956 · Updated this week
- ☆306 · Updated Jul 15, 2024 (last year)
- Scaling Data-Constrained Language Models ☆342 · Updated Jun 28, 2025 (8 months ago)
- Repo for external large-scale work ☆6,542 · Updated Apr 27, 2024 (last year)
- Tile primitives for speedy kernels ☆3,232 · Updated this week
- FFCV: Fast Forward Computer Vision (and other ML workloads!) ☆2,986 · Updated Jun 16, 2024 (last year)
- Tutel MoE: Optimized Mixture-of-Experts Library, supporting GptOss/DeepSeek/Kimi-K2/Qwen3 with FP8/NVFP4/MXFP4 ☆976 · Updated Mar 6, 2026 (2 weeks ago)
- An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries ☆7,400 · Updated Feb 3, 2026 (last month)
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆41,807 · Updated Mar 13, 2026 (last week)
- ☆1,262 · Updated Jul 30, 2024 (last year)
- Optax is a gradient processing and optimization library for JAX. ☆2,212 · Updated this week
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ☆1,010 · Updated Jul 29, 2024 (last year)