maximal update parametrization (µP)
☆1,686 · updated Jul 17, 2024
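For context on what the headline library does: under µP, hyperparameters tuned on a narrow model transfer to a wider one because per-parameter scales are adjusted with width. For Adam, the core rule is that hidden "matrix-like" weights take a learning rate scaled by base_width/width, while "vector-like" parameters (biases, embeddings, LayerNorm gains) keep the base rate. A minimal sketch of that rule (not the `mup` package's actual API; the function name and the `matrix_like` classification flag are illustrative):

```python
def mup_adam_lr(base_lr: float, base_width: int, width: int,
                matrix_like: bool) -> float:
    """Per-parameter Adam learning rate under µP (simplified rule).

    When a model is widened from base_width to width, hidden
    "matrix-like" weights (both fan-in and fan-out grow with width)
    take an Adam learning rate scaled by base_width / width;
    "vector-like" parameters keep the base learning rate.
    """
    if matrix_like:
        return base_lr * base_width / width
    return base_lr

# Tune at width 256, transfer to width 1024:
hidden_lr = mup_adam_lr(1e-3, base_width=256, width=1024, matrix_like=True)   # 2.5e-4
bias_lr = mup_adam_lr(1e-3, base_width=256, width=1024, matrix_like=False)    # 1e-3
```

In practice the `mup` package automates this bookkeeping (e.g. via `set_base_shapes` and its `Mu*` optimizer wrappers) rather than requiring manual per-layer scaling.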
Alternatives and similar repositories for mup
Users interested in mup are comparing it to the libraries listed below.
- some common Huggingface transformers in maximal update parametrization (µP) (☆87, updated Mar 14, 2022)
- The simplest, fastest repository for training/finetuning medium-sized GPTs. (☆187, updated Jan 19, 2026)
- PyTorch extensions for high performance and large scale training. (☆3,400, updated Apr 26, 2025)
- Schedule-Free Optimization in PyTorch (☆2,256, updated May 21, 2025)
- WIP (☆94, updated Aug 13, 2024)
- Helpful tools and examples for working with flex-attention (☆1,136, updated Feb 8, 2026)
- A PyTorch native platform for training generative AI models (☆5,098, updated this week)
- Minimalistic large language model 3D-parallelism training (☆2,569, updated Feb 19, 2026)
- Accessible large language models via k-bit quantization for PyTorch. (☆7,997, updated this week)
- Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others) (☆9,401, updated Feb 20, 2026)
- Fast and memory-efficient exact attention (☆22,361, updated this week)
- 🚀 Efficient implementations of state-of-the-art linear attention models (☆4,428, updated this week)
- Structured state space sequence models (☆2,854, updated Jul 17, 2024)
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) (☆4,741, updated Jan 8, 2024)
- Hackable and optimized Transformers building blocks, supporting a composable construction. (☆10,353, updated Feb 20, 2026)
- ☆2,946, updated Jan 15, 2026
- Foundation Architecture for (M)LLMs (☆3,135, updated Apr 11, 2024)
- A high-performance Python-based I/O system for large (and small) deep learning problems, with strong support for PyTorch. (☆2,997, updated Feb 9, 2026)
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… (☆9,513, updated this week)
- Ongoing research training transformer models at scale (☆15,242, updated Feb 21, 2026)
- A library for unit scaling in PyTorch (☆133, updated Jul 11, 2025)
- functorch is JAX-like composable function transforms for PyTorch. (☆1,436, updated Aug 21, 2025)
- Repo for external large-scale work (☆6,543, updated Apr 27, 2024)
- The official implementation of “Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training” (☆981, updated Jan 30, 2024)
- Development repository for the Triton language and compiler (☆18,460, updated Feb 22, 2026)
- An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries (☆7,395, updated Feb 3, 2026)
- Scaling Data-Constrained Language Models (☆342, updated Jun 28, 2025)
- FFCV: Fast Forward Computer Vision (and other ML workloads!) (☆2,986, updated Jun 16, 2024)
- Fast and Easy Infinite Neural Networks in Python (☆2,375, updated Mar 1, 2024)
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. (☆2,903, updated this week)
- Tile primitives for speedy kernels (☆3,183, updated this week)
- Tutel MoE: Optimized Mixture-of-Experts Library, Support GptOss/DeepSeek/Kimi-K2/Qwen3 using FP8/NVFP4/MXFP4 (☆965, updated Dec 21, 2025)
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax (☆693, updated Jan 26, 2026)
- Optax is a gradient processing and optimization library for JAX. (☆2,193, updated this week)
- Implementation of https://srush.github.io/annotated-s4 (☆512, updated Jun 20, 2025)
- higher is a pytorch library allowing users to obtain higher order gradients over losses spanning training loops rather than individual tr… (☆1,627, updated Mar 25, 2022)
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. (☆41,648, updated this week)
- Efficient Triton Kernels for LLM Training (☆6,162, updated this week)
- ☆292, updated Jul 15, 2024