facebookresearch / schedule_free
Schedule-Free Optimization in PyTorch
☆2,256 · May 21, 2025 · Updated 8 months ago
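Schedule-free optimization (Defazio et al., arXiv:2405.15682) replaces the learning-rate schedule with an interpolation between a base iterate and a running average of it. As a rough illustration only, here is a minimal pure-Python sketch of the schedule-free SGD update on a toy 1-D quadratic; the objective, step size, momentum value, and step count are made up for the demo, and this is not the schedule_free library's implementation.

```python
# Sketch of the schedule-free SGD update (Defazio et al.), specialized
# to a 1-D quadratic. Illustrative simplification, not library code.

def grad(y):
    # Gradient of the toy objective f(x) = (x - 3)^2 (made up for the demo).
    return 2.0 * (y - 3.0)

def schedule_free_sgd(steps=20000, lr=0.1, beta=0.9):
    z = x = 0.0                          # base iterate z, averaged iterate x
    for t in range(1, steps + 1):
        y = (1 - beta) * z + beta * x    # gradient is evaluated at y
        z = z - lr * grad(y)             # plain SGD step on z, constant lr
        c = 1.0 / (t + 1)                # uniform (1/t) averaging weight
        x = (1 - c) * x + c * z          # x is the running average of z
    return x

print(schedule_free_sgd())  # approaches the minimizer at 3.0
```

Note that no learning-rate decay appears anywhere: the averaging of `z` into `x` plays the role a schedule usually does, which is the point of the method.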
Alternatives and similar repositories for schedule_free
Users interested in schedule_free are comparing it to the libraries listed below.
- A PyTorch native platform for training generative AI models ☆5,045 · Updated this week
- Efficient optimizers ☆283 · Dec 20, 2025 · Updated last month
- TensorDict is a pytorch dedicated tensor container. ☆1,003 · Feb 6, 2026 · Updated last week
- Efficient Triton Kernels for LLM Training ☆6,123 · Feb 7, 2026 · Updated last week
- For optimization algorithm research and development. ☆558 · Jan 12, 2026 · Updated last month
- Tile primitives for speedy kernels ☆3,139 · Updated this week
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆944 · Nov 16, 2025 · Updated 2 months ago
- PyTorch native post-training library ☆5,669 · Updated this week
- maximal update parametrization (µP) ☆1,673 · Jul 17, 2024 · Updated last year
- Meta Lingua: a lean, efficient, and easy-to-hack codebase to research LLMs. ☆4,754 · Jul 18, 2025 · Updated 6 months ago
- PyTorch native quantization and sparsity for training and inference ☆2,668 · Updated this week
- Muon is an optimizer for hidden layers in neural networks ☆2,290 · Jan 19, 2026 · Updated 3 weeks ago
- Helpful tools and examples for working with flex-attention ☆1,127 · Updated this week
- Minimalistic large language model 3D-parallelism training ☆2,544 · Dec 11, 2025 · Updated 2 months ago
- 🦁 Lion, new optimizer discovered by Google Brain using genetic algorithms that is purportedly better than Adam(w), in Pytorch ☆2,184 · Nov 27, 2024 · Updated last year
- A JAX research toolkit for building, editing, and visualizing neural networks. ☆1,863 · Jun 22, 2025 · Updated 7 months ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,672 · Oct 28, 2024 · Updated last year
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆595 · Aug 12, 2025 · Updated 6 months ago
- ☆250 · Dec 2, 2024 · Updated last year
- NanoGPT (124M) in 2 minutes ☆4,589 · Feb 1, 2026 · Updated last week
- Code for Adam-mini: Use Fewer Learning Rates To Gain More https://arxiv.org/abs/2406.16793 ☆452 · May 13, 2025 · Updated 9 months ago
- A PyTorch library for implementing flow matching algorithms, featuring continuous and discrete flow matching implementations. It includes… ☆4,107 · Jan 5, 2026 · Updated last month
- Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others) ☆9,395 · Jan 26, 2026 · Updated 2 weeks ago
- Mamba SSM architecture ☆17,153 · Jan 12, 2026 · Updated last month
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,379 · Updated this week
- PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily wri… ☆1,440 · Feb 3, 2026 · Updated last week
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. ☆6,183 · Aug 22, 2025 · Updated 5 months ago
- Fast and memory-efficient exact attention ☆22,231 · Updated this week
- The official implementation of "Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training" ☆981 · Jan 30, 2024 · Updated 2 years ago
- Minimal implementation of scalable rectified flow transformers, based on SD3's approach ☆632 · Jul 1, 2024 · Updated last year
- Implementation of Diffusion Transformer (DiT) in JAX ☆306 · Jun 11, 2024 · Updated last year
- Accessible large language models via k-bit quantization for PyTorch. ☆7,939 · Jan 22, 2026 · Updated 3 weeks ago
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆10,336 · Feb 5, 2026 · Updated last week
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ☆3,355 · May 19, 2025 · Updated 8 months ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,491 · Feb 6, 2026 · Updated last week
- Official repository for our work on micro-budget training of large-scale diffusion models. ☆1,548 · Jan 12, 2025 · Updated last year
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆5,800 · Updated this week
- Minimalistic 4D-parallelism distributed training framework for education purpose ☆2,076 · Aug 26, 2025 · Updated 5 months ago
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,885 · Updated this week