facebookresearch / schedule_free
Schedule-Free Optimization in PyTorch
☆2,098 · Updated 2 months ago
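As context for the comparisons below, here is a minimal usage sketch of the schedule-free optimizer, following the pattern documented in the repository: the `schedulefree` package exposes `AdamWScheduleFree`, which must be switched between train and eval modes alongside the model. The toy model, data, and learning rate are arbitrary placeholders for illustration.

```python
import torch
import schedulefree

# Hypothetical toy model and data, for illustration only.
model = torch.nn.Linear(10, 1)
optimizer = schedulefree.AdamWScheduleFree(model.parameters(), lr=2.5e-3)

# Schedule-free methods maintain an interpolation between parameter
# sequences, so the optimizer must be told which mode it is in.
model.train()
optimizer.train()
for _ in range(100):
    x, y = torch.randn(32, 10), torch.randn(32, 1)
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()

# Switch to the averaged parameters before any evaluation.
model.eval()
optimizer.eval()
```

The `optimizer.train()` / `optimizer.eval()` calls are the main API difference from a standard PyTorch optimizer: evaluation should happen at the averaged iterate rather than the raw training iterate, which is how the method avoids needing a learning-rate schedule.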
Alternatives and similar repositories for schedule_free:
Users interested in schedule_free are comparing it to the libraries listed below.
- A PyTorch native library for large model training ☆3,326 · Updated this week
- TensorDict is a PyTorch-dedicated tensor container. ☆879 · Updated this week
- A JAX research toolkit for building, editing, and visualizing neural networks. ☆1,731 · Updated 2 months ago
- PyTorch native quantization and sparsity for training and inference ☆1,848 · Updated this week
- Make PyTorch models up to 40% faster! Thunder is a source-to-source compiler for PyTorch. It enables using different hardware executors a… ☆1,283 · Updated this week
- Tensors, for human consumption ☆1,183 · Updated 3 months ago
- UNet diffusion model in pure CUDA ☆599 · Updated 7 months ago
- NanoGPT (124M) in 3 minutes ☆2,294 · Updated this week
- Code for BLT research paper ☆1,400 · Updated this week
- Official implementation of "Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling" ☆841 · Updated this week
- Tile primitives for speedy kernels ☆2,042 · Updated this week
- For optimization algorithm research and development. ☆491 · Updated this week
- A PyTorch library for implementing flow matching algorithms, featuring continuous and discrete flow matching implementations. It includes… ☆1,986 · Updated last month
- Puzzles for learning Triton ☆1,403 · Updated 3 months ago
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆724 · Updated this week
- Helpful tools and examples for working with flex-attention ☆635 · Updated this week
- 🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms that is purportedly better than AdamW, in PyTorch ☆2,094 · Updated 2 months ago
- Official repository for our work on micro-budget training of large-scale diffusion models. ☆1,246 · Updated last month
- 🚀 Efficient implementations of state-of-the-art linear attention models in Torch and Triton ☆1,912 · Updated this week
- Train to 94% on CIFAR-10 in <6.3 seconds on a single A100, or ~95.79% in ~110 seconds (or less!) ☆1,251 · Updated 2 months ago
- Official implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate" ☆417 · Updated 2 months ago
- Structured state space sequence models ☆2,553 · Updated 7 months ago
- 4M: Massively Multimodal Masked Modeling ☆1,685 · Updated this week
- Cramming the training of a (BERT-type) language model into limited compute. ☆1,319 · Updated 8 months ago
- Transform datasets at scale. Optimize datasets for fast AI model training. ☆413 · Updated this week
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT, and more. ☆2,583 · Updated last week
- nanoGPT-style version of Llama 3.1 ☆1,316 · Updated 6 months ago
- What would you do with 1000 H100s... ☆1,001 · Updated last year
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch. ☆2,717 · Updated 11 months ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,498 · Updated 3 months ago