Official Implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate"
☆ 437 · Updated Dec 12, 2024
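The core idea behind the ADOPT paper named above is to decorrelate the second-moment estimate from the current gradient: each gradient is normalized by the *previous* step's second-moment estimate before momentum is applied, which is what allows convergence for any β2. A minimal sketch of that update rule, assuming the rule as stated in the paper; the function name, hyperparameter values, and toy problem are illustrative, and the official implementation may differ in details (e.g. it reportedly adds gradient clipping):

```python
# Illustrative sketch of the ADOPT-style update rule (not the official code).
import numpy as np

def adopt_step(theta, grad, m, v, lr=1e-2, b1=0.9, b2=0.9999, eps=1e-6):
    # m_t = b1 * m_{t-1} + (1 - b1) * g_t / max(sqrt(v_{t-1}), eps)
    # Note: the current gradient is scaled by the PREVIOUS v, not v_t.
    m = b1 * m + (1 - b1) * grad / np.maximum(np.sqrt(v), eps)
    # theta_t = theta_{t-1} - lr * m_t
    theta = theta - lr * m
    # v_t = b2 * v_{t-1} + (1 - b2) * g_t^2, updated only after it was used.
    v = b2 * v + (1 - b2) * grad ** 2
    return theta, m, v

# Toy usage: minimize f(x) = x^2 starting from x = 1.
theta, m = 1.0, 0.0
v = (2.0 * theta) ** 2  # v_0 initialized from the first gradient
for _ in range(500):
    grad = 2.0 * theta
    theta, m, v = adopt_step(theta, grad, m, v)
```

Because v lags one step behind the gradient it normalizes, the two are conditionally independent given the past, which is the mechanism the paper's any-β2 convergence proof relies on.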
Alternatives and similar repositories for adopt
Users interested in adopt are comparing it to the libraries listed below.
- ☆ 267 · Updated Dec 2, 2024
- Grams: Gradient Descent with Adaptive Momentum Scaling (ICLR 2025 Workshop) · ☆ 17 · Updated Mar 6, 2025
- Schedule-Free Optimization in PyTorch · ☆ 2,277 · Updated May 21, 2025
- ☆ 15 · Updated Mar 2, 2025
- For optimization algorithm research and development · ☆ 565 · Updated Apr 10, 2026
- The AdEMAMix Optimizer: Better, Faster, Older · ☆ 188 · Updated Sep 12, 2024
- Getting crystal-like representations with harmonic loss · ☆ 195 · Updated Apr 2, 2025
- Efficient optimizers · ☆ 321 · Updated this week
- ☆ 310 · Updated Apr 23, 2025
- PyTorch Implementation of TecNets (Task-Embedded Control Networks) · ☆ 10 · Updated Dec 8, 2022
- Erwin: A Tree-based Hierarchical Transformer for Large-scale Physical Systems [ICML'25] · ☆ 116 · Updated Oct 11, 2025
- Train VAE like a boss · ☆ 314 · Updated Oct 21, 2024
- MiSS is a novel PEFT method that features a low-rank structure but introduces a new update mechanism distinct from LoRA, achieving an exc… · ☆ 35 · Updated Mar 9, 2026
- ☆ 71 · Updated Nov 15, 2024
- 🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms that is purportedly better than Adam(W), in PyTorch · ☆ 2,185 · Updated Nov 27, 2024
- Code accompanying the paper "LaProp: A Better Way to Combine Momentum with Adaptive Gradient" · ☆ 31 · Updated Jul 30, 2020
- Official Code for MIMETIC^2 · ☆ 13 · Updated Nov 19, 2024
- ☆ 10 · Updated May 24, 2021
- [ICLR 2026] When it comes to optimizers, it's always better to be safe than sorry · ☆ 413 · Updated Sep 26, 2025
- DeMo: Decoupled Momentum Optimization · ☆ 201 · Updated Dec 2, 2024
- Muon is an optimizer for hidden layers in neural networks · ☆ 2,544 · Updated Jan 19, 2026
- Appendix code (translated from Japanese: 付録コード) · ☆ 127 · Updated May 6, 2024
- Solution of Kaggle competition: MAP - Charting Student Math Misunderstandings · ☆ 27 · Updated Oct 25, 2025
- Implementation of the proposed Adam-atan2 from Google DeepMind in PyTorch · ☆ 137 · Updated Apr 28, 2026
- Official PyTorch Implementation for the Paper "No More Adam: Learning Rate Scaling at Initialization is All You Need" · ☆ 56 · Updated Jan 27, 2025
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) · ☆ 457 · Updated May 13, 2025
- Triton Implementation of the HyperAttention Algorithm · ☆ 48 · Updated Dec 11, 2023
- Unofficial Re-implementation of "Dream to Control: Learning Behaviors by Latent Imagination" (https://arxiv.org/abs/1912.01603) with PyT… · ☆ 32 · Updated Aug 22, 2020
- Helpful tools and examples for working with flex-attention · ☆ 1,182 · Updated Apr 13, 2026
- A library for developing deep generative models in a more concise, intuitive and extendable way · ☆ 501 · Updated Sep 22, 2025
- ☆ 69 · Updated Mar 21, 2025
- Write your code as tree-like expressions, then transform it · ☆ 21 · Updated Jan 9, 2024
- Minimal Decision Transformer implementation written in JAX (Flax) · ☆ 18 · Updated Aug 8, 2022
- LinearKAN: A very fast implementation of Kolmogorov-Arnold Networks · ☆ 19 · Updated Nov 12, 2025
- Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models · ☆ 816 · Updated Jun 8, 2025
- A basic pure PyTorch implementation of flash attention · ☆ 16 · Updated Oct 28, 2024
- Lightweight, fast and robust columnar dataframe for data analytics with online update · ☆ 23 · Updated Aug 14, 2021
- ☆ 40 · Updated Oct 31, 2025
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" · ☆ 92 · Updated Oct 30, 2024