iShohei220 / adopt
Official Implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate"
☆426 · Updated 6 months ago
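ADOPT is intended as a drop-in replacement for Adam. Below is a minimal sketch of how it might be wired into a PyTorch training loop; the `from adopt import ADOPT` import path and the Adam-style constructor arguments are assumptions based on the repo's described interface, so check the repository README for the exact usage.

```python
# Minimal sketch: swapping Adam for ADOPT on a toy regression task.
# Assumption: the package exposes an `ADOPT` class with an Adam-like constructor.
import torch
from adopt import ADOPT  # assumed import path; see the repo README

model = torch.nn.Linear(10, 1)
optimizer = ADOPT(model.parameters(), lr=1e-3)  # used like torch.optim.Adam

x, y = torch.randn(32, 10), torch.randn(32, 1)
for _ in range(100):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
```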
Alternatives and similar repositories for adopt
Users interested in adopt are comparing it to the libraries listed below
- The AdEMAMix Optimizer: Better, Faster, Older. ☆183 · Updated 9 months ago
- For optimization algorithm research and development. ☆521 · Updated this week
- Getting crystal-like representations with harmonic loss ☆190 · Updated 2 months ago
- Efficient optimizers ☆220 · Updated this week
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ☆251 · Updated 3 months ago
- ☆150 · Updated 10 months ago
- Universal Tensor Operations in Einstein-Inspired Notation for Python. ☆381 · Updated 2 months ago
- TensorHue is a Python library that allows you to visualize tensors right in your console, making understanding and debugging tensor conte… ☆117 · Updated 4 months ago
- Library for Jacobian descent with PyTorch. It enables the optimization of neural networks with multiple losses (e.g. multi-task learning)… ☆246 · Updated last week
- When it comes to optimizers, it's always better to be safe than sorry ☆241 · Updated 2 months ago
- Annotated version of the Mamba paper ☆485 · Updated last year
- Implementation of Diffusion Transformer (DiT) in JAX ☆278 · Updated last year
- Scalable and Performant Data Loading ☆278 · Updated this week
- ☆190 · Updated 6 months ago
- An implementation of PSGD Kron second-order optimizer for PyTorch ☆91 · Updated 2 months ago
- Simple and readable code for training and sampling from diffusion models ☆509 · Updated last week
- Kolmogorov-Arnold Networks (KAN) using Chebyshev polynomials instead of B-splines. ☆380 · Updated last year
- Just some miscellaneous utility functions / decorators / modules related to Pytorch and Accelerate to help speed up implementation of new… ☆122 · Updated 10 months ago
- Official repository for the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆555 · Updated 11 months ago
- TensorDict is a PyTorch-dedicated tensor container. ☆929 · Updated this week
- A practical implementation of GradNorm, Gradient Normalization for Adaptive Loss Balancing, in Pytorch ☆97 · Updated last year
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ☆285 · Updated 3 weeks ago
- Schedule-Free Optimization in PyTorch ☆2,180 · Updated last month
- Annotated Flow Matching paper ☆188 · Updated 9 months ago
- Access to free Kaggle compute power from your command line ☆29 · Updated last year
- The boundary of neural network trainability is fractal ☆208 · Updated last year
- Code and weights for the paper "Cluster and Predict Latent Patches for Improved Masked Image Modeling" ☆110 · Updated 2 months ago
- A simple implementation of Bayesian Flow Networks (BFN) ☆240 · Updated last year
- ☆303 · Updated last year
- Interactively inspect module inputs, outputs, parameters, and gradients. ☆341 · Updated last month