kach / gradient-descent-the-ultimate-optimizer
Code for our NeurIPS 2022 paper, "Gradient Descent: The Ultimate Optimizer"
☆369 · Updated 2 years ago
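The idea behind the repository's name, hypergradient descent (tuning an optimizer's own hyperparameters by differentiating through its update rule), can be sketched on a toy quadratic. This is an illustrative sketch only, not the repository's actual API; the function names, objective, and step sizes below are assumptions:

```python
def grad(w):
    # Gradient of the toy objective f(w) = (w - 3)**2.
    return 2.0 * (w - 3.0)

def hypergradient_sgd(w=0.0, alpha=0.01, beta=1e-4, steps=200):
    """SGD on w whose learning rate alpha is itself updated by
    gradient descent, using the classic hypergradient identity
    d loss_t / d alpha = -g_t * g_{t-1}."""
    g_prev = 0.0
    for _ in range(steps):
        g = grad(w)
        # Ascend alpha when consecutive gradients agree in sign,
        # descend when they disagree (oscillation).
        alpha += beta * g * g_prev
        w -= alpha * g          # ordinary SGD step with the adapted rate
        g_prev = g
    return w, alpha

w, alpha = hypergradient_sgd()
# w converges toward the minimum at 3.0, and alpha grows above
# its initial value of 0.01 while the gradients stay aligned.
```

The paper generalizes this pattern by computing the hypergradient with automatic differentiation rather than a closed-form identity, which allows stacking hyperoptimizers on top of each other.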
Alternatives and similar repositories for gradient-descent-the-ultimate-optimizer
Users interested in gradient-descent-the-ultimate-optimizer are comparing it to the libraries listed below.
- ☆780 · Updated last month
- Cockpit: A Practical Debugging Tool for Training Deep Neural Networks ☆480 · Updated 3 years ago
- Named tensors with first-class dimensions for PyTorch ☆332 · Updated 2 years ago
- A library to inspect and extract intermediate layers of PyTorch models ☆473 · Updated 3 years ago
- This library would form a permanent home for reusable components for deep probabilistic programming. The library would form and harness a… ☆306 · Updated 3 weeks ago
- Laplace approximations for Deep Learning ☆514 · Updated 2 months ago
- BackPACK: a backpropagation package built on top of PyTorch that efficiently computes quantities other than the gradient ☆588 · Updated 6 months ago
- Load TensorBoard event logs as pandas DataFrames for scientific plotting; supports both PyTorch and TensorFlow ☆199 · Updated 11 months ago
- Unofficial JAX implementations of deep learning research papers ☆156 · Updated 3 years ago
- Code release for "Git Re-Basin: Merging Models modulo Permutation Symmetries" ☆482 · Updated 2 years ago
- Constrained optimization toolkit for PyTorch ☆687 · Updated 3 years ago
- Implementation of https://srush.github.io/annotated-s4 ☆498 · Updated 3 weeks ago
- Tensors, for human consumption ☆1,268 · Updated 3 weeks ago
- Helps you write algorithms in PyTorch that adapt to the available (CUDA) memory ☆438 · Updated 10 months ago
- Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions ☆258 · Updated last year
- TorchOpt: an efficient library for differentiable optimization built upon PyTorch ☆605 · Updated last week
- Compositional Linear Algebra ☆478 · Updated last month
- D-Adaptation for SGD, Adam, and AdaGrad ☆523 · Updated 5 months ago
- Library for Jacobian descent with PyTorch; enables the optimization of neural networks with multiple losses (e.g., multi-task learning)… ☆251 · Updated this week
- Betty: an automatic differentiation library for generalized meta-learning and multilevel optimization ☆338 · Updated last year
- ASDL: Automatic Second-order Differentiation Library for PyTorch ☆188 · Updated 7 months ago
- functorch: JAX-like composable function transforms for PyTorch ☆1,432 · Updated last week
- Implementation of the Adan (ADAptive Nesterov momentum) optimizer in PyTorch ☆252 · Updated 2 years ago
- Pretrained deep learning models for JAX/Flax: StyleGAN2, GPT-2, VGG, ResNet, etc. ☆254 · Updated 3 months ago
- MLCommons Algorithmic Efficiency: a benchmark and competition measuring neural network training speedups due to algorithmic improvement… ☆388 · Updated this week
- Optimal transport tools implemented with the JAX framework, to solve large-scale matching problems of any flavor ☆614 · Updated last week
- TensorDict: a PyTorch-dedicated tensor container ☆937 · Updated last week
- ADAHESSIAN: An Adaptive Second-Order Optimizer for Machine Learning ☆277 · Updated 2 years ago
- Normalizing flows in PyTorch ☆396 · Updated last month
- Use JAX functions in PyTorch ☆244 · Updated 2 years ago