BlackHC / neural_net_checklist
☆150 · Updated 11 months ago
Alternatives and similar repositories for neural_net_checklist
Users interested in neural_net_checklist are comparing it to the libraries listed below.
- An implementation of PSGD Kron second-order optimizer for PyTorch ☆92 · Updated 3 months ago
- TensorHue is a Python library that allows you to visualize tensors right in your console, making understanding and debugging tensor conte… ☆118 · Updated 4 months ago
- ☆197 · Updated 7 months ago
- Diffusion models in PyTorch ☆106 · Updated 3 weeks ago
- Exca - Execution and caching tool for Python ☆87 · Updated this week
- 🧱 Modula software package ☆204 · Updated 3 months ago
- The boundary of neural network trainability is fractal ☆210 · Updated last year
- Getting crystal-like representations with harmonic loss ☆191 · Updated 3 months ago
- Just some miscellaneous utility functions / decorators / modules related to PyTorch and Accelerate to help speed up implementation of new… ☆123 · Updated 11 months ago
- Supporting PyTorch FSDP for optimizers ☆82 · Updated 7 months ago
- PyTorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ☆179 · Updated last month
- ☆273 · Updated last year
- Efficient optimizers ☆232 · Updated last week
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆140 · Updated last month
- The AdEMAMix Optimizer: Better, Faster, Older. ☆183 · Updated 10 months ago
- Universal Tensor Operations in Einstein-Inspired Notation for Python. ☆385 · Updated 3 months ago
- ☆303 · Updated last year
- For optimization algorithm research and development. ☆521 · Updated this week
- Uncertainty quantification with PyTorch ☆362 · Updated 3 months ago
- ☆110 · Updated last month
- A State-Space Model with Rational Transfer Function Representation. ☆79 · Updated last year
- Explorations into the proposal from the paper "Grokfast, Accelerated Grokking by Amplifying Slow Gradients" ☆101 · Updated 6 months ago
- Parameter-Free Optimizers for PyTorch ☆130 · Updated last year
- Library for Jacobian descent with PyTorch. It enables the optimization of neural networks with multiple losses (e.g. multi-task learning)… ☆251 · Updated this week
- Minimal (400 LOC) implementation, Maximum (multi-node, FSDP) GPT training ☆129 · Updated last year
- ☆81 · Updated last year
- ☆61 · Updated 8 months ago
- ☆79 · Updated last year
- A simple implementation of Bayesian Flow Networks (BFN) ☆240 · Updated last year
- ☆65 · Updated 2 years ago
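Several of the entries above (PSGD Kron, the affine/low-rank preconditioner repo, the parameter-free and efficient-optimizer projects) revolve around preconditioned gradient methods. As a rough illustration of the underlying idea, and not the API of any listed library, here is a minimal NumPy sketch of per-coordinate (diagonal) preconditioning on an ill-conditioned quadratic; the function name and hyperparameters are purely illustrative.

```python
import numpy as np

def diag_precond_gd(A, x0, lr=0.5, steps=200, eps=1e-8):
    """Gradient descent on f(x) = 0.5 * x^T A x with an Adagrad-style
    diagonal preconditioner (illustrative only, not any repo's API)."""
    x = x0.astype(float).copy()
    acc = np.zeros_like(x)                  # running sum of squared gradients
    for _ in range(steps):
        g = A @ x                           # gradient of the quadratic
        acc += g * g
        x -= lr * g / (np.sqrt(acc) + eps)  # per-coordinate step scaling
    return x

# A badly conditioned problem: curvature differs by 100x across coordinates.
A = np.diag([1.0, 100.0])
x = diag_precond_gd(A, np.array([1.0, 1.0]))
print(np.linalg.norm(x))                    # close to 0: the minimizer
```

The point of the preconditioner is that each coordinate gets its own effective step size, so the stiff (curvature 100) and soft (curvature 1) directions converge at comparable rates, where plain gradient descent with a single learning rate would have to crawl.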