cybertronai / autograd-lib
☆ 33 · Updated 4 years ago
Alternatives and similar repositories for autograd-lib:
Users interested in autograd-lib are comparing it to the libraries listed below.
- PyTorch-SSO: Scalable Second-Order methods in PyTorch ☆ 145 · Updated last year
- Limitations of the Empirical Fisher Approximation ☆ 47 · Updated 4 years ago
- Codebase for Learning Invariances in Neural Networks ☆ 93 · Updated 2 years ago
- ☆ 156 · Updated 2 years ago
- Hessian spectral density estimation in TF and Jax ☆ 121 · Updated 4 years ago
- Structured matrices for compressing neural networks ☆ 66 · Updated last year
- This repository is no longer maintained. Check ☆ 81 · Updated 4 years ago
- ☆ 98 · Updated 3 years ago
- Code for the paper "Tensor Programs II: Neural Tangent Kernel for Any Architecture" ☆ 104 · Updated 4 years ago
- Code for the article "What if Neural Networks had SVDs?", presented as a spotlight paper at NeurIPS 2020 ☆ 72 · Updated 6 months ago
- Monotone operator equilibrium networks ☆ 51 · Updated 4 years ago
- ☆ 29 · Updated 4 years ago
- Experiment code for "Randomized Automatic Differentiation" ☆ 66 · Updated 4 years ago
- ☆ 49 · Updated 4 years ago
- Geometric Certifications of Neural Nets ☆ 41 · Updated 2 years ago
- CHOP: An optimization library based on PyTorch, with applications to adversarial examples and structured neural network training ☆ 77 · Updated 11 months ago
- Reparameterize your PyTorch modules ☆ 70 · Updated 4 years ago
- ☆ 36 · Updated 3 years ago
- PyTorch implementation of KFAC and E-KFAC (Natural Gradient) ☆ 130 · Updated 5 years ago
- PyTorch implementation of Hessian-free optimisation ☆ 43 · Updated 5 years ago
- Code to accompany the paper "Radial Bayesian Neural Networks: Beyond Discrete Support in Large-Scale Bayesian Deep Learning" ☆ 33 · Updated 4 years ago
- Estimating Gradients for Discrete Random Variables by Sampling without Replacement ☆ 40 · Updated 5 years ago
- Paper lists and information on the mean-field theory of deep learning ☆ 75 · Updated 5 years ago
- Ἀνατομή is a PyTorch library to analyze representations of neural networks ☆ 62 · Updated last year
- Padé Activation Units: End-to-end Learning of Activation Functions in Deep Neural Networks ☆ 64 · Updated 3 years ago
- Code for Self-Tuning Networks (ICLR 2019) https://arxiv.org/abs/1903.03088 ☆ 53 · Updated 5 years ago
- Code for the Thermodynamic Variational Objective ☆ 26 · Updated 2 years ago
- Contains code for the NeurIPS 2020 paper by Pan et al., "Continual Deep Learning by Functional Regularisation of Memorable Past" ☆ 44 · Updated 4 years ago
- DeepOBS: A Deep Learning Optimizer Benchmark Suite ☆ 103 · Updated last year
- CIFAR-5m dataset ☆ 38 · Updated 4 years ago