sowmaster / esjacobians
Implementations of the algorithms described in the paper: On the Convergence Theory for Hessian-Free Bilevel Algorithms.
☆10 · Updated 3 months ago
Alternatives and similar repositories for esjacobians:
Users interested in esjacobians are comparing it to the libraries listed below.
- Code for the ICML 2021 and ICLR 2022 papers: Skew Orthogonal Convolutions, Improved deterministic l2 robustness on CIFAR-10 and CIFAR-100 ☆18 · Updated 3 years ago
- [JMLR] TRADES + random smoothing for certifiable robustness ☆14 · Updated 4 years ago
- Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection ☆21 · Updated 4 years ago
- ☆19 · Updated 4 years ago
- Training vision models with full-batch gradient descent and regularization ☆37 · Updated 2 years ago
- This is the official implementation of the ICML 2023 paper - Can Forward Gradient Match Backpropagation? ☆12 · Updated last year
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 2 years ago
- [ICML 2022] Training Your Sparse Neural Network Better with Any Mask. Ajay Jaiswal, Haoyu Ma, Tianlong Chen, Ying Ding, and Zhangyang Wang ☆27 · Updated 2 years ago
- ☆10 · Updated 6 months ago
- Code base for SRSGD. ☆28 · Updated 4 years ago
- [NeurIPS 2020] "Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free" by Haotao Wang*, Tianlong C… ☆43 · Updated 3 years ago
- Predicting Out-of-Distribution Error with the Projection Norm ☆17 · Updated 2 years ago
- ICLR 2022 (Spotlight): Continual Learning With Filter Atom Swapping ☆15 · Updated last year
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated last year
- ☆19 · Updated 5 years ago
- Implementation of Effective Sparsification of Neural Networks with Global Sparsity Constraint ☆28 · Updated 2 years ago
- Invariant-feature Subspace Recovery (ISR) ☆23 · Updated 2 years ago
- ☆14 · Updated 3 years ago
- Experiments from "The Generalization-Stability Tradeoff in Neural Network Pruning": https://arxiv.org/abs/1906.03728 ☆14 · Updated 4 years ago
- Code to reproduce experiments from "Does Knowledge Distillation Really Work?", a paper which appeared in the NeurIPS 2021 proceedings ☆33 · Updated last year
- Code for the CVPR 2021 paper: MOOD: Multi-level Out-of-distribution Detection ☆38 · Updated last year
- Code Repository for the NeurIPS 2021 paper: "Self-Supervised Representation Learning on Neural Network Weights for Model Characteristic P… ☆17 · Updated 7 months ago
- Efficient Riemannian Optimization on Stiefel Manifold via Cayley Transform ☆37 · Updated 5 years ago
- ☆57 · Updated 2 years ago
- ☆36 · Updated 3 years ago
- Codebase for the paper "A Gradient Flow Framework for Analyzing Network Pruning" ☆21 · Updated 4 years ago
- Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-based Models Reliable? (ICML 2021) ☆28 · Updated 2 years ago
- Official Code of The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks [ICML 2022] ☆14 · Updated 2 years ago
- PyTorch version of NIPS'16 "Learning to learn by gradient descent by gradient descent" ☆64 · Updated last year