sowmaster / esjacobians
Implementations of the algorithms described in the paper: On the Convergence Theory for Hessian-Free Bilevel Algorithms.
☆10 · Updated 11 months ago
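The repository name and description suggest an evolution-strategies-style (zeroth-order) approach to bilevel optimization that avoids forming Hessians or mixed Jacobians. Below is a minimal sketch of that general idea on a toy ridge-regression bilevel problem: the inner problem is solved approximately by gradient descent, and the hypergradient of the outer (validation) objective is estimated with Gaussian-smoothing perturbations only. This is an assumption-based illustration of the technique, not the repository's or the paper's exact algorithm; all function names, data, and hyperparameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bilevel problem:
#   inner: w*(lam) = argmin_w  0.5/n ||X_tr w - y_tr||^2 + 0.5 * exp(lam) * ||w||^2
#   outer: min_lam  g(w*(lam)) = validation MSE at the inner solution
X_tr, y_tr = rng.normal(size=(50, 5)), rng.normal(size=50)
X_va, y_va = rng.normal(size=(20, 5)), rng.normal(size=20)

def inner_solve(lam, steps=100, lr=0.05):
    """Approximate w*(lam) with a few gradient-descent steps on the inner loss."""
    w = np.zeros(X_tr.shape[1])
    for _ in range(steps):
        grad = X_tr.T @ (X_tr @ w - y_tr) / len(y_tr) + np.exp(lam) * w
        w -= lr * grad
    return w

def outer_loss(lam):
    """g(w*(lam)): validation loss evaluated at the approximate inner solution."""
    w = inner_solve(lam)
    return 0.5 * np.mean((X_va @ w - y_va) ** 2)

def es_hypergrad(lam, sigma=0.05, n_samples=8):
    """Hessian-free hypergradient: Gaussian-smoothing (ES) estimate of dG/dlam.
    Only zeroth-order evaluations of the outer objective are used, so no
    second-order derivatives are ever computed."""
    base = outer_loss(lam)
    est = 0.0
    for _ in range(n_samples):
        delta = rng.normal()
        est += (outer_loss(lam + sigma * delta) - base) / sigma * delta
    return est / n_samples

# Outer loop: plain gradient descent on lam using the ES hypergradient estimate.
lam = 0.0
for _ in range(20):
    lam -= 0.5 * es_hypergrad(lam)
print("tuned log-regularization:", lam, "outer loss:", outer_loss(lam))
```

The key property shown here is that the outer update never differentiates through the inner solver; it only queries the outer objective at perturbed hyperparameters, which is what makes the scheme Hessian-free.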
Alternatives and similar repositories for esjacobians
Users who are interested in esjacobians are comparing it to the repositories listed below.
- Code for the ICML 2021 and ICLR 2022 papers: Skew Orthogonal Convolutions, Improved deterministic l2 robustness on CIFAR-10 and CIFAR-100 ☆18 · Updated 3 years ago
- ☆14 · Updated 4 years ago
- Code to reproduce experiments from 'Does Knowledge Distillation Really Work?', a paper that appeared in the NeurIPS 2021 proceedings ☆33 · Updated 2 years ago
- ☆19 · Updated 6 years ago
- [JMLR] TRADES + random smoothing for certifiable robustness ☆14 · Updated 5 years ago
- ☆19 · Updated 4 years ago
- Codebase for SRSGD ☆29 · Updated 5 years ago
- Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection ☆21 · Updated 4 years ago
- ☆10 · Updated last year
- [ICLR 2020] "Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference" ☆24 · Updated 3 years ago
- PyTorch version of NIPS'16 "Learning to learn by gradient descent by gradient descent" ☆67 · Updated 2 years ago
- Official implementation of the ICML 2023 paper "Can Forward Gradient Match Backpropagation?" ☆12 · Updated 2 years ago
- Implementations of orthogonal and semi-orthogonal convolutions in the Fourier domain with applications to adversarial robustness ☆47 · Updated 4 years ago
- PyTorch and Torch implementation for our CVPR 2020 (Oral) paper: Controllable Orthogonalization in Training DNNs ☆25 · Updated 4 years ago
- Codebase for the paper "A Gradient Flow Framework for Analyzing Network Pruning" ☆20 · Updated 4 years ago
- Paper and code for "Curriculum Learning by Optimizing Learning Dynamics" (AISTATS 2021) ☆19 · Updated 4 years ago
- Official repo for Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks, accepted at NeurIPS 2020 ☆34 · Updated 5 years ago
- Experiments from "The Generalization-Stability Tradeoff in Neural Network Pruning": https://arxiv.org/abs/1906.03728 ☆14 · Updated 5 years ago
- "Understanding and Accelerating Neural Architecture Search with Training-Free and Theory-Grounded Metrics" by Wuyang Chen, Xinyu Gong, Yu… ☆27 · Updated 2 years ago
- ☆28 · Updated 2 years ago
- Official implementation of Convolutional Normalization: Improving Robustness and Training for Deep Neural Networks ☆30 · Updated 3 years ago
- [NeurIPS 2020] "Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free" by Haotao Wang*, Tianlong C… ☆44 · Updated 3 years ago
- Prospect Pruning: Finding Trainable Weights at Initialization Using Meta-Gradients ☆32 · Updated 3 years ago
- ☆38 · Updated 5 years ago
- Code accompanying the NeurIPS 2020 paper: WoodFisher (Singh & Alistarh, 2020) ☆53 · Updated 4 years ago
- Code for the paper "Training CNNs with Selective Allocation of Channels" (ICML 2019) ☆25 · Updated 6 years ago
- [ICML 2022] Training Your Sparse Neural Network Better with Any Mask. Ajay Jaiswal, Haoyu Ma, Tianlong Chen, Ying Ding, and Zhangyang Wang ☆30 · Updated 3 years ago
- Minimum viable code for the Decodable Information Bottleneck paper; PyTorch implementation ☆11 · Updated 5 years ago
- Code for the ICML 2021 paper "Sharing Less is More: Lifelong Learning in Deep Networks with Selective Layer Transfer" ☆12 · Updated 4 years ago
- Code for Understanding Architectures Learnt by Cell-based Neural Architecture Search ☆27 · Updated 5 years ago