colehawkins / bayesian-tensor-rank-determination
☆12 · Updated 3 years ago
Alternatives and similar repositories for bayesian-tensor-rank-determination:
Users interested in bayesian-tensor-rank-determination are comparing it to the libraries listed below:
- A curated list of tensor decomposition resources for model compression ☆59 · Updated this week
- TedNet: A PyTorch Toolkit for Tensor Decomposition Networks ☆95 · Updated 3 years ago
- Efficient Riemannian Optimization on Stiefel Manifold via Cayley Transform ☆38 · Updated 5 years ago
- Code for the ICML 2021 and ICLR 2022 papers: Skew Orthogonal Convolutions, Improved deterministic l2 robustness on CIFAR-10 and CIFAR-100 ☆18 · Updated 3 years ago
- Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection ☆21 · Updated 4 years ago
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆45 · Updated last year
- Neuron Merging: Compensating for Pruned Neurons (NeurIPS 2020) ☆43 · Updated 4 years ago
- Code accompanying the NeurIPS 2020 paper: WoodFisher (Singh & Alistarh, 2020) ☆49 · Updated 4 years ago
- SLTrain: a sparse plus low-rank approach for parameter- and memory-efficient pretraining (NeurIPS 2024) ☆30 · Updated 5 months ago
- [ICML 2022] Training Your Sparse Neural Network Better with Any Mask. Ajay Jaiswal, Haoyu Ma, Tianlong Chen, Ying Ding, and Zhangyang Wang ☆27 · Updated 2 years ago
- Implementation of Continuous Sparsification, a method for pruning and ticket search in deep networks ☆33 · Updated 2 years ago
- Official implementation for the paper "Controlled Sparsity via Constrained Optimization" ☆10 · Updated 2 years ago
- Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation (ICML'24 Oral) ☆14 · Updated 8 months ago
- Code to demonstrate the F-Principle in DNN training ☆58 · Updated 2 years ago
- ☆11 · Updated 2 years ago
- [NeurIPS 2021] Sparse Training via Boosting Pruning Plasticity with Neuroregeneration ☆31 · Updated 2 years ago
- Code for testing DCT plus Sparse (DCTpS) networks ☆14 · Updated 3 years ago
- Spectral Tensor Train Parameterization of Deep Learning Layers ☆15 · Updated 3 years ago
- ☆35 · Updated 2 years ago
- Prospect Pruning: Finding Trainable Weights at Initialization Using Meta-Gradients ☆31 · Updated 3 years ago
- ☆47 · Updated 5 years ago
- Code for "Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot" ☆42 · Updated 4 years ago
- [ICLR 2023] NTK-SAP: Improving neural network pruning by aligning training dynamics ☆18 · Updated last year
- [ICLR 2022] "Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, and No Retraining" by Lu Miao*, Xiaolong Luo*, T… ☆29 · Updated 3 years ago
- Implementations of the algorithms described in the paper "On the Convergence Theory for Hessian-Free Bilevel Algorithms" ☆10 · Updated 5 months ago
- Implicit networks can be trained efficiently and simply by using Jacobian-free Backprop (JFB) ☆35 · Updated 3 years ago
- Lightweight torch implementation of RigL, a sparse-to-sparse optimizer ☆56 · Updated 3 years ago
- ☆35 · Updated 3 years ago
- [ICLR 2023] "Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!" Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen… ☆28 · Updated last year
- [NeurIPS 2021] Code for "Taxonomizing local versus global structure in neural network loss landscapes" https://arxiv.org/abs/2107.11228 ☆19 · Updated 3 years ago