Model-Compression / Lossless_Compression
We propose a lossless compression algorithm for DNNs based on the NTK matrix. The compressed network yields asymptotically the same NTK as the original (dense and unquantized) network, with its weights and activations taking values only in {−1, 0, 1}, up to scaling.
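To illustrate the kind of representation the compressed network uses, below is a minimal sketch of ternarizing a weight tensor to values in {−1, 0, 1} with a single per-tensor scale. This is a generic threshold-based ternarization for illustration only, not the paper's NTK-based construction; the `sparsity` parameter and the least-squares scale are assumptions of this sketch.

```python
import numpy as np

def ternarize(w: np.ndarray, sparsity: float = 0.5):
    """Map a weight tensor to scale * q with q in {-1, 0, 1}.

    `sparsity` is the approximate fraction of entries sent to 0
    (a hypothetical knob, not from the paper).
    """
    # Threshold chosen so roughly `sparsity` of the entries become 0.
    t = np.quantile(np.abs(w), sparsity)
    mask = np.abs(w) > t
    q = np.sign(w) * mask                     # entries in {-1, 0, 1}
    # Per-tensor scale minimizing ||w - scale * q||_F over the kept entries.
    scale = float(np.abs(w[mask]).mean()) if mask.any() else 0.0
    return scale, q

# Example: ternarize a random Gaussian weight matrix.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
scale, q = ternarize(w, sparsity=0.5)
```

The reconstructed tensor `scale * q` then needs only a ternary code plus one float per tensor, which is the storage regime the compressed network operates in.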
Alternatives and similar repositories for Lossless_Compression
Users interested in Lossless_Compression compare it to the repositories listed below.
- Implementation of PGONAS for CVPR22W and RD-NAS for ICASSP23
- Reproduction of "AM-LFS: AutoML for Loss Function Search"
- Official PyTorch implementation of "Meta-prediction Model for Distillation-Aware NAS on Unseen Datasets" (ICLR 2023, notable top 25%)
- To appear in the 11th International Conference on Learning Representations (ICLR 2023)
- Official project website of "NORM: Knowledge Distillation via N-to-One Representation Matching" (The paper of NORM is published in IC…)
- [NeurIPS 2024] Search for Efficient LLMs
- [ACL'22] Training-free Neural Architecture Search for RNNs and Transformers
- Code for Joint Neural Architecture Search and Quantization
- Code for RepNAS
- Code for Neural Architecture Ranker and detailed cell-information datasets based on the NAS-Bench series
- [CVPR 2022] DiSparse: Disentangled Sparsification for Multitask Model Compression
- Official PyTorch implementation of Super Vision Transformer (IJCV)
- Official implementation of "SimA: Simple Softmax-free Attention for Vision Transformers"
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin…
- [ICLR 2022] "Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, and No Retraining" by Lu Miao*, Xiaolong Luo*, T…
- [CVPR 2021] S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch)
- [ECCV 2022] Revisiting the Critical Factors of Augmentation-Invariant Representation Learning
- [ICML 2022] "Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness" by Tianlong Chen*, Huan Zhang*, Zhenyu Zhang, Shiyu…
- [ICCV 2023] Efficient Joint Optimization of Layer-Adaptive Weight Pruning in Deep Neural Networks
- [NeurIPS 2020] Revisiting Parameter Sharing for Automatic Neural Channel Number Search
- [TPAMI 2023] PyTorch implementation of "Training Compact CNNs for Image Classification Using Dynamic-coded Filter Fusion"
- Self-Distribution BNN
- [ICLR 2023] DropIT: Dropping Intermediate Tensors for Memory-Efficient DNN Training
- [ICLR 2022] The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training, by Shiwei Liu, Tianlo…