Model-Compression / Lossless_Compression
We propose a lossless compression algorithm based on the neural tangent kernel (NTK) matrix for DNNs. The compressed network yields asymptotically the same NTK as the original (dense and unquantized) network, with its weights and activations taking values only in {0, 1, -1} up to scaling.
☆26 · Updated last year
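The description above centres on one concrete property: every weight and activation of the compressed network is a scaled element of {0, 1, -1}. As a rough illustration of that ternary-with-scale format only (not the repository's NTK-based compression procedure), the sketch below quantizes a weight tensor to {-1, 0, +1} with a single per-tensor scale; the function name `ternarize` and the `threshold_ratio` knob are illustrative assumptions, not part of this repo's API.

```python
import torch

def ternarize(weight: torch.Tensor, threshold_ratio: float = 0.05):
    """Map a dense weight tensor to {-1, 0, +1} with one positive scale,
    so that `scale * ternary` approximates the original tensor.

    `threshold_ratio` is a hypothetical knob (not from the paper): entries
    whose magnitude falls below threshold_ratio * max|w| are zeroed out.
    """
    threshold = threshold_ratio * weight.abs().max()
    ternary = torch.sign(weight) * (weight.abs() > threshold)
    mask = ternary != 0
    # One scale per tensor: the mean magnitude of the surviving entries
    # minimizes the L2 error of `scale * ternary` on those entries.
    scale = weight[mask].abs().mean() if mask.any() else weight.new_tensor(0.0)
    return scale, ternary

if __name__ == "__main__":
    w = torch.randn(128, 128)
    scale, t = ternarize(w)
    print(sorted(t.unique().tolist()))        # typically [-1.0, 0.0, 1.0]
    print((w - scale * t).norm() / w.norm())  # relative reconstruction error
```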
Alternatives and similar repositories for Lossless_Compression
Users interested in Lossless_Compression are comparing it to the libraries listed below.
- ☆22 · Updated 2 years ago
- Implementation of PGONAS for CVPR22W and RD-NAS for ICASSP23 · ☆22 · Updated 2 years ago
- The official project website of "NORM: Knowledge Distillation via N-to-One Representation Matching" (The paper of NORM is published in IC… · ☆20 · Updated 2 years ago
- [ACL'22] Training-free Neural Architecture Search for RNNs and Transformers · ☆13 · Updated last year
- [NeurIPS 2024] Search for Efficient LLMs · ☆15 · Updated 8 months ago
- [ICLR 2022] "Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, and No Retraining" by Lu Miao*, Xiaolong Luo*, T… · ☆32 · Updated 3 years ago
- Reproduction of "AM-LFS: AutoML for Loss Function Search" · ☆14 · Updated 5 years ago
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) · ☆34 · Updated 2 years ago
- Code for RepNAS · ☆13 · Updated 3 years ago
- Official PyTorch implementation of "Meta-prediction Model for Distillation-Aware NAS on Unseen Datasets" (ICLR 2023 notable top 25%) · ☆24 · Updated last year
- ☆40 · Updated last year
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin… · ☆40 · Updated 2 weeks ago
- [CVPR 2022] DiSparse: Disentangled Sparsification for Multitask Model Compression · ☆14 · Updated 3 years ago
- To appear in the 11th International Conference on Learning Representations (ICLR 2023). · ☆18 · Updated 2 years ago
- A highly modular PyTorch framework with a focus on Neural Architecture Search (NAS). · ☆23 · Updated 3 years ago
- Official PyTorch implementation of Super Vision Transformer (IJCV) · ☆43 · Updated 2 years ago
- ☆26 · Updated 3 years ago
- [ICML 2022] "Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness" by Tianlong Chen*, Huan Zhang*, Zhenyu Zhang, Shiyu… · ☆17 · Updated 3 years ago
- Codes for Neural Architecture Ranker and detailed cell information datasets based on NAS-Bench series · ☆12 · Updated 3 years ago
- ☆47 · Updated 2 years ago
- DropIT: Dropping Intermediate Tensors for Memory-Efficient DNN Training (ICLR 2023) · ☆31 · Updated 2 years ago
- [ICML 2022] "DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks", by Yonggan … · ☆72 · Updated 3 years ago
- Official PyTorch implementation of our ECCV 2022 paper "Sliced Recursive Transformer" · ☆66 · Updated 3 years ago
- Official implementation for paper "Relational Surrogate Loss Learning", ICLR 2022 · ☆36 · Updated 2 years ago
- ☆26 · Updated last year
- [NeurIPS 2021] "Stronger NAS with Weaker Predictors", Junru Wu, Xiyang Dai, Dongdong Chen, Yinpeng Chen, Mengchen Liu, Ye Yu, Zhangyang W… · ☆27 · Updated 3 years ago
- [ICLR 2022] The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training by Shiwei Liu, Tianlo… · ☆76 · Updated 2 years ago
- Code for ICML 2022 paper "SPDY: Accurate Pruning with Speedup Guarantees" · ☆20 · Updated 2 years ago
- PyTorch implementation of MLP-Mixer · ☆37 · Updated 4 years ago
- Auto-Prox-AAAI24 · ☆13 · Updated last year