Model-Compression / Lossless_Compression
We propose a lossless compression algorithm for deep neural networks (DNNs) based on the neural tangent kernel (NTK) matrix. The compressed network yields asymptotically the same NTK as the original (dense, unquantized) network, while its weights and activations take values only in {-1, 0, 1} up to scaling.
☆26 · Updated 2 years ago
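To make the {-1, 0, 1}-up-to-scaling claim concrete, below is a minimal, hypothetical sketch of a ternary weight parameterization with a single fitted scale. It uses generic magnitude-threshold ternarization, not the repository's NTK-based construction; the function name `ternarize`, the `delta_frac` threshold rule, and the least-squares scale are assumptions made for illustration only.

```python
import torch

def ternarize(w: torch.Tensor, delta_frac: float = 0.05):
    """Generic threshold-based ternarization (illustrative sketch only).

    Entries with |w| below delta_frac * max|w| are zeroed; the rest are
    mapped to sign(w). A single least-squares scale is then fitted so that
    scale * ternary approximates w. This is NOT the repository's NTK-based
    construction, just a minimal example of weights taking values in
    {-1, 0, +1} up to scaling.
    """
    delta = delta_frac * w.abs().max()
    mask = (w.abs() > delta).to(w.dtype)          # 0/1 mask of kept entries
    ternary = torch.sign(w) * mask                # values in {-1, 0, +1}
    # Closed-form least-squares scale: argmin_s ||w - s * ternary||^2
    scale = (w * ternary).sum() / ternary.pow(2).sum().clamp(min=1e-12)
    return scale * ternary, ternary, scale

# Usage: ternarize a random linear layer's weight matrix.
w = torch.randn(256, 512)
w_hat, ternary, scale = ternarize(w)
print(torch.unique(ternary))                   # tensor([-1., 0., 1.])
print(((w - w_hat).norm() / w.norm()).item())  # relative approximation error
```

The point the sketch illustrates: after compression, every stored weight is one of three symbols, and the remaining per-layer expressiveness is carried by a single floating-point scale.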
Alternatives and similar repositories for Lossless_Compression
Users interested in Lossless_Compression are comparing it to the repositories listed below:
- ☆23 · Updated 3 years ago
- Implementation of PGONAS for CVPR22W and RD-NAS for ICASSP23 ☆22 · Updated 2 years ago
- Official PyTorch implementation of "Meta-prediction Model for Distillation-Aware NAS on Unseen Datasets" (ICLR 2023 notable top 25%) ☆26 · Updated last year
- Code for RepNAS ☆14 · Updated 3 years ago
- The official project website of "NORM: Knowledge Distillation via N-to-One Representation Matching" (The paper of NORM is published in IC… ☆20 · Updated 2 years ago
- [NeurIPS 2024] Search for Efficient LLMs ☆15 · Updated 10 months ago
- To appear in the 11th International Conference on Learning Representations (ICLR 2023). ☆18 · Updated 2 years ago
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆34 · Updated 2 years ago
- ☆25 · Updated 3 years ago
- Reproduction of "AM-LFS: AutoML for Loss Function Search" ☆14 · Updated 5 years ago
- [ACL'22] Training-free Neural Architecture Search for RNNs and Transformers ☆14 · Updated last year
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin… ☆41 · Updated 3 months ago
- [ICLR 2022] "Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, and No Retraining" by Lu Miao*, Xiaolong Luo*, T… ☆32 · Updated 3 years ago
- ☆40 · Updated 2 years ago
- A highly modular PyTorch framework with a focus on Neural Architecture Search (NAS). ☆23 · Updated 4 years ago
- ☆13 · Updated 4 years ago
- The code for Joint Neural Architecture Search and Quantization ☆14 · Updated 6 years ago
- ☆48 · Updated 2 years ago
- [ICML 2022] "Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness" by Tianlong Chen*, Huan Zhang*, Zhenyu Zhang, Shiyu… ☆17 · Updated 3 years ago
- DropIT: Dropping Intermediate Tensors for Memory-Efficient DNN Training (ICLR 2023) ☆31 · Updated 2 years ago
- [CVPR 2022] DiSparse: Disentangled Sparsification for Multitask Model Compression ☆14 · Updated 3 years ago
- [ICCV-2023] EMQ: Evolving Training-free Proxies for Automated Mixed Precision Quantization ☆28 · Updated 2 years ago
- ☆20 · Updated 2 years ago
- Recent Advances on Efficient Vision Transformers ☆55 · Updated 2 years ago
- Code for Neural Architecture Ranker and detailed cell-information datasets based on the NAS-Bench series ☆12 · Updated 3 years ago
- [CVPRW 21] "BNN - BN = ? Training Binary Neural Networks without Batch Normalization", Tianlong Chen, Zhenyu Zhang, Xu Ouyang, Zechun Liu… ☆57 · Updated 3 years ago
- [ICLR 2022] The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training by Shiwei Liu, Tianlo… ☆77 · Updated 2 years ago
- Official PyTorch implementation of our ECCV 2022 paper "Sliced Recursive Transformer" ☆66 · Updated 3 years ago
- Official code of "The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks" [ICML 2022] ☆17 · Updated 3 years ago
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration (CVPR 2021) ☆65 · Updated 4 years ago