Nota-NetsPresso / SNP
Structured Neuron Level Pruning to compress Transformer-based models [ECCV'24]
☆13 · Updated 10 months ago
Alternatives and similar repositories for SNP
Users interested in SNP are comparing it to the libraries listed below.
- The official NetsPresso Python package. ☆45 · Updated this week
- Compressed LLMs for Efficient Text Generation [ICLR'24 Workshop] ☆82 · Updated 8 months ago
- ☆88 · Updated last year
- A library for training, compressing, and deploying computer vision models (including ViT) on edge devices ☆68 · Updated 2 weeks ago
- OwLite is a low-code compression toolkit for AI models. ☆45 · Updated 3 weeks ago
- ☆56 · Updated 2 years ago
- Official PyTorch implementation of Online Continual Learning on Class Incremental Blurry Task Configuration with Anytime Inference (ICLR … ☆53 · Updated 2 years ago
- Reproduction of Vision Transformer in TensorFlow 2; supports training from scratch and fine-tuning. ☆48 · Updated 3 years ago
- Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples [NeurIPS 2021] ☆32 · Updated 3 years ago
- Official PyTorch implementation of HELP: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning (NeurIPS 2021 Spotlight… ☆63 · Updated 10 months ago
- In progress. ☆64 · Updated last year
- Ensemble Knowledge Guided Sub-network Search and Fine-tuning for Filter Pruning ☆18 · Updated 2 years ago
- ☆100 · Updated last year
- It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher [CVPR 2022 Oral] ☆29 · Updated 2 years ago
- NaturalInversion: Data-Free Image Synthesis Improving Real-World Consistency [AAAI 2022] ☆10 · Updated 3 years ago
- Getting GPU Util 99% ☆34 · Updated 4 years ago
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model… ☆62 · Updated last year
- Learning Features with Parameter-Free Layers [ICLR 2022] ☆84 · Updated 2 years ago
- A performance library for machine learning applications. ☆184 · Updated last year
- [ICLR 2023] RC-MAE ☆52 · Updated last year
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆107 · Updated last month
- FrostNet: Towards Quantization-Aware Network Architecture Search