snu-mllab / Efficient-CNN-Depth-Compression
Official PyTorch implementation of "Efficient Latency-Aware CNN Depth Compression via Two-Stage Dynamic Programming" (ICML'23)
☆13 · Updated last year
Alternatives and similar repositories for Efficient-CNN-Depth-Compression
Users interested in Efficient-CNN-Depth-Compression are comparing it to the libraries listed below.
- ☆13 · Updated 2 years ago
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆33 · Updated 2 years ago
- ☆47 · Updated 2 years ago
- ☆25 · Updated 3 years ago
- ☆22 · Updated 3 years ago
- In progress. ☆66 · Updated last year
- ☆13 · Updated last year
- Official implementation for ECCV 2022 paper LIMPQ - "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" ☆62 · Updated 2 years ago
- PyTorch implementation of our paper accepted by CVPR 2022 -- IntraQ: Learning Synthetic Images with Intra-Class Heterogeneity for Zero-Sh… ☆33 · Updated 3 years ago
- [NeurIPS 2024] Search for Efficient LLMs ☆15 · Updated 9 months ago
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆39 · Updated last year
- [TMLR] Official PyTorch implementation of paper "Efficient Quantization-aware Training with Adaptive Coreset Selection" ☆34 · Updated last year
- [ICML 2022] "DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks", by Yonggan … ☆72 · Updated 3 years ago
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆30 · Updated last year
- [NeurIPS 2022 Spotlight] This is the official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity" ☆74 · Updated 2 years ago
- [ICCV 23] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging tech… ☆101 · Updated 2 years ago
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference ☆30 · Updated last year
- [ICLR 2022] The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training by Shiwei Liu, Tianlo… ☆77 · Updated 2 years ago
- This is the PyTorch implementation for the paper: Generalizable Mixed-Precision Quantization via Attribution Rank Preservation, which is… ☆24 · Updated 4 years ago
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for Diffusion Models. ☆22 · Updated last year
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin… ☆40 · Updated last month
- [ICLR 2024] The Need for Speed: Pruning Transformers with One Recipe ☆30 · Updated last year
- This is the official PyTorch implementation for "Sharpness-aware Quantization for Deep Neural Networks". ☆43 · Updated 3 years ago
- [CVPR'23] SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer ☆75 · Updated last year
- The code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models" ☆65 · Updated 7 months ago
- Page for the CVPR 2023 Tutorial - Efficient Neural Networks: From Algorithm Design to Practical Mobile Deployments ☆12 · Updated 2 years ago
- [ICLR 2024 Spotlight] This is the official PyTorch implementation of "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Di… ☆66 · Updated last year
- super-resolution; post-training quantization; model compression ☆13 · Updated last year
- [ECCV 2024] Isomorphic Pruning for Vision Models ☆79 · Updated last year
- [ECCV24] MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization ☆14 · Updated 11 months ago