Alternatives and similar repositories for quant_horizon (☆11, updated Jan 10, 2025)
Users interested in quant_horizon are comparing it to the libraries listed below.
- Summary of system papers/frameworks/code/tools on training or serving large models (☆57, updated Dec 17, 2023)
- NART (NART Is Not A RunTime), a deep learning inference framework (☆37, updated Mar 2, 2023)
- Offline quantization tools for deployment (☆142, updated Dec 28, 2023)
- ☆13 (updated Feb 16, 2022)
- A TensorFlow/Keras implementation of Same, Same But Different - Recovering Neural Network Quantization Error Through Weight Factorizatio… (☆10, updated Dec 18, 2019)
- ☆28 (updated Dec 2, 2024)
- A tool for model sparsification based on torch.fx (☆13, updated Jun 3, 2024)
- [TMLR] Official PyTorch implementation of the paper "Efficient Quantization-aware Training with Adaptive Coreset Selection" (☆37, updated Aug 20, 2024)
- Official implementation of the EMNLP 2023 paper: Outlier Suppression+: Accurate quantization of large language models by equivalent and opti… (☆50, updated Oct 21, 2023)
- Model Quantization Benchmark (☆18, updated Sep 30, 2025)
- High-performance FP8 GEMM kernels for SM89 and later GPUs (☆20, updated Jan 24, 2025)
- A repository of Binary General Matrix Multiply (BGEMM) via customized CUDA kernels; thanks to FP6-LLM for the wheels! (☆18, updated Aug 30, 2024)
- [AAAI 2023] Efficient and accurate models towards a practical deep learning baseline (☆13, updated Nov 29, 2022)
- [EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models, including LLMs, VLMs, and video generative models (☆680, updated Nov 19, 2025)
- The first comprehensive robustness investigation benchmark on the large-scale ImageNet dataset regarding ARchitecture design and Training tec… (☆150, updated Feb 4, 2026)
- ☆18 (updated Nov 29, 2023)
- PyTorch implementation of Near-Lossless Post-Training Quantization of Deep Neural Networks via a Piecewise Linear Approximation (☆23, updated Feb 17, 2020)
- IntLLaMA: a fast and light quantization solution for LLaMA (☆18, updated Jul 21, 2023)
- Code for "ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models" (ICLR 2024) (☆20, updated Feb 16, 2024)
- ☆19 (updated Mar 16, 2022)
- ☆26 (updated Apr 12, 2022)
- Revisiting Parameter Sharing for Automatic Neural Channel Number Search, NeurIPS 2020 (☆22, updated Nov 15, 2020)
- A simple PyTorch implementation of Differentiable Architecture Search (DARTS) (☆22, updated Aug 27, 2019)
- ☆21 (updated Feb 11, 2022)
- A framework for comparing low-bit integer and floating-point formats (☆66, updated Feb 6, 2026)
- PyTorch implementation of the paper "Generalizable Mixed-Precision Quantization via Attribution Rank Preservation", which is… (☆24, updated Aug 17, 2021)
- AFPQ code implementation (☆23, updated Nov 6, 2023)
- [CVPR 2024 Highlight & TPAMI 2025] Official PyTorch implementation of "TFMQ-DM: Temporal Feature Maintenance Quantization for… (☆108, updated Sep 29, 2025)
- Official implementation of the ECCV 2022 paper LIMPQ, "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" (☆61, updated Mar 19, 2023)
- Official implementation of the NeurIPS 2022 paper Q-ViT (☆106, updated May 22, 2023)
- [NeurIPS 2021] "Stronger NAS with Weaker Predictors", Junru Wu, Xiyang Dai, Dongdong Chen, Yinpeng Chen, Mengchen Liu, Ye Yu, Zhangyang W… (☆27, updated Sep 23, 2022)
- [ICML 2023] Official implementation of the paper "BiBench: Benchmarking and Analyzing Network Binar… (☆56, updated Mar 4, 2024)
- PyTorch implementation of BigNAS: Scaling Up Neural Architecture Search with Big Single-Stage Models (☆29, updated Aug 22, 2022)
- [ICLR 2024] Official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… (☆31, updated Mar 12, 2024)
- [ICCV 2021] Code release for "Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks" (☆32, updated Jul 24, 2022)
- ☆25 (updated Dec 11, 2021)
- [ICML 2022] "DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks", by Yonggan … (☆73, updated Jul 7, 2022)
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin… (☆41, updated Sep 9, 2025)
- Implementation of the ICLR 2018 paper "Loss-aware Weight Quantization of Deep Networks" (☆27, updated Oct 24, 2019)