This repo contains the code for studying the interplay between quantization and sparsity methods.
☆26 · Feb 26, 2025 · Updated last year
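The interplay this repo studies, applying sparsity and quantization to the same weights, can be illustrated with a minimal, generic sketch: magnitude pruning followed by symmetric uniform quantization. This is not the repo's actual pipeline; the function name, defaults, and ordering are illustrative assumptions.

```python
import numpy as np

def prune_then_quantize(w, sparsity=0.5, bits=4):
    """Illustrative sketch (not this repo's method): magnitude-prune a
    weight tensor, then symmetrically quantize the surviving weights."""
    # Magnitude pruning: zero out the smallest-|w| fraction of entries.
    k = int(w.size * sparsity)
    threshold = np.sort(np.abs(w).ravel())[k]
    mask = np.abs(w) >= threshold
    w_sparse = w * mask
    # Symmetric uniform quantization over the remaining values.
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w_sparse).max() / qmax if mask.any() else 1.0
    w_q = np.round(w_sparse / scale).clip(-qmax, qmax) * scale
    return w_q, mask
```

One design point the listed papers explore is exactly this ordering: quantizing after pruning lets the scale be fit to the surviving weights, while the reverse order quantizes outliers that pruning would have removed.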
Alternatives and similar repositories for quantization-sparsity-interplay
Users interested in quantization-sparsity-interplay are comparing it to the repositories listed below.
- ☆21 · Oct 2, 2024 · Updated last year
- The official implementation of the DAC 2024 paper GQA-LUT · ☆22 · Dec 20, 2024 · Updated last year
- ☆19 · Mar 21, 2023 · Updated 3 years ago
- A bit-level sparsity-aware multiply-accumulate processing element. · ☆18 · Jul 9, 2024 · Updated last year
- [NeurIPS 2024] AlphaPruning: Using Heavy-Tailed Self-Regularization Theory for Improved Layer-wise Pruning of Large Language Models · ☆33 · Jun 9, 2025 · Updated 10 months ago
- Official repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) · ☆68 · Mar 27, 2025 · Updated last year
- [ICCV 2023] EMQ: Evolving Training-free Proxies for Automated Mixed-Precision Quantization · ☆28 · Dec 6, 2023 · Updated 2 years ago
- [ICLR 2026] The first W4A4KV4 quantized + 50% sparse LLMs! · ☆26 · Jan 26, 2026 · Updated 2 months ago
- ☆119 · Nov 17, 2023 · Updated 2 years ago
- ☆22 · Oct 25, 2024 · Updated last year
- ☆30 · Jul 22, 2024 · Updated last year
- [NeurIPS 2025] Official implementation of "Enhancing Vision-Language Model Reliability with Uncertainty-Guided Dropout Decoding" · ☆22 · Dec 8, 2024 · Updated last year
- ☆11 · Jun 4, 2024 · Updated last year
- Official implementation of the ICLR'25 paper "QERA: An Analytical Framework for Quantization Error Reconstruction". · ☆14 · Feb 4, 2025 · Updated last year
- BESA is a differentiable weight pruning technique for large language models. · ☆17 · Mar 4, 2024 · Updated 2 years ago
- ☆13 · Jul 3, 2024 · Updated last year
- MICRO 2023 evaluation artifact for TeAAL · ☆10 · Oct 26, 2023 · Updated 2 years ago
- [ICLR 2025] RaSA: Rank-Sharing Low-Rank Adaptation · ☆10 · May 19, 2025 · Updated 10 months ago
- LLM inference with the Microscaling format · ☆34 · Nov 12, 2024 · Updated last year
- Official PyTorch implementation of "LayerMerge: Neural Network Depth Compression through Layer Pruning and Merging" (ICML 2024) · ☆31 · Aug 15, 2024 · Updated last year
- Benchmarking attention mechanisms in Vision Transformers. · ☆20 · Oct 10, 2022 · Updated 3 years ago
- ☆15 · Apr 25, 2023 · Updated 2 years ago
- [ICML 2024] BiLLM: Pushing the Limit of Post-Training Quantization for LLMs · ☆229 · Jan 11, 2025 · Updated last year
- GitHub repo for OATS: Outlier-Aware Pruning through Sparse and Low-Rank Decomposition · ☆19 · Apr 16, 2025 · Updated 11 months ago
- ☆35 · May 24, 2024 · Updated last year
- ☆30 · Dec 14, 2025 · Updated 4 months ago
- [NeurIPS 2025] Official code for the paper "Beyond Attention or Similarity: Maximizing Conditional Diversity for Token Pruning in MLLMs" · ☆95 · Sep 20, 2025 · Updated 6 months ago
- ☆243 · Nov 9, 2022 · Updated 3 years ago
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models · ☆73 · Jan 6, 2024 · Updated 2 years ago
- Accelerator Zoo · ☆20 · Oct 14, 2025 · Updated 6 months ago
- A systolic array implemented in Verilog, with accompanying IO · ☆12 · Dec 2, 2024 · Updated last year
- ☆19 · Mar 13, 2023 · Updated 3 years ago
- EEZ Studio (Chinese edition) · ☆13 · Jun 20, 2025 · Updated 9 months ago
- A simple cycle-accurate DaDianNao simulator · ☆13 · Mar 27, 2019 · Updated 7 years ago
- [ICLR 2024] Jaiswal, A., Gan, Z., Du, X., Zhang, B., Wang, Z., & Yang, Y. "Compressing LLMs: The Truth Is Rarely Pure and Never Simple." · ☆27 · Apr 21, 2025 · Updated 11 months ago
- [ICLR 2026] FastCar · ☆16 · May 22, 2025 · Updated 10 months ago
- Artifact for the paper "DX100: A Programmable Data Access Accelerator for Indirection" (ISCA 2025) · ☆17 · Nov 6, 2025 · Updated 5 months ago
- Official implementation of the ICLR paper "Streamlining Redundant Layers to Compress Large Language Models" · ☆41 · May 1, 2025 · Updated 11 months ago
- ☆49 · Apr 22, 2021 · Updated 4 years ago