This repo contains code for studying the interplay between quantization and sparsity methods.
☆26 · Feb 26, 2025 · Updated last year
Alternatives and similar repositories for quantization-sparsity-interplay
Users interested in quantization-sparsity-interplay are comparing it to the repositories listed below.
- ☆41 · Nov 22, 2025 · Updated 5 months ago
- ☆21 · Oct 2, 2024 · Updated last year
- ☆18 · Nov 11, 2024 · Updated last year
- The official implementation of the DAC 2024 paper GQA-LUT ☆22 · Dec 20, 2024 · Updated last year
- ☆19 · Mar 21, 2023 · Updated 3 years ago
- A bit-level sparsity-aware multiply-accumulate processing element ☆19 · Jul 9, 2024 · Updated last year
- [NeurIPS 2024] AlphaPruning: Using Heavy-Tailed Self-Regularization Theory for Improved Layer-wise Pruning of Large Language Models ☆34 · Jun 9, 2025 · Updated 10 months ago
- [ICLR 2025] Official PyTorch implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… ☆29 · Jul 24, 2025 · Updated 9 months ago
- Official repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆68 · Mar 27, 2025 · Updated last year
- [ICCV 2023] EMQ: Evolving Training-free Proxies for Automated Mixed-Precision Quantization ☆28 · Dec 6, 2023 · Updated 2 years ago
- ☆120 · Nov 17, 2023 · Updated 2 years ago
- [ICLR 2026] The first W4A4KV4-quantized + 50%-sparse LLMs! ☆26 · Jan 26, 2026 · Updated 3 months ago
- ☆22 · Oct 25, 2024 · Updated last year
- ☆30 · Jul 22, 2024 · Updated last year
- Official implementation of the ICLR 2025 paper "QERA: An Analytical Framework for Quantization Error Reconstruction" ☆14 · Feb 4, 2025 · Updated last year
- BESA is a differentiable weight-pruning technique for large language models ☆17 · Mar 4, 2024 · Updated 2 years ago
- [ICLR 2025] RaSA: Rank-Sharing Low-Rank Adaptation ☆10 · May 19, 2025 · Updated 11 months ago
- LLM Inference with Microscaling Format ☆34 · Nov 12, 2024 · Updated last year
- Official PyTorch implementation of "LayerMerge: Neural Network Depth Compression through Layer Pruning and Merging" (ICML 2024) ☆31 · Apr 13, 2026 · Updated 3 weeks ago
- ☆15 · Apr 25, 2023 · Updated 3 years ago
- [ICML 2024] BiLLM: Pushing the Limit of Post-Training Quantization for LLMs ☆229 · Jan 11, 2025 · Updated last year
- Official implementation of "Pruning Large Language Models with Semi-Structural Adaptive Sparse Training" (AAAI 2025) ☆19 · Jul 1, 2025 · Updated 10 months ago
- ☆35 · May 24, 2024 · Updated last year
- ☆152 · Jul 19, 2025 · Updated 9 months ago
- [NeurIPS 2025] Official code for the paper "Beyond Attention or Similarity: Maximizing Conditional Diversity for Token Pruning in MLLMs" ☆99 · Sep 20, 2025 · Updated 7 months ago
- ☆32 · Dec 14, 2025 · Updated 4 months ago
- Modern C++ network programming library for Linux ☆19 · Mar 20, 2021 · Updated 5 years ago
- GitHub repo for OATS: Outlier-Aware Pruning through Sparse and Low-Rank Decomposition ☆21 · Apr 16, 2025 · Updated last year
- ☆243 · Nov 9, 2022 · Updated 3 years ago
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ☆75 · Jan 6, 2024 · Updated 2 years ago
- Verilog implementation of a systolic array with accompanying IO ☆12 · Dec 2, 2024 · Updated last year
- Library project ☆16 · Jun 17, 2022 · Updated 3 years ago
- EEZ Studio (Chinese edition) ☆13 · Jun 20, 2025 · Updated 10 months ago
- A simple cycle-accurate DaDianNao simulator ☆13 · Mar 27, 2019 · Updated 7 years ago
- [ICLR 2024] Jaiswal, A., Gan, Z., Du, X., Zhang, B., Wang, Z., & Yang, Y. "Compressing LLMs: The Truth Is Rarely Pure and Never Simple" ☆27 · Apr 21, 2025 · Updated last year
- Artifact for the paper "DX100: A Programmable Data Access Accelerator for Indirection" (ISCA 2025) ☆17 · Nov 6, 2025 · Updated 5 months ago
- [ICLR 2026] FastCar ☆16 · May 22, 2025 · Updated 11 months ago
- Official implementation of the ICLR paper "Streamlining Redundant Layers to Compress Large Language Models" ☆42 · May 1, 2025 · Updated last year
- ☆49 · Apr 22, 2021 · Updated 5 years ago