[ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
☆23 · Mar 15, 2024 · Updated 2 years ago
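SmoothQuant's core idea is to migrate quantization difficulty from activation outliers into the weights via a per-channel scale, leaving the layer's output mathematically unchanged. A minimal NumPy sketch of that identity, using made-up toy shapes and a synthetic outlier channel (not code from any of the repositories listed below):

```python
import numpy as np

# Toy setup: activations X (tokens x channels), weights W (channels x out).
rng = np.random.default_rng(0)
channel_scale = np.ones(8)
channel_scale[3] = 50.0                      # channel 3 carries activation outliers
X = rng.normal(size=(4, 8)) * channel_scale
W = rng.normal(size=(8, 3))

# SmoothQuant smoothing factor: s_j = max|X_j|^alpha / max|W_j|^(1-alpha),
# with alpha the migration strength (0.5 is the paper's default).
alpha = 0.5
s = np.abs(X).max(axis=0) ** alpha / np.abs(W).max(axis=1) ** (1 - alpha)

X_smooth = X / s            # activation outliers are flattened -> easier to quantize
W_smooth = W * s[:, None]   # difficulty migrated into the (already easy) weights

# The matmul result is preserved exactly: X W = (X diag(s)^-1)(diag(s) W).
assert np.allclose(X @ W, X_smooth @ W_smooth)
```

The smoothed activations have a much smaller per-channel dynamic range, which is what makes simple 8-bit post-training quantization accurate for both operands.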
Alternatives and similar repositories for smoothquantplus
Users interested in smoothquantplus are comparing it to the libraries listed below.
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆12 · Nov 14, 2025 · Updated 5 months ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆11 · Dec 13, 2023 · Updated 2 years ago
- ☆30 · Jul 22, 2024 · Updated last year
- An easy-to-use package for implementing SmoothQuant for LLMs ☆111 · Apr 7, 2025 · Updated last year
- ☆87 · Jan 23, 2025 · Updated last year
- QAQ: Quality Adaptive Quantization for LLM KV Cache ☆53 · Mar 27, 2024 · Updated 2 years ago
- Zen-NAS, a lightning-fast, training-free Neural Architecture Search algorithm ☆11 · Nov 12, 2021 · Updated 4 years ago
- Model optimizer used in Adlik. ☆42 · May 23, 2023 · Updated 2 years ago
- ☆21 · Feb 5, 2024 · Updated 2 years ago
- ☆11 · Dec 26, 2025 · Updated 3 months ago
- ☆38 · Jun 14, 2023 · Updated 2 years ago
- [ICLRW'26] EoRA: Fine-tuning-free Compensation for Compressed LLM with Eigenspace Low-Rank Approximation ☆33 · Mar 24, 2026 · Updated 3 weeks ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆336 · Jul 2, 2024 · Updated last year
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆170 · Nov 26, 2025 · Updated 4 months ago
- The official implementation of the EMNLP 2023 paper LLM-FP4 ☆222 · Dec 15, 2023 · Updated 2 years ago
- YOLOX deployment with TensorRT ☆13 · Apr 19, 2022 · Updated 3 years ago
- Repository for the QUIK project, enabling the use of 4-bit kernels for generative inference (EMNLP 2024) ☆185 · Apr 16, 2024 · Updated last year
- ☆129 · Jan 22, 2024 · Updated 2 years ago
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). ☆250 · Mar 15, 2024 · Updated 2 years ago
- ☆27 · Jul 30, 2024 · Updated last year
- Implemented a script that automatically adjusts Qwen3's inference and non-inference capabilities, based on an OpenAI-like API. The infere… ☆22 · May 9, 2025 · Updated 11 months ago
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper, Outlier Suppression: Pushing the Limit of Low-bit Transformer L… ☆49 · Oct 5, 2022 · Updated 3 years ago
- Open-source evaluation toolkit for large vision-language models (LVLMs), supporting ~100 VLMs and 30+ benchmarks ☆15 · Feb 17, 2025 · Updated last year
- ☆15 · Jun 26, 2024 · Updated last year
- ROS 2 inference sample for using the Rockchip NPU. ☆17 · Feb 20, 2023 · Updated 3 years ago
- Reproduction of the paper Aligner: Achieving Efficient Alignment through Weak-to-Strong Correction ☆22 · May 29, 2024 · Updated last year
- LLM quantization toolkit ☆20 · Updated this week
- Official implementation of the ICLR 2024 paper AffineQuant ☆29 · Mar 30, 2024 · Updated 2 years ago
- Implementation of the paper "Span Classification with Structured Information for Disfluency Detection in Sp… ☆15 · Jun 6, 2023 · Updated 2 years ago
- ☆16 · Jan 14, 2025 · Updated last year
- Summary of some awesome work on optimizing LLM inference ☆232 · Feb 14, 2026 · Updated 2 months ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆120 · Mar 6, 2024 · Updated 2 years ago
- Patches for Hugging Face Transformers to save memory ☆36 · Jun 2, 2025 · Updated 10 months ago
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ and easy export to ONNX/ONNX Runtime ☆188 · Mar 23, 2026 · Updated 3 weeks ago
- ☆29 · Feb 3, 2026 · Updated 2 months ago
- [NAACL 2025] MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM Finetuning ☆19 · May 31, 2025 · Updated 10 months ago
- Code for the AAAI 2024 oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model… ☆69 · Mar 7, 2024 · Updated 2 years ago
- Implementation of the content-aware image resizing algorithm from the paper "Seam Carving for Content-Aware Image Resizing" ☆13 · Jul 22, 2019 · Updated 6 years ago
- Sequence-level 1F1B schedule for LLMs. ☆19 · Jun 4, 2024 · Updated last year