This repository contains integer operators on GPUs for PyTorch.
☆236 · Updated Sep 29, 2023
Alternatives and similar repositories for torch-int
Users interested in torch-int are comparing it to the libraries listed below.
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models (☆1,637 · Updated Jul 12, 2024)
- Reorder-based post-training quantization for large language models (☆199 · Updated May 17, 2023)
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… (☆826 · Updated Mar 6, 2025)
- ☆162 · Updated Sep 15, 2023
- [CVPR 2023] PD-Quant: Post-Training Quantization Based on Prediction Difference Metric (☆61 · Updated Mar 23, 2023)
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper, Outlier Suppression: Pushing the Limit of Low-bit Transformer L… (☆49 · Updated Oct 5, 2022)
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers" (☆2,292 · Updated Mar 27, 2024)
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration (☆3,503 · Updated Jul 17, 2025)
- FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens (☆1,057 · Updated Sep 4, 2024)
- Post-Training Quantization for Vision Transformers (☆242 · Updated Jul 19, 2022)
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving (☆337 · Updated Jul 2, 2024)
- [ICML'21 Oral] I-BERT: Integer-only BERT Quantization (☆267 · Updated Jan 29, 2023)
- Model Compression Toolbox for Large Language Models and Diffusion Models (☆774 · Updated Aug 14, 2025)
- Official implementation of the EMNLP 2023 paper: Outlier Suppression+: Accurate quantization of large language models by equivalent and opti… (☆51 · Updated Oct 21, 2023)
- [ICCV 2023] Q-Diffusion: Quantizing Diffusion Models (☆371 · Updated Mar 21, 2024)
- PyTorch implementation of BRECQ, ICLR 2021 (☆296 · Updated Aug 1, 2021)
- Implementation of Post-training Quantization on Diffusion Models (CVPR 2023) (☆143 · Updated Apr 1, 2023)
- ☆172 · Updated Mar 9, 2023
- [TMLR] Official PyTorch implementation of the paper "Efficient Quantization-aware Training with Adaptive Coreset Selection" (☆39 · Updated Aug 20, 2024)
- FP8 flash attention for the Ada architecture, implemented with the CUTLASS library (☆82 · Updated Aug 12, 2024)
- Training Quantized Neural Networks with a Full-precision Auxiliary Module (☆13 · Updated Jun 19, 2020)
- Repository for the QUIK project, enabling the use of 4-bit kernels for generative inference (EMNLP 2024) (☆186 · Updated Apr 16, 2024)
- SQuant [ICLR 2022] (☆131 · Updated Sep 27, 2022)
- The official implementation of PTQD: Accurate Post-Training Quantization for Diffusion Models (☆103 · Updated Mar 12, 2024)
- Code for the NeurIPS 2024 paper: QuaRot, end-to-end 4-bit inference of large language models (☆503 · Updated Nov 26, 2024)
- BitSplit post-training quantization (☆50 · Updated Dec 20, 2021)
- Transformer-related optimization, including BERT, GPT (☆6,412 · Updated Mar 27, 2024)
- Code for the accepted NeurIPS 2019 paper "MetaQuant: Learning to Quantize by Learning to Penetrate Non-differentiable Quantization" (☆54 · Updated May 8, 2020)
- [ICLR 2024 spotlight] OmniQuant, a simple and powerful quantization technique for LLMs (☆892 · Updated Nov 26, 2025)
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization (☆717 · Updated Aug 13, 2024)
- Model Quantization Benchmark (☆865 · Updated Apr 20, 2025)
- ☆261 · Updated Jul 11, 2024
- [IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer (☆360 · Updated Apr 11, 2023)
- SparseTIR: Sparse Tensor Compiler for Deep Learning (☆144 · Updated Mar 31, 2023)
- ☆120 · Updated Nov 17, 2023
- ☆15 · Updated Mar 21, 2025
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" (☆324 · Updated Mar 4, 2025)
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference (☆2,323 · Updated May 11, 2025)
- Official code for Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM (☆14 · Updated Dec 27, 2023)
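Most of the repositories above build on some form of post-training integer quantization. As a point of reference, here is a minimal sketch of generic symmetric per-tensor INT8 quantize/dequantize in plain NumPy; it illustrates the textbook scheme only and is not the implementation of torch-int or any listed project.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor INT8 quantization: one scale for the whole tensor."""
    scale = np.abs(x).max() / 127.0  # largest magnitude maps to +/-127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float tensor."""
    return q.astype(np.float32) * scale

x = np.array([0.1, -1.27, 0.635, 0.0], dtype=np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
# Per-element round-trip error is bounded by half the quantization step.
assert np.max(np.abs(x - x_hat)) <= scale / 2 + 1e-6
```

The projects listed here differ mainly in how they choose these scales (per-channel vs. per-tensor, calibration data, outlier handling) and in how the resulting integer matmuls are executed on GPU.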