Guangxuan-Xiao / torch-int
This repository contains integer operators on GPUs for PyTorch.
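The core idea behind integer operators like these is to replace FP16 matrix multiplies with INT8 ones: quantize activations and weights to int8, accumulate the product in int32, then rescale back to floating point. The sketch below illustrates that W8A8 pattern in plain PyTorch; it is not torch-int's actual API, and the function names here are hypothetical illustrations only.

```python
# Minimal sketch of the W8A8 integer-matmul idea (NOT torch-int's API).
# Quantize -> int32-accumulated matmul -> rescale, as an integer GEMM
# kernel would do on the GPU.
import torch

def quantize_sym(t: torch.Tensor):
    """Symmetric per-tensor int8 quantization: returns (int8 tensor, scale)."""
    scale = t.abs().max() / 127.0
    q = torch.clamp(torch.round(t / scale), -127, 127).to(torch.int8)
    return q, scale

torch.manual_seed(0)
x = torch.randn(4, 64)   # activations
w = torch.randn(64, 32)  # weights

qx, sx = quantize_sym(x)
qw, sw = quantize_sym(w)

# Accumulate in int32 (as a real int8 GEMM kernel would), then rescale
# by the product of the two quantization scales.
y_int = qx.to(torch.int32) @ qw.to(torch.int32)
y = y_int.to(torch.float32) * (sx * sw)

ref = x @ w
err = (y - ref).abs().max().item()
print(f"max abs error vs fp32 matmul: {err:.4f}")
```

A dedicated kernel performs the int32-accumulated multiply directly on int8 tensor cores rather than materializing int32 copies, which is where the speedup over FP16 comes from.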
☆237 · Updated Sep 29, 2023
Alternatives and similar repositories for torch-int
Users interested in torch-int are comparing it to the libraries listed below.
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models (☆1,609, updated Jul 12, 2024)
- Reorder-based post-training quantization for large language models (☆198, updated May 17, 2023)
- ☆160, updated Sep 15, 2023
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… (☆812, updated Mar 6, 2025)
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper, Outlier Suppression: Pushing the Limit of Low-bit Transformer L… (☆49, updated Oct 5, 2022)
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers" (☆2,256, updated Mar 27, 2024)
- [CVPR 2023] PD-Quant: Post-Training Quantization Based on Prediction Difference Metric (☆60, updated Mar 23, 2023)
- ☆169, updated Mar 9, 2023
- [ICML'21 Oral] I-BERT: Integer-only BERT Quantization (☆265, updated Jan 29, 2023)
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration (☆3,438, updated Jul 17, 2025)
- Post-Training Quantization for Vision Transformers (☆238, updated Jul 19, 2022)
- Model Compression Toolbox for Large Language Models and Diffusion Models (☆753, updated Aug 14, 2025)
- FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens (☆1,018, updated Sep 4, 2024)
- GPTQ inference TVM kernel (☆40, updated Apr 25, 2024)
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving (☆336, updated Jul 2, 2024)
- [ICCV 2023] Q-Diffusion: Quantizing Diffusion Models (☆370, updated Mar 21, 2024)
- Official implementation of the EMNLP 2023 paper: Outlier Suppression+: Accurate quantization of large language models by equivalent and opti… (☆50, updated Oct 21, 2023)
- Implementation of Post-training Quantization on Diffusion Models (CVPR 2023) (☆141, updated Apr 1, 2023)
- PyTorch implementation of BRECQ, ICLR 2021 (☆289, updated Aug 1, 2021)
- [TMLR] Official PyTorch implementation of the paper "Efficient Quantization-aware Training with Adaptive Coreset Selection" (☆37, updated Aug 20, 2024)
- An external memory allocator example for PyTorch (☆16, updated Aug 10, 2025)
- FP8 flash attention implemented with the CUTLASS library for the Ada architecture (☆79, updated Aug 12, 2024)
- Repository for the QUIK project, enabling the use of 4-bit kernels for generative inference (EMNLP 2024) (☆184, updated Apr 16, 2024)
- [ICLR 2024 Spotlight] OmniQuant is a simple and powerful quantization technique for LLMs (☆887, updated Nov 26, 2025)
- SparseTIR: Sparse Tensor Compiler for Deep Learning (☆142, updated Mar 31, 2023)
- ☆31, updated May 29, 2025
- ☆113, updated Nov 17, 2023
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" (☆321, updated Mar 4, 2025)
- Model Quantization Benchmark (☆858, updated Apr 20, 2025)
- ☆261, updated Jul 11, 2024
- BitSplit post-training quantization (☆50, updated Dec 20, 2021)
- GPTQ inference Triton kernel (☆321, updated May 18, 2023)
- Training Quantized Neural Networks with a Full-precision Auxiliary Module (☆13, updated Jun 19, 2020)
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models (☆11, updated Dec 13, 2023)
- Code for the NeurIPS 2019 paper "MetaQuant: Learning to Quantize by Learning to Penetrate Non-differentiable Quantization" (☆54, updated May 8, 2020)
- Post-training sparsity-aware quantization (☆34, updated Feb 26, 2023)
- PyTorch implementation of "Deep Transferring Quantization" (ECCV 2020) (☆18, updated Jun 22, 2022)
- Transformer-related optimization, including BERT and GPT (☆6,392, updated Mar 27, 2024)
- [IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer (☆360, updated Apr 11, 2023)