Qualcomm-AI-research / FP8-quantization
☆154 · Updated 2 years ago
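Qualcomm's repo, like several of the alternatives below, implements *simulated* (fake) FP8 quantization: tensors are rounded to an FP8 grid but kept in higher precision for the actual arithmetic. A minimal sketch of that round-trip, assuming PyTorch ≥ 2.1 for the `float8_e4m3fn` dtype; `fake_quant_fp8` is a hypothetical helper for illustration, not this repo's API:

```python
import torch

def fake_quant_fp8(x: torch.Tensor) -> torch.Tensor:
    # Hypothetical helper, not the repo's API: scale into the E4M3
    # representable range, round-trip through float8_e4m3fn, rescale.
    fp8_max = torch.finfo(torch.float8_e4m3fn).max    # 448.0 for E4M3
    scale = x.abs().max().clamp(min=1e-12) / fp8_max  # per-tensor scale
    x_fp8 = (x / scale).to(torch.float8_e4m3fn)       # quantize (rounds to nearest)
    return x_fp8.to(x.dtype) * scale                  # dequantize back to input dtype

x = torch.randn(4, 8)
print((x - fake_quant_fp8(x)).abs().max())  # small FP8 rounding error
```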
Alternatives and similar repositories for FP8-quantization
Users interested in FP8-quantization are comparing it to the libraries listed below.
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware ☆110 · Updated 8 months ago
- ☆206 · Updated 3 years ago
- PyTorch emulation library for Microscaling (MX)-compatible data formats ☆262 · Updated last month
- This repository contains integer operators on GPUs for PyTorch ☆208 · Updated last year
- ☆236 · Updated 2 years ago
- ☆64 · Updated last year
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper "Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models" ☆47 · Updated 2 years ago
- ☆76 · Updated 3 years ago
- Llama INT4 CUDA inference with AWQ ☆54 · Updated 6 months ago
- Official implementation of the EMNLP 2023 paper "Outlier Suppression+: Accurate quantization of large language models by equivalent and optimal shifting and scaling" ☆46 · Updated last year
- ☆22 · Updated last year
- Fast Hadamard transform in CUDA, with a PyTorch interface (see the rotation sketch after this list) ☆213 · Updated last year
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning" ☆125 · Updated 2 years ago
- Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming ☆99 · Updated 4 years ago
- ☆158 · Updated last year
- ☆150 · Updated last year
- BitPack is a practical tool to efficiently save ultra-low precision/mixed-precision quantized models ☆56 · Updated 2 years ago
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" ☆151 · Updated 2 weeks ago
- [ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization ☆95 · Updated 3 years ago
- ☆79 · Updated 6 months ago
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models" ☆64 · Updated last year
- Code implementation of GPTAQ (https://arxiv.org/abs/2504.02692) ☆55 · Updated last week
- High-speed GEMV kernels, with up to a 2.7x speedup over the PyTorch baseline ☆113 · Updated last year
- ☆51 · Updated last year
- A collection of research papers on efficient training of DNNs ☆70 · Updated 3 years ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆137 · Updated 2 years ago
- Code repository of "Evaluating Quantized Large Language Models" ☆129 · Updated 10 months ago
- A standalone GEMM kernel for FP16 activation and quantized weight, extracted from FasterTransformer ☆94 · Updated 3 weeks ago
- The official PyTorch implementation of the ICLR 2022 paper "QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quantization" ☆123 · Updated 2 years ago
- ☆19 · Updated 3 years ago
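The fast Hadamard transform kernels above, and transform-based methods such as FlatQuant, revolve around the same trick: multiply activations by an orthogonal rotation so per-channel outliers are spread evenly across channels before quantization. Below is a dense reference implementation of the normalized Hadamard rotation, assuming a power-of-two last dimension; `hadamard_rotate` is a hypothetical name, and the CUDA repo above computes the same transform in O(n log n) without materializing H:

```python
import torch

def hadamard_rotate(x: torch.Tensor) -> torch.Tensor:
    # Dense O(n^2) reference; fast kernels compute this in O(n log n).
    n = x.shape[-1]
    assert n & (n - 1) == 0, "last dim must be a power of two"
    H = torch.ones(1, 1, dtype=x.dtype, device=x.device)
    while H.shape[0] < n:  # Sylvester construction: H_2n = [[H, H], [H, -H]]
        H = torch.cat([torch.cat([H, H], dim=1),
                       torch.cat([H, -H], dim=1)], dim=0)
    return x @ (H / n ** 0.5)  # H / sqrt(n) is orthogonal
```

Because H/√n is orthogonal, the rotation applied to a layer's input can be absorbed into the adjacent weight matrices without changing the network's output, which is what makes it attractive as a pre-quantization transform.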