lightmatter-ai / INT-FP-QSim
Flexible simulator for mixed-precision and format simulation of LLMs and vision transformers.
☆47 · Updated last year
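INT-FP-QSim's own API is not shown on this page. Purely as an illustration of what simulating a reduced-precision float format involves, here is a minimal, hypothetical sketch (the `fake_quantize` function and all of its details are my own assumptions, not code from the library): round a value's mantissa to a given bit width while keeping the sign and exponent exact.

```python
import math

def fake_quantize(x: float, frac_bits: int) -> float:
    """Round x to a float with `frac_bits` explicit mantissa bits.

    Illustrative sketch only: sign and exponent are kept exact, and
    overflow, underflow, and subnormal handling are omitted.
    """
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)            # x = m * 2**e with 0.5 <= |m| < 1
    step = 2 ** (frac_bits + 1)     # +1 because frexp exposes the leading bit
    return math.ldexp(round(m * step) / step, e)
```

For example, `fake_quantize(0.1, 3)` returns 0.1015625, the nearest value with a 3-bit mantissa (as in an FP8 E4M3-style format); the rounding error this introduces is exactly the kind of effect such simulators let you study.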
Alternatives and similar repositories for INT-FP-QSim:
Users interested in INT-FP-QSim are comparing it to the repositories listed below.
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper, Outlier Suppression: Pushing the Limit of Low-bit Transformer L… ☆48 · Updated 2 years ago
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆105 · Updated 2 months ago
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆102 · Updated 4 months ago
- Official implementation of the EMNLP 2023 paper: Outlier Suppression+: Accurate quantization of large language models by equivalent and opti… ☆47 · Updated last year
- Open Source Projects from Pallas Lab ☆20 · Updated 3 years ago
- LLM Inference with Microscaling Format ☆19 · Updated 3 months ago
- ACL 2023 ☆38 · Updated last year
- Code repository of Evaluating Quantized Large Language Models ☆116 · Updated 5 months ago
- GPU operators for sparse tensor operations ☆30 · Updated 11 months ago
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model… ☆57 · Updated 11 months ago
- A collection of research papers on efficient training of DNNs ☆70 · Updated 2 years ago
- BitPack is a practical tool to efficiently save ultra-low-precision/mixed-precision quantized models. ☆50 · Updated 2 years ago
- Repository for CPU Kernel Generation for LLM Inference ☆25 · Updated last year
- Post-training sparsity-aware quantization ☆34 · Updated last year
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization ☆31 · Updated 4 months ago
- AFPQ code implementation ☆20 · Updated last year
- An algorithm for static activation quantization of LLMs ☆115 · Updated 2 weeks ago
- This repository contains integer operators on GPUs for PyTorch. ☆191 · Updated last year
- SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models ☆26 · Updated 6 months ago
- QAQ: Quality Adaptive Quantization for LLM KV Cache ☆46 · Updated 10 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆56 · Updated 3 months ago