lightmatter-ai / INT-FP-QSim
Flexible simulator for mixed-precision and mixed-format simulation of LLMs and vision transformers.
☆50 · Updated last year
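The "format simulation" in the description is commonly implemented as fake quantization: values are rounded and clamped to a low-precision integer grid but kept in ordinary floats, so accuracy impact can be measured on standard hardware. A minimal, hypothetical sketch of the idea (this is not INT-FP-QSim's actual API; the function name and details are illustrative):

```python
def fake_quantize(x, num_bits=8):
    """Symmetric per-tensor fake quantization of a list of floats.

    Illustrative only -- not INT-FP-QSim's API. Values are snapped to a
    signed integer grid (e.g. INT8) and immediately dequantized, so the
    result stays in floating point but carries the quantization error.
    """
    qmax = 2 ** (num_bits - 1) - 1                 # e.g. 127 for INT8
    scale = (max(abs(v) for v in x) / qmax) or 1.0  # avoid 0 for all-zero input
    # Quantize: scale, round to the integer grid, clamp to the INT range.
    q = [min(max(round(v / scale), -qmax - 1), qmax) for v in x]
    # Dequantize back to float; the rounding error is what the simulator measures.
    return [v * scale for v in q]
```

Simulators in this space generalize the same round-and-clamp step to other formats (FP8, microscaling, etc.) and apply it per layer or per channel.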
Alternatives and similar repositories for INT-FP-QSim
Users interested in INT-FP-QSim are comparing it to the libraries listed below:
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper, Outlier Suppression: Pushing the Limit of Low-bit Transformer L… ☆47 · Updated 2 years ago
- ☆31 · Updated last year
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆109 · Updated 8 months ago
- Official implementation of the EMNLP 2023 paper: Outlier Suppression+: Accurate quantization of large language models by equivalent and opti… ☆46 · Updated last year
- ACL 2023 ☆39 · Updated 2 years ago
- Code repository of Evaluating Quantized Large Language Models ☆124 · Updated 9 months ago
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model… ☆63 · Updated last year
- ☆59 · Updated last year
- LLM Inference with Microscaling Format ☆23 · Updated 7 months ago
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆110 · Updated 6 months ago
- AFPQ code implementation ☆21 · Updated last year
- This repo contains the source code for: Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs ☆38 · Updated 10 months ago
- This repository contains integer operators on GPUs for PyTorch. ☆205 · Updated last year
- ☆51 · Updated 11 months ago
- A collection of research papers on efficient training of DNNs ☆70 · Updated 2 years ago
- ☆60 · Updated last week
- ☆147 · Updated 11 months ago
- ☆20 · Updated last year
- GPU operators for sparse tensor operations ☆33 · Updated last year
- ☆15 · Updated 2 months ago
- Code for ICML 2021 submission ☆34 · Updated 4 years ago
- ☆10 · Updated 3 years ago
- ☆69 · Updated 7 months ago
- Reorder-based post-training quantization for large language models ☆191 · Updated 2 years ago
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆108 · Updated 2 months ago
- QAQ: Quality Adaptive Quantization for LLM KV Cache ☆51 · Updated last year
- ☆151 · Updated 2 years ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆310 · Updated 11 months ago
- Post-training sparsity-aware quantization ☆34 · Updated 2 years ago