vsingh-group / FrameQuant
☆10 · Updated last year
Alternatives and similar repositories for FrameQuant
Users interested in FrameQuant are comparing it to the repositories listed below.
- LLM Inference with Microscaling Format ☆34 · Updated last year
- ☆85 · Updated last year
- This repository contains the training code of ParetoQ, introduced in our work "ParetoQ: Scaling Laws in Extremely Low-bit LLM Quantization" ☆118 · Updated 3 months ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆120 · Updated last year
- Fast Hadamard transform in CUDA, with a PyTorch interface (a minimal reference sketch of this transform follows the list) ☆281 · Updated 3 months ago
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models" ☆68 · Updated last year
- ☆60 · Updated last year
- ☆31 · Updated last year
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆123 · Updated 7 months ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆165 · Updated 2 months ago
- ☆44 · Updated 2 years ago
- The official repository of Quamba1 [ICLR 2025] & Quamba2 [ICML 2025] ☆66 · Updated 7 months ago
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper "Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models" ☆49 · Updated 3 years ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆93 · Updated last year
- ☆30 · Updated last year
- ☆40 · Updated last year
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference ☆56 · Updated last year
- Explore training for quantized models ☆26 · Updated 6 months ago
- Get down and dirty with FlashAttention 2.0 in PyTorch: plug and play, no complex CUDA kernels ☆113 · Updated 2 years ago
- ☆21 · Updated 2 years ago
- Official implementation for Training LLMs with MXFP4 ☆118 · Updated 9 months ago
- ☆83 · Updated last year
- ☆160 · Updated 2 years ago
- ☆77 · Updated last year
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline ☆127 · Updated last year
- [ACL 2024] A novel QAT framework with self-distillation to enhance ultra-low-bit LLMs ☆134 · Updated last year
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆172 · Updated 2 months ago
- ☆208 · Updated 4 years ago
- ☆170 · Updated 2 years ago
- Framework to reduce autotune overhead to zero for well-known deployments ☆95 · Updated 4 months ago
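
Many of the entries above revolve around the same idea: round-to-nearest low-bit quantization becomes far more accurate if weights or activations are first passed through a fast orthogonal transform that spreads outliers across channels. As a point of reference for what the Hadamard CUDA kernel above accelerates, here is a minimal pure-PyTorch sketch of that pattern. All function names, the per-row INT4 scheme, and the normalization are illustrative assumptions, not code from any repository listed here.

```python
# A minimal sketch of the rotate -> quantize -> rotate-back pattern shared
# by several repositories above. Illustrative only, not code from any repo.
import torch

def hadamard_transform(x: torch.Tensor) -> torch.Tensor:
    """Unnormalized fast Walsh-Hadamard transform along the last dim, O(n log n)."""
    n = x.shape[-1]
    assert n > 0 and (n & (n - 1)) == 0, "last dim must be a power of two"
    y, h = x.clone(), 1
    while h < n:
        # Butterfly step: split each block of 2h values into halves (a, b)
        # and replace them with (a + b, a - b).
        y = y.reshape(*x.shape[:-1], n // (2 * h), 2, h)
        a, b = y[..., 0, :], y[..., 1, :]
        y = torch.stack((a + b, a - b), dim=-2)
        h *= 2
    return y.reshape(x.shape)

def fake_quantize_int4(w: torch.Tensor) -> torch.Tensor:
    """Symmetric per-row round-to-nearest 4-bit fake quantization (illustrative)."""
    qmax = 7  # symmetric signed 4-bit range [-7, 7]
    scale = w.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / qmax
    return (w / scale).round().clamp(-qmax, qmax) * scale

def rotate_quantize_rotate_back(w: torch.Tensor) -> torch.Tensor:
    n = w.shape[-1]
    w_rot = hadamard_transform(w) / n**0.5   # orthonormal rotation spreads outliers
    w_q = fake_quantize_int4(w_rot)          # quantize in the rotated basis
    return hadamard_transform(w_q) / n**0.5  # H/sqrt(n) is symmetric, so it is its own inverse

if __name__ == "__main__":
    torch.manual_seed(0)
    w = torch.randn(16, 64)
    w[:, 0] *= 20  # inject an outlier channel, the case rotation is meant to help
    err_plain = (fake_quantize_int4(w) - w).pow(2).mean()
    err_rot = (rotate_quantize_rotate_back(w) - w).pow(2).mean()
    print(f"RTN error: {err_plain:.4f}  rotated-RTN error: {err_rot:.4f}")
```

Because the normalized Hadamard matrix is symmetric and orthonormal, applying the transform twice with 1/sqrt(n) scaling each time recovers the original basis; dedicated kernels like the CUDA repository above make this transform cheap enough to sit on the inference critical path, where a pure-PyTorch loop like this one would be far too slow.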