Quartet (☆121, updated Jan 8, 2026)
Alternatives and similar repositories for Quartet
Users interested in Quartet are comparing it to the libraries listed below.
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning (☆171, updated Nov 11, 2025)
- Official implementation for "Training LLMs with MXFP4" (☆121, updated Apr 25, 2025)
- ☆47, updated May 20, 2025
- ☆15, updated Sep 22, 2024
- [COLM 2025] Official PyTorch implementation of "Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models" (☆72, updated Jul 8, 2025)
- FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation (☆51, updated Aug 24, 2025)
- [NeurIPS 2025, Spotlight] Ambient-o: Training Good Models with Bad Data (☆33, updated Jan 21, 2026)
- Row-wise block scaling for FP8-quantized matrix multiplication; solution to the GPU MODE AMD challenge (☆18, updated Feb 9, 2026)
- ☆101, updated Feb 26, 2026
- DeeperGEMM: a heavily optimized version (☆75, updated May 5, 2025)
- Fast Hadamard transform in CUDA, with a PyTorch interface (☆293, updated Mar 10, 2026)
- Code for the paper "Four Over Six: More Accurate NVFP4 Quantization with Adaptive Block Scaling" (☆138, updated Mar 7, 2026)
- Training code for ParetoQ, introduced in "ParetoQ: Scaling Laws in Extremely Low-bit LLM Quantization" (☆119, updated Oct 15, 2025)
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) (☆276, updated Jul 16, 2025)
- ☆24, updated Jan 29, 2026
- ☆12, updated Jan 4, 2024
- PyTorch implementation of "Oscillation-Reduced MXFP4 Training for Vision Transformers" for DeiT pre-training (☆37, updated Jun 20, 2025)
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs (☆389, updated Apr 13, 2025)
- ☆16, updated May 14, 2025
- ☆580, updated Oct 29, 2024
- 🎯 An accuracy-first, highly efficient quantization toolkit for LLMs, designed to minimize quality degradation across Weight-Only Quantiza… (☆914, updated this week)
- [NeurIPS 2023] Token-Scaled Logit Distillation for Ternary Weight Generative Language Models (☆18, updated Dec 6, 2023)
- ☆87, updated Jan 23, 2025
- The triangle in action! Triton (☆16, updated Feb 15, 2024)
- ☆52, updated May 19, 2025
- A collection of GPU experiments and benchmarks for my personal understanding and research (☆26, updated this week)
- Transformers components, but in Triton (☆34, updated May 9, 2025)
- FP16xINT4 LLM inference kernel achieving near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens (☆1,041, updated Sep 4, 2024)
- Implementation of PGONAS (CVPR 2022 Workshop) and RD-NAS (ICASSP 2023) (☆23, updated Apr 25, 2023)
- PyTorch bindings for CUTLASS grouped GEMM (☆186, updated Feb 19, 2026)
- ☆15, updated Jan 12, 2026
- ☆27, updated Feb 27, 2025
- Quantized Attention on GPU (☆44, updated Nov 22, 2024)
- AFPQ code implementation (☆23, updated Nov 6, 2023)
- My submission for the GPU MODE / AMD FP8 matmul challenge (☆29, updated Jun 4, 2025)
- QuIP quantization (☆62, updated Mar 17, 2024)
- Benchmark tests supporting the TiledCUDA library (☆18, updated Nov 19, 2024)
- ☆48, updated Jan 18, 2024
- ☆129, updated Jan 22, 2024
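Several of the repositories above (MXFP4 training, NVFP4 with adaptive block scaling, row-wise FP8 block scaling) revolve around the same core idea: quantizing tensors in small blocks, with one shared scale per block. As a point of reference, here is a minimal NumPy sketch of symmetric per-block quantization; the block size and level count are illustrative, and this is not any listed repo's actual implementation.

```python
# Minimal sketch of block-scaled low-bit quantization (the idea behind
# MXFP4/NVFP4/FP8 block formats); NumPy only, for illustration.
import numpy as np

def quantize_blockwise(x, block=32, n_levels=16):
    """Quantize a 1-D float array in blocks of `block`, one scale per block.

    n_levels=16 corresponds to a 4-bit signed grid [-8, 7].
    """
    x = x.reshape(-1, block)
    # Per-block scale: map each block's max magnitude onto the top level.
    scale = np.abs(x).max(axis=1, keepdims=True) / (n_levels // 2 - 1)
    scale = np.where(scale == 0, 1.0, scale)  # avoid divide-by-zero on all-zero blocks
    q = np.clip(np.round(x / scale), -(n_levels // 2), n_levels // 2 - 1)
    return q.astype(np.int8), scale

def dequantize_blockwise(q, scale):
    """Reconstruct the float values from integer codes and per-block scales."""
    return (q.astype(np.float32) * scale).reshape(-1)

rng = np.random.default_rng(0)
x = rng.standard_normal(128).astype(np.float32)
q, s = quantize_blockwise(x)
x_hat = dequantize_blockwise(q, s)
err = np.abs(x - x_hat).max()
```

Because each block's scale is `blockmax / 7`, the worst-case rounding error is half a quantization step, i.e. at most `blockmax / 14`; block scaling keeps that error proportional to local, not global, magnitudes.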
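The fast Hadamard transform entry above (and Quartet-style incoherence preprocessing generally) relies on applying an orthogonal Hadamard rotation in O(n log n). A plain-Python sketch of the unnormalized fast Walsh-Hadamard transform, purely to illustrate the butterfly structure (the CUDA repo's kernels are far more involved):

```python
# Unnormalized fast Walsh-Hadamard transform; n must be a power of two.
import numpy as np

def fwht(x):
    """Return H @ x computed via butterflies in O(n log n) instead of O(n^2)."""
    x = np.asarray(x, dtype=np.float64).copy()
    n = x.shape[0]
    h = 1
    while h < n:
        # Combine pairs (j, j+h) within each stride-2h group: sum and difference.
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x
```

Since the unnormalized transform satisfies H @ H = n * I, applying `fwht` twice recovers the input scaled by n, which makes the inverse trivial.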