Work in progress.
☆79 · Nov 25, 2025 · Updated 5 months ago
Alternatives and similar repositories for QuEST
Users interested in QuEST are comparing it to the libraries listed below.
- The official implementation of the DAC 2024 paper GQA-LUT · ☆22 · Dec 20, 2024 · Updated last year
- Code for data-aware compression of DeepSeek models · ☆72 · Dec 11, 2025 · Updated 4 months ago
- ☆16 · Sep 22, 2024 · Updated last year
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization · ☆39 · Sep 24, 2024 · Updated last year
- ☆36 · Mar 12, 2025 · Updated last year
- KV cache compression via sparse coding · ☆17 · Oct 26, 2025 · Updated 6 months ago
- ☆169 · Jun 22, 2025 · Updated 10 months ago
- VPTQ, a flexible and extreme low-bit quantization algorithm · ☆678 · Apr 25, 2025 · Updated last year
- ☆16 · Sep 27, 2023 · Updated 2 years ago
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs · ☆391 · Apr 13, 2025 · Updated last year
- Boosting 4-bit inference kernels with 2:4 sparsity · ☆95 · Sep 4, 2024 · Updated last year
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization · ☆171 · Nov 26, 2025 · Updated 5 months ago
- This repository contains the training code of ParetoQ, introduced in our work "ParetoQ: Scaling Laws in Extremely Low-bit LLM Quantization" · ☆123 · Oct 15, 2025 · Updated 6 months ago
- [ICLR25] STBLLM: Breaking the 1-Bit Barrier with Structured Binary LLMs · ☆20 · Jun 3, 2025 · Updated 10 months ago
- ☆13 · Apr 1, 2026 · Updated 3 weeks ago
- Code repo for the paper "SpinQuant: LLM quantization with learned rotations" · ☆390 · Feb 14, 2025 · Updated last year
- ☆49 · May 20, 2025 · Updated 11 months ago
- XLand-100B: A Large-Scale Multi-Task Dataset for In-Context Reinforcement Learning · ☆14 · Jun 19, 2024 · Updated last year
- ☆107 · Feb 26, 2026 · Updated 2 months ago
- [ICML 2024] Official Repository for the paper "Transformers Get Stable: An End-to-End Signal Propagation Theory for Language Models" · ☆10 · Jul 19, 2024 · Updated last year
- ☆53 · Jul 18, 2024 · Updated last year
- PyTorch distributed backend extension with compression support · ☆17 · Mar 24, 2025 · Updated last year
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training · ☆261 · Aug 9, 2025 · Updated 8 months ago
- Fast Hadamard transform in CUDA, with a PyTorch interface (a generic sketch of the transform follows this list) · ☆310 · Mar 10, 2026 · Updated last month
- ☆46 · Sep 27, 2025 · Updated 7 months ago
- The official implementation of the ICML 2023 paper OFQ-ViT · ☆39 · Oct 3, 2023 · Updated 2 years ago
- Quartet II Official Code · ☆69 · Mar 23, 2026 · Updated last month
- [NeurIPS 24 Spotlight] MaskLLM: Learnable Semi-structured Sparsity for Large Language Models · ☆187 · Jan 1, 2025 · Updated last year
- Official implementation of the paper "Linear Transformers with Learnable Kernel Functions are Better In-Context Models" · ☆169 · Jan 16, 2025 · Updated last year
- Official Repository for Task-Circuit Quantization · ☆25 · Jun 1, 2025 · Updated 10 months ago
- ☆71 · Aug 27, 2024 · Updated last year
- Simple PyTorch reimplementation of the "Flow Matching for Generative Modeling" paper (https://arxiv.org/abs/2210.02747) · ☆23 · Aug 10, 2024 · Updated last year
- ☆41 · Nov 22, 2025 · Updated 5 months ago
- [ICML 2025] SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models · ☆59 · Aug 9, 2024 · Updated last year
- High-performance FP8 GEMM kernels for SM89 and later GPUs · ☆21 · Jan 24, 2025 · Updated last year
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… · ☆834 · Mar 6, 2025 · Updated last year
- ☆52 · Nov 5, 2024 · Updated last year
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning · ☆178 · Nov 11, 2025 · Updated 5 months ago
- ☆42 · Mar 28, 2024 · Updated 2 years ago
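
Several entries above (the CUDA fast Hadamard transform, SpinQuant, RoLoRA) revolve around Hadamard or rotation transforms that spread weight and activation outliers before low-bit quantization. The snippet below is a minimal, self-contained NumPy sketch of that idea, not code from QuEST or any listed repository; the names `fwht` and `int4_absmax_roundtrip` are illustrative only.

```python
import numpy as np

def fwht(x: np.ndarray) -> np.ndarray:
    """Unnormalized fast Walsh-Hadamard transform of a 1-D array.

    The length must be a power of two; divide the result by sqrt(n)
    to make the transform orthonormal, so it acts as a rotation.
    """
    x = x.astype(np.float64, copy=True)
    n = x.size
    assert n and n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b          # butterfly: sums
            x[i + h:i + 2 * h] = a - b  # butterfly: differences
        h *= 2
    return x

def int4_absmax_roundtrip(w: np.ndarray) -> np.ndarray:
    """Symmetric absmax quantization to 4-bit integers and back."""
    scale = np.abs(w).max() / 7.0          # symmetric int4 range [-7, 7]
    q = np.clip(np.round(w / scale), -7, 7)
    return q * scale

# Toy weight row with one large outlier: rotating with the orthonormal
# Hadamard transform spreads the outlier's energy across all coordinates,
# which typically lowers the absmax int4 quantization error.
w = np.random.randn(64)
w[3] = 25.0                                              # inject an outlier
h_w = fwht(w) / np.sqrt(w.size)                          # rotate
w_back = fwht(int4_absmax_roundtrip(h_w)) / np.sqrt(w.size)  # quantize, rotate back
print("plain int4 error  :", np.linalg.norm(w - int4_absmax_roundtrip(w)))
print("rotated int4 error:", np.linalg.norm(w - w_back))
```

Because the Sylvester Hadamard matrix is symmetric and satisfies H·H = n·I, applying the normalized transform twice recovers the original vector, which is why the same function serves as both the rotation and its inverse in this sketch; the repositories listed above implement the transform with fused CUDA kernels rather than this reference loop.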