QuIP quantization
☆64 · Mar 17, 2024 · Updated 2 years ago
Alternatives and similar repositories for QuIP-for-all
Users interested in QuIP-for-all are comparing it to the libraries listed below.
- EXL2 quantization generalized to other models. ☆10 · Mar 17, 2024 · Updated 2 years ago
- ☆590 · Oct 29, 2024 · Updated last year
- ☆169 · Jun 22, 2025 · Updated 10 months ago
- Code implementation of GPTAQ (https://arxiv.org/abs/2504.02692) ☆89 · Jul 28, 2025 · Updated 9 months ago
- Code for the NeurIPS 2024 paper QuaRot: end-to-end 4-bit inference for large language models. ☆505 · Nov 26, 2024 · Updated last year
- A tool for model sparsification based on torch.fx ☆13 · Jun 3, 2024 · Updated last year
- llama.cpp to PyTorch Converter ☆37 · Apr 8, 2024 · Updated 2 years ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆178 · Nov 11, 2025 · Updated 5 months ago
- An unsupervised model merging algorithm for Transformers-based language models. ☆108 · Apr 29, 2024 · Updated 2 years ago
- ☆74 · Jun 20, 2025 · Updated 10 months ago
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆397 · Feb 24, 2024 · Updated 2 years ago
- Fast Hadamard transform in CUDA, with a PyTorch interface ☆307 · Mar 10, 2026 · Updated last month
- Official PyTorch repository for Extreme Compression of Large Language Models via Additive Quantization (https://arxiv.org/pdf/2401.06118.p…) ☆1,318 · Feb 26, 2026 · Updated 2 months ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆154 · Aug 21, 2025 · Updated 8 months ago
- ☆21 · Feb 5, 2024 · Updated 2 years ago
- An OpenAI API-compatible LLM inference server based on ExLlamaV2. ☆25 · Feb 9, 2024 · Updated 2 years ago
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆337 · Apr 10, 2026 · Updated 2 weeks ago
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, and easy export to ONNX/ONNX Runtime. ☆190 · Mar 23, 2026 · Updated last month
- [ICLR 2026] ParoQuant: Pairwise Rotation Quantization for Efficient Reasoning LLM Inference ☆227 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆132 · Jun 25, 2024 · Updated last year
- [ACL 24 Findings] Implementation of Resonance RoPE and the PosGen synthetic dataset. ☆24 · Mar 5, 2024 · Updated 2 years ago
- ☆87 · Jan 23, 2025 · Updated last year
- Official implementation for Training LLMs with MXFP4 ☆124 · Apr 25, 2025 · Updated last year
- Using modal.com to process FineWeb-edu data ☆20 · Apr 11, 2026 · Updated 2 weeks ago
- Measuring the Signal-to-Noise Ratio in Language Model Evaluation ☆29 · Aug 19, 2025 · Updated 8 months ago
- ☆18 · Jul 3, 2025 · Updated 9 months ago
- Reorder-based post-training quantization for large language models ☆199 · May 17, 2023 · Updated 2 years ago
- ☆120 · Mar 18, 2026 · Updated last month
- A community list of common phrases generated by GPT and Claude models ☆81 · Nov 19, 2023 · Updated 2 years ago
- An implementation of LLMzip using GPT-2 ☆14 · Aug 7, 2023 · Updated 2 years ago
- Prompt Jinja2 templates for LLMs ☆35 · Jul 9, 2025 · Updated 9 months ago
- Official PyTorch implementation of "GuidedQuant: Large Language Model Quantization via Exploiting End Loss Guidance" (ICML 2025) ☆51 · Apr 13, 2026 · Updated 2 weeks ago
- Extract a single expert from a Mixture of Experts model using slerp interpolation. ☆19 · May 26, 2024 · Updated last year
- Official implementation of Half-Quadratic Quantization (HQQ) ☆931 · Feb 26, 2026 · Updated 2 months ago
- LLM model quantization (compression) toolkit with HW acceleration support for Nvidia, AMD, Intel GPU and Intel/AMD/Apple CPU via HF, vLLM… ☆1,121 · Updated this week
- ☆27 · Nov 13, 2025 · Updated 5 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · May 4, 2025 · Updated 11 months ago
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆391 · Apr 13, 2025 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models ☆181 · May 2, 2024 · Updated last year
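Several of the entries above (QuIP, QuaRot, and the fast-Hadamard-transform CUDA kernel) are built around Hadamard rotations, which spread weight outliers evenly across dimensions before low-bit quantization. As a rough illustration only (not code from any of the listed repositories), an orthonormal fast Walsh-Hadamard transform can be sketched in plain NumPy; the `fwht` helper below is a hypothetical name:

```python
import numpy as np

def fwht(x: np.ndarray) -> np.ndarray:
    """Orthonormal fast Walsh-Hadamard transform.

    Input length must be a power of two. Runs the classic
    O(n log n) butterfly over a copy of the input.
    """
    x = np.asarray(x, dtype=np.float64).copy()
    n = x.shape[0]
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        # Butterfly: combine elements h apart with (+, -) pairs.
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x / np.sqrt(n)  # 1/sqrt(n) scaling makes the transform orthonormal

# Orthonormality means applying the transform twice recovers the input,
# so the same routine serves as its own inverse.
v = np.array([1.0, 0.0, 0.0, 0.0])
print(fwht(v))        # spreads a single spike evenly: [0.5, 0.5, 0.5, 0.5]
print(fwht(fwht(v)))  # round-trips back to the original vector
```

The CUDA kernel listed above presumably implements the same butterfly, fused and batched for GPU tensors; the Python loops here are purely for clarity.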