QuIP quantization
☆64 · Mar 17, 2024 · Updated 2 years ago
Alternatives and similar repositories for QuIP-for-all
Users interested in QuIP-for-all are comparing it to the libraries listed below.
- EXL2 quantization generalized to other models. ☆10 · Mar 17, 2024 · Updated 2 years ago
- ☆586 · Oct 29, 2024 · Updated last year
- ☆167 · Jun 22, 2025 · Updated 9 months ago
- PyTorch implementation of our paper accepted by ICML 2023 -- "Bi-directional Masks for Efficient N:M Sparse Training" ☆13 · Jun 7, 2023 · Updated 2 years ago
- Code for NeurIPS 2024 paper: QuaRot, an end-to-end 4-bit inference of large language models. ☆498 · Nov 26, 2024 · Updated last year
- A tool for model sparsification based on torch.fx ☆13 · Jun 3, 2024 · Updated last year
- llama.cpp to PyTorch Converter ☆36 · Apr 8, 2024 · Updated 2 years ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆174 · Nov 11, 2025 · Updated 4 months ago
- An unsupervised model merging algorithm for Transformer-based language models. ☆108 · Apr 29, 2024 · Updated last year
- ☆74 · Jun 20, 2025 · Updated 9 months ago
- Code for paper: "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆396 · Feb 24, 2024 · Updated 2 years ago
- Fast Hadamard transform in CUDA, with a PyTorch interface ☆304 · Mar 10, 2026 · Updated 3 weeks ago
- [ICLR 2026] ParoQuant: Pairwise Rotation Quantization for Efficient Reasoning LLM Inference ☆176 · Updated this week
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆154 · Aug 21, 2025 · Updated 7 months ago
- ☆21 · Feb 5, 2024 · Updated 2 years ago
- An OpenAI API compatible LLM inference server based on ExLlamaV2. ☆25 · Feb 9, 2024 · Updated 2 years ago
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆336 · Nov 26, 2025 · Updated 4 months ago
- Learning Accurate Decision Trees with Bandit Feedback via Quantized Gradient Descent ☆16 · Sep 8, 2022 · Updated 3 years ago
- A general 2-8 bits quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, and export to onnx/onnx-runtime easily. ☆187 · Mar 23, 2026 · Updated 2 weeks ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆132 · Jun 25, 2024 · Updated last year
- [ACL 24 Findings] Implementation of Resonance RoPE and the PosGen synthetic dataset. ☆24 · Mar 5, 2024 · Updated 2 years ago
- ☆87 · Jan 23, 2025 · Updated last year
- Official implementation for Training LLMs with MXFP4 ☆123 · Apr 25, 2025 · Updated 11 months ago
- Using modal.com to process FineWeb-edu data ☆20 · Updated this week
- Measuring the Signal to Noise Ratio in Language Model Evaluation ☆29 · Aug 19, 2025 · Updated 7 months ago
- ☆18 · Jul 3, 2025 · Updated 9 months ago
- Reorder-based post-training quantization for large language model ☆199 · May 17, 2023 · Updated 2 years ago
- A community list of common phrases generated by GPT and Claude models ☆81 · Nov 19, 2023 · Updated 2 years ago
- ☆120 · Mar 18, 2026 · Updated 3 weeks ago
- Official PyTorch implementation of "GuidedQuant: Large Language Model Quantization via Exploiting End Loss Guidance" (ICML 2025) ☆51 · Jul 6, 2025 · Updated 9 months ago
- Prompt Jinja2 templates for LLMs ☆35 · Jul 9, 2025 · Updated 9 months ago
- LLM model quantization (compression) toolkit with hw acceleration support for Nvidia CUDA, AMD ROCm, Intel XPU and Intel/AMD/Apple CPU vi… ☆1,085 · Updated this week
- Official implementation of Half-Quadratic Quantization (HQQ) ☆925 · Feb 26, 2026 · Updated last month
- ☆27 · Nov 13, 2025 · Updated 4 months ago
- LLM that can be trained on 1 or more GPUs for research. ☆37 · Updated this week
- The official implementation of the ICML 2023 paper OFQ-ViT ☆39 · Oct 3, 2023 · Updated 2 years ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · May 4, 2025 · Updated 11 months ago
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆390 · Apr 13, 2025 · Updated 11 months ago
- Low-Rank adapter extraction for fine-tuned transformers models ☆181 · May 2, 2024 · Updated last year