AlpinDale / QuIP-for-Llama
Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees", adapted for Llama models
☆36 · Updated last year
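For orientation, here is a minimal sketch of the naive baseline that QuIP and the quantization repositories below improve on: min-max affine 2-bit weight quantization with round-to-nearest (RTN), applied per output row. This is not QuIP's algorithm; QuIP's contribution is exactly to beat this kind of rounding via incoherence processing (random orthogonal transforms) and adaptive rounding. All names in the sketch are illustrative, not taken from the QuIP-for-Llama codebase.

```python
# Toy round-to-nearest 2-bit weight quantization (per output row).
# Sketch only; QuIP adds incoherence processing and adaptive rounding
# on top of a step like this. Names here are illustrative.
import torch

def quantize_2bit_rtn(w: torch.Tensor):
    """Quantize each row of a weight matrix to the 4 levels {0, 1, 2, 3}."""
    w_min = w.amin(dim=1, keepdim=True)
    w_max = w.amax(dim=1, keepdim=True)
    # Per-row scale maps the value range onto the 2-bit grid (2^2 - 1 = 3 steps).
    scale = (w_max - w_min).clamp(min=1e-8) / 3.0
    q = torch.round((w - w_min) / scale).clamp(0, 3).to(torch.uint8)
    return q, scale, w_min

def dequantize_2bit(q, scale, w_min):
    """Map 2-bit codes back to approximate float weights."""
    return q.to(torch.float32) * scale + w_min

# Round-trip a random weight matrix and report the reconstruction error.
w = torch.randn(8, 64)
q, scale, zero = quantize_2bit_rtn(w)
w_hat = dequantize_2bit(q, scale, zero)
print("mean abs error:", (w - w_hat).abs().mean().item())
```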
Alternatives and similar repositories for QuIP-for-Llama:
Users interested in QuIP-for-Llama are comparing it to the repositories listed below.
- QuIP quantization ☆52 · Updated last year
- A toolkit for fine-tuning, running inference with, and evaluating GreenBitAI's LLMs. ☆79 · Updated last week
- PB-LLM: Partially Binarized Large Language Models ☆151 · Updated last year
- RWKV-7: Surpassing GPT ☆81 · Updated 4 months ago
- ☆112 · Updated 2 months ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆272 · Updated last year
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆362 · Updated last year
- ☆117 · Updated 10 months ago
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with a LLaMA implementation. ☆71 · Updated last year
- SparseGPT + GPTQ compression of LLMs like LLaMa, OPT, Pythia ☆41 · Updated 2 years ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆99 · Updated last year
- Reorder-based post-training quantization for large language models ☆185 · Updated last year
- EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆255 · Updated 5 months ago
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, with easy export to ONNX/ONNX Runtime. ☆162 · Updated last week
- Advanced Ultra-Low Bitrate Compression Techniques for the LLaMA Family of LLMs ☆111 · Updated last year
- ☆49 · Updated 4 months ago
- Repository for sparse fine-tuning of LLMs via a modified version of MosaicML's llmfoundry ☆40 · Updated last year
- Boosting 4-bit inference kernels with 2:4 sparsity ☆68 · Updated 6 months ago
- Spherical merging of PyTorch/HF-format language models with minimal feature loss. ☆117 · Updated last year
- Experiments on speculative sampling with Llama models ☆125 · Updated last year
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 5 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆97 · Updated 5 months ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆87 · Updated this week
- ☆53 · Updated 9 months ago
- Repository for CPU Kernel Generation for LLM Inference ☆25 · Updated last year
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆109 · Updated 3 months ago
- 1.58-bit LLaMa model ☆82 · Updated 11 months ago
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache ☆279 · Updated 2 months ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆225 · Updated 10 months ago