AlpinDale / QuIP-for-Llama
Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees", adapted for Llama models.
☆36 · Updated last year
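For context on what QuIP does: its core trick is "incoherence processing", multiplying a weight matrix by random orthogonal matrices before rounding so that no single entry or direction dominates, which makes aggressive 2-bit rounding far less damaging. The NumPy sketch below is a toy illustration of that effect with plain round-to-nearest, not the repository's actual implementation (QuIP additionally uses LDLQ adaptive rounding); the helper names are hypothetical.

```python
# Toy sketch of QuIP-style "incoherence processing": quantizing in a
# randomly rotated basis usually hurts far less than quantizing directly
# when the weight matrix has outlier structure. Illustration only; the
# real QuIP pipeline uses LDLQ adaptive rounding on top of this idea.
import numpy as np

def random_orthogonal(n, rng):
    # QR decomposition of a Gaussian matrix gives a random orthogonal matrix.
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

def quantize_2bit(w):
    # Symmetric 2-bit round-to-nearest: 4 levels at {-1.5,-0.5,0.5,1.5}*scale.
    scale = np.abs(w).max() / 1.5
    return (np.clip(np.round(w / scale - 0.5), -2, 1) + 0.5) * scale

rng = np.random.default_rng(0)
# Columns with widely varying scales stand in for outlier structure.
w = rng.standard_normal((64, 64)) * np.logspace(0, 1, 64)

u, v = random_orthogonal(64, rng), random_orthogonal(64, rng)
w_plain = quantize_2bit(w)                      # round in the original basis
w_rot = u @ quantize_2bit(u.T @ w @ v) @ v.T    # rotate, round, rotate back

print("plain 2-bit error:  ", np.linalg.norm(w - w_plain))
print("rotated 2-bit error:", np.linalg.norm(w - w_rot))
```

On this toy matrix the rotated quantization should show a noticeably smaller Frobenius error; the paper constructs its rotations so they are cheap to apply and fold away at inference time.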
Alternatives and similar repositories for QuIP-for-Llama:
Users who are interested in QuIP-for-Llama are comparing it to the libraries listed below.
- QuIP quantization ☆48 · Updated 10 months ago
- A toolkit for fine-tuning, running inference with, and evaluating GreenBitAI's LLMs ☆80 · Updated last week
- RWKV-7: Surpassing GPT ☆76 · Updated 2 months ago
- ☆104 · Updated last month
- PB-LLM: Partially Binarized Large Language Models ☆150 · Updated last year
- Advanced Ultra-Low Bitrate Compression Techniques for the LLaMA Family of LLMs ☆111 · Updated last year
- Reorder-based post-training quantization for large language models ☆184 · Updated last year
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆360 · Updated 11 months ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆265 · Updated last year
- ☆117 · Updated 9 months ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆331 · Updated 6 months ago
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, and easy export to ONNX/ONNX Runtime ☆158 · Updated this week
- EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆244 · Updated 4 months ago
- Spherical merging of PyTorch/HF-format language models with minimal feature loss ☆115 · Updated last year
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆64 · Updated 5 months ago
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with LLaMA implementation.☆71Updated last year
- ☆43 · Updated 3 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆96 · Updated 4 months ago
- Repository for CPU Kernel Generation for LLM Inference ☆25 · Updated last year
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆101 · Updated 4 months ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆99 · Updated last year
- ☆52 · Updated 8 months ago
- GPTQ inference Triton kernel ☆295 · Updated last year
- KV cache compression for high-throughput LLM inference ☆114 · Updated last week
- ☆44 · Updated 6 months ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆88 · Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆73 · Updated this week
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- Experiments on speculative sampling with Llama models ☆124 · Updated last year
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆217 · Updated 9 months ago