Cornell-RelaxML / quip-sharp
☆517 · Updated 3 months ago
Alternatives and similar repositories for quip-sharp:
Users interested in quip-sharp are comparing it to the libraries listed below.
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆358 · Updated 11 months ago
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆669 · Updated 5 months ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆689 · Updated 4 months ago
- [ICLR 2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs. ☆757 · Updated 3 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆737 · Updated 2 weeks ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆263 · Updated last year
- Code for compression methods for transformers, accompanying the authors' publications ☆405 · Updated last week
- GPTQ inference Triton kernel ☆292 · Updated last year
- Code for the NeurIPS 2024 paper on QuaRot, end-to-end 4-bit inference for large language models. ☆321 · Updated 2 months ago
- A scalable and robust tree-based speculative decoding algorithm ☆331 · Updated this week
- Official PyTorch implementation of QA-LoRA ☆122 · Updated 10 months ago
- Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot". ☆762 · Updated 5 months ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆327 · Updated 5 months ago
- QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving ☆492 · Updated this week
- Code repo for the paper "SpinQuant: LLM quantization with learned rotations" ☆207 · Updated 2 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 3 months ago
- EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆241 · Updated 3 months ago
- Official Implementation of EAGLE-1 (ICML'24) and EAGLE-2 (EMNLP'24) ☆928 · Updated 3 weeks ago
- Reorder-based post-training quantization for large language models ☆184 · Updated last year
- A simple and effective LLM pruning approach. ☆707 · Updated 5 months ago
- Production-ready LLM model compression/quantization toolkit with accelerated inference support for both CPU/GPU via HF, vLLM, and SGLang. ☆229 · Updated this week
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,183 · Updated 3 months ago
- [ICML 2024] BiLLM: Pushing the Limit of Post-Training Quantization for LLMs ☆206 · Updated 2 weeks ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆581 · Updated 10 months ago
- [ICML 2024] CLLMs: Consistency Large Language Models ☆368 · Updated 2 months ago
- Advanced Quantization Algorithm for LLMs/VLMs. ☆362 · Updated this week
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆212 · Updated 9 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆190 · Updated 6 months ago
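
Most of the repositories above refine post-training weight quantization in some way. For context, below is a minimal sketch of the round-to-nearest (RTN) baseline such methods are typically compared against. This is a generic illustration, not code from any of the listed projects; the `bits` parameter and the symmetric per-output-channel scaling are assumptions made for the example.

```python
# Minimal sketch of round-to-nearest (RTN) weight-only quantization:
# the simple baseline that methods like QuIP, OmniQuant, and HQQ improve on.
import torch

def rtn_quantize(w: torch.Tensor, bits: int = 4):
    """Symmetric per-output-channel RTN quantization of a 2-D weight matrix."""
    qmax = 2 ** (bits - 1) - 1                        # e.g. 7 for 4-bit
    scale = w.abs().amax(dim=1, keepdim=True) / qmax  # one scale per row
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return q.to(torch.int8), scale

def rtn_dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Map integer codes back to approximate floating-point weights."""
    return q.float() * scale

# Quantize a stand-in weight matrix and measure the reconstruction error.
w = torch.randn(4096, 4096)
q, s = rtn_quantize(w, bits=4)
w_hat = rtn_dequantize(q, s)
print(f"mean abs error: {(w - w_hat).abs().mean():.5f}")
```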