neuralmagic / nm-vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
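Since nm-vllm advertises compatibility with the upstream vLLM Python API, a minimal offline-generation sketch might look like the following (the model name is illustrative, and the assumption that `pip install nm-vllm` exposes the `vllm` module follows from that advertised compatibility):

```python
# Minimal offline-generation sketch, assuming nm-vllm exposes the
# upstream vLLM Python API. The model name is illustrative; any
# Hugging Face causal LM supported by vLLM should work.
from vllm import LLM, SamplingParams

prompts = ["The key advantage of paged attention is"]
sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="facebook/opt-125m")  # downloads weights on first run
for output in llm.generate(prompts, sampling):
    print(output.outputs[0].text)
```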
Related projects:
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM
- Easy and Efficient Quantization for Transformers
- Advanced quantization algorithm for LLMs; the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for t…"
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens
- KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
- A collection of all available inference solutions for LLMs
- For releasing code related to compression methods for transformers, accompanying our publications
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ, with easy export to ONNX/ONNX Runtime
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees"
- A throughput-oriented high-performance serving framework for LLMs
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O…
- QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models"
- GPTQ inference Triton kernel
- An innovative library for efficient LLM inference via low-bit quantization
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs
- A scalable and robust tree-based speculative decoding algorithm
- Easy and lightning-fast training of 🤗 Transformers on Habana Gaudi processors (HPU)
- Comparison of Language Model Inference Engines
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization
- Fast Inference of MoE Models with CPU-GPU Orchestration
- Code for QuaRot, an end-to-end 4-bit inference scheme for large language models
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs"
- A family of compressed models obtained via pruning and knowledge distillation
- Applied AI experiments and examples for PyTorch