intel / neural-speed
An innovative library for efficient LLM inference via low-bit quantization
☆349 · Updated 9 months ago
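As background for the theme these projects share, below is a minimal NumPy sketch of symmetric 4-bit weight quantization, the basic idea behind low-bit LLM inference. The function names and per-tensor scaling are illustrative assumptions for this sketch, not neural-speed's actual API; production libraries typically use per-group or per-channel scales and packed storage.

```python
import numpy as np

def quantize_int4(weights: np.ndarray):
    """Symmetric 4-bit quantization sketch (illustrative; not neural-speed's API)."""
    # One scale for the whole tensor; real kernels use per-group/per-channel scales.
    scale = max(np.abs(weights).max() / 7.0, 1e-8)  # int4 range is [-8, 7]
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover an approximation of the original FP32 weights.
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, s = quantize_int4(w)
print("max reconstruction error:", np.abs(w - dequantize_int4(q, s)).max())
```

Packing two 4-bit values per byte cuts weight memory roughly 4x versus FP16; since LLM inference is largely memory-bandwidth bound, this is where the speedups targeted by the libraries listed below come from.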
Alternatives and similar repositories for neural-speed
Users interested in neural-speed are comparing it to the libraries listed below.
- Advanced Quantization Algorithm for LLMs and VLMs, with support for CPU, Intel GPU, CUDA and HPU. Seamlessly integrated with Torchao, Tra… ☆525 · Updated this week
- Official implementation of Half-Quadratic Quantization (HQQ) ☆832 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆264 · Updated 8 months ago
- Easy and Efficient Quantization for Transformers ☆199 · Updated 4 months ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆304 · Updated 3 weeks ago
- ☆541 · Updated 7 months ago
- For releasing code related to compression methods for transformers, accompanying our publications ☆431 · Updated 5 months ago
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, and easy export to onnx/onnx-runtime ☆172 · Updated 2 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆126 · Updated this week
- ☆194 · Updated last month
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 8 months ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆846 · Updated 9 months ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆87 · Updated this week
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆188 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆473 · Updated this week
- ☆55 · Updated 9 months ago
- GPTQ inference Triton kernel ☆302 · Updated 2 years ago
- Comparison of Language Model Inference Engines ☆217 · Updated 6 months ago
- ☆118 · Updated last year
- VPTQ, a flexible and extreme low-bit quantization algorithm ☆643 · Updated last month
- Code for paper: "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆370 · Updated last year
- ☆213 · Updated 5 months ago
- Scalable and robust tree-based speculative decoding algorithm ☆348 · Updated 4 months ago
- Python bindings for ggml ☆141 · Updated 9 months ago
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models ☆137 · Updated 10 months ago
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration ☆213 · Updated 7 months ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆359 · Updated 10 months ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆238 · Updated last year
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆705 · Updated 3 months ago
- A throughput-oriented high-performance serving framework for LLMs ☆825 · Updated 2 weeks ago