intel/neural-speed
An innovative library for efficient LLM inference via low-bit quantization
☆350 · Updated 8 months ago
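Most of the libraries on this page revolve around the same core idea: store weights at 4 bits or fewer and dequantize on the fly. As a rough, library-agnostic sketch (not neural-speed's actual kernels), here is round-to-nearest INT4 weight quantization with per-group scales in plain NumPy; the `group_size` and the asymmetric min/max scheme are illustrative choices, not any particular library's defaults.

```python
import numpy as np

def quantize_int4(weights: np.ndarray, group_size: int = 32):
    """Round-to-nearest asymmetric INT4 quantization with per-group scales.

    An illustration of the general technique only; real libraries
    (neural-speed, HQQ, GPTQ, ...) use more sophisticated schemes.
    """
    w = weights.reshape(-1, group_size)
    w_min = w.min(axis=1, keepdims=True)
    w_max = w.max(axis=1, keepdims=True)
    scale = np.maximum((w_max - w_min) / 15.0, 1e-8)  # 4 bits -> 16 levels
    q = np.clip(np.round((w - w_min) / scale), 0, 15).astype(np.uint8)
    return q, scale, w_min

def dequantize_int4(q, scale, w_min, shape):
    """Map INT4 codes back to float32 for computation."""
    return (q.astype(np.float32) * scale + w_min).reshape(shape)

w = np.random.randn(64, 64).astype(np.float32)
q, s, z = quantize_int4(w)
w_hat = dequantize_int4(q, s, z, w.shape)
print("mean abs reconstruction error:", np.abs(w - w_hat).mean())
```

Packing two such 4-bit codes per byte is what yields the roughly 4x memory reduction over FP16 that several of the projects below advertise.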
Alternatives and similar repositories for neural-speed
Users interested in neural-speed are comparing it to the libraries listed below.
- Official implementation of Half-Quadratic Quantization (HQQ) ☆807 · Updated this week
- Advanced Quantization Algorithm for LLMs/VLMs. ☆454 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆263 · Updated 7 months ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆818 · Updated 8 months ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆300 · Updated this week
- Easy and lightning-fast training of 🤗 Transformers on the Habana Gaudi processor (HPU) ☆186 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆462 · Updated this week
- ☆532 · Updated 6 months ago
- For releasing code related to compression methods for transformers, accompanying our publications ☆427 · Updated 3 months ago
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆205 · Updated 9 months ago
- Comparison of Language Model Inference Engines ☆217 · Updated 4 months ago
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, and easy export to ONNX/ONNX Runtime ☆168 · Updated last month
- A throughput-oriented high-performance serving framework for LLMs ☆805 · Updated last week
- Easy and Efficient Quantization for Transformers ☆197 · Updated 3 months ago
- ☆188 · Updated last week
- ☆253 · Updated this week
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 6 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆109 · Updated this week
- Efficient LLM Inference over Long Sequences ☆373 · Updated 2 weeks ago
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆274 · Updated last year
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆663 · Updated 2 months ago
- Production-ready LLM model compression/quantization toolkit with HW-accelerated inference support for both CPU/GPU via HF, vLLM, and SGLa… ☆537 · Updated this week
- VPTQ, a flexible and extreme low-bit quantization algorithm ☆632 · Updated 2 weeks ago
- GPTQ inference Triton kernel ☆298 · Updated last year
- Code for paper: "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆362 · Updated last year
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆348 · Updated 9 months ago
- EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆266 · Updated 7 months ago
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆1,316 · Updated this week
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆687 · Updated 9 months ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆86 · Updated this week