lapp0 / lm-inference-engines
Comparison of Language Model Inference Engines
☆208 · Updated 3 months ago
Alternatives and similar repositories for lm-inference-engines:
Users interested in lm-inference-engines are comparing it to the libraries listed below.
- ☆238 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆262 · Updated 5 months ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆771 · Updated 6 months ago
- ☆180 · Updated 5 months ago
- OpenAI compatible API for TensorRT LLM triton backend ☆201 · Updated 7 months ago
- A throughput-oriented high-performance serving framework for LLMs ☆773 · Updated 6 months ago
- Easy and Efficient Quantization for Transformers ☆193 · Updated last month
- A bagel, with everything. ☆317 · Updated 11 months ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆336 · Updated 7 months ago
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆224 · Updated this week
- ☆512 · Updated 7 months ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆289 · Updated last month
- ☆49 · Updated 4 months ago
- ☆449 · Updated last year
- An innovative library for efficient LLM inference via low-bit quantization ☆351 · Updated 6 months ago
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, and easy export to onnx/onnx-runtime ☆163 · Updated 2 weeks ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆770 · Updated this week
- ☆54 · Updated 6 months ago
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆1,103 · Updated this week
- LLMPerf is a library for validating and benchmarking LLMs ☆826 · Updated 3 months ago
- GPTQ inference Triton kernel ☆298 · Updated last year
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. ☆136 · Updated 7 months ago
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆611 · Updated 2 weeks ago
- A collection of all available inference solutions for LLMs ☆81 · Updated 3 weeks ago
- ☆528 · Updated 4 months ago
- Scalable and robust tree-based speculative decoding algorithm ☆339 · Updated last month
- ☆116 · Updated 11 months ago
- Redis for LLMs ☆624 · Updated this week
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆226 · Updated 11 months ago
- Experiments on speculative sampling with Llama models ☆125 · Updated last year