microsoft / batch-inference
Dynamic batching library for Deep Learning inference, with tutorials for LLM and GPT scenarios.
☆94 · Updated 7 months ago
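The core idea behind a dynamic batching server is that requests arriving close together are queued and flushed to the model as a single batch, either when the batch fills up or when a short wait budget expires, so the model sees larger batches under load. The sketch below illustrates that pattern in plain Python; it is a conceptual illustration only, and the class, method names, and parameters are hypothetical, not batch-inference's actual API.

```python
# Conceptual sketch of dynamic batching (hypothetical names, not batch-inference's API):
# requests are queued and flushed to the model when the batch is full or a short
# wait budget expires, so concurrent callers share one model invocation.
import concurrent.futures
import queue
import threading
import time


class DynamicBatcher:
    def __init__(self, predict_batch, max_batch_size=8, max_wait_ms=5.0):
        self.predict_batch = predict_batch      # callable: list of inputs -> list of outputs
        self.max_batch_size = max_batch_size
        self.max_wait_s = max_wait_ms / 1000.0
        self.requests = queue.Queue()           # items are (input, result holder)
        threading.Thread(target=self._loop, daemon=True).start()

    def submit(self, x):
        """Block until the batched model call produces a result for x."""
        holder = {"done": threading.Event()}
        self.requests.put((x, holder))
        holder["done"].wait()
        return holder["result"]

    def _loop(self):
        while True:
            # Wait for the first request, then collect more until the batch is
            # full or the wait budget is spent.
            batch = [self.requests.get()]
            deadline = time.monotonic() + self.max_wait_s
            while len(batch) < self.max_batch_size:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    break
                try:
                    batch.append(self.requests.get(timeout=remaining))
                except queue.Empty:
                    break
            inputs = [x for x, _ in batch]
            outputs = self.predict_batch(inputs)  # one model call for the whole batch
            for (_, holder), y in zip(batch, outputs):
                holder["result"] = y
                holder["done"].set()


if __name__ == "__main__":
    # Stand-in for a model forward pass; concurrent submits get grouped into batches.
    batcher = DynamicBatcher(lambda xs: [x * 2 for x in xs], max_batch_size=4)
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        print(list(pool.map(batcher.submit, range(8))))
```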
Alternatives and similar repositories for batch-inference:
Users interested in batch-inference are comparing it to the libraries listed below.
- ☆116 · Updated last year
- Easy and Efficient Quantization for Transformers ☆193 · Updated last month
- Experiments with inference on LLaMA ☆104 · Updated 9 months ago
- Benchmark suite for LLMs from Fireworks.ai ☆69 · Updated last month
- ☆237 · Updated last week
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, and easy export to ONNX/ONNX Runtime ☆162 · Updated 2 weeks ago
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆201 · Updated 7 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆262 · Updated 5 months ago
- ☆180 · Updated 5 months ago
- The Triton backend for the ONNX Runtime ☆140 · Updated last week
- ☆48 · Updated 4 months ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆289 · Updated last month
- Comparison of Language Model Inference Engines ☆208 · Updated 3 months ago
- GPTQ inference Triton kernel ☆297 · Updated last year
- The Triton backend for PyTorch TorchScript models ☆144 · Updated last week
- ☆117 · Updated 10 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆109 · Updated 3 months ago
- vLLM performance dashboard ☆23 · Updated 10 months ago
- Easy and lightning-fast training of 🤗 Transformers on the Habana Gaudi processor (HPU) ☆177 · Updated this week
- A text generation method that returns a generator, streaming out each token in real time during inference, based on Huggingface/… ☆95 · Updated last year
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind ☆90 · Updated last year
- Code for KaLM-Embedding models ☆74 · Updated this week
- ☆54 · Updated 6 months ago
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆224 · Updated this week
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆336 · Updated 7 months ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆87 · Updated this week
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆272 · Updated last year
- A low-latency & high-throughput serving engine for LLMs ☆325 · Updated last month
- Manage scalable open LLM inference endpoints in Slurm clusters ☆253 · Updated 8 months ago